Prometheus exporter metrics' labels missing #48
The /flowdata metrics don't seem to be entirely reliable: the direction label always indicates "Incoming" (regardless of the "- segment: remoteaddress" configuration and the actual state; by the way, the documentation is a bit vague on this one), and protoname is always empty, as is remotecountry (again, regardless of the "- segment: geolocation" configuration, with a database provided). Furthermore, correct me if I'm wrong, but shouldn't the /flowdata metrics be periodically flushed/cleared? There is no point in scraping old flow data again and again, and without that, the number of records to scrape quickly grows quite large, especially if we keep a record of ports.

It's also quite possible that none of the above are actual bugs, and I just missed some important piece of documentation; in that case, please point me in the right direction.
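For context, a minimal pipeline sketch of the kind of setup described here (the segment names appear in CONFIGURATION.md; the individual config keys are assumptions on my part and should be checked against each segment's documentation):

```yaml
# Hypothetical flowpipeline config illustrating the setup above.
# Segment names are real; the config keys are unverified assumptions.
- segment: goflow            # flow input
- segment: remoteaddress
  config:
    policy: cidr             # assumption: selects how the remote side is determined
- segment: geolocation
  config:
    filename: GeoLite2-Country.mmdb   # assumption: path to a MaxMind-style DB
- segment: prometheus
  config:
    endpoint: ":8080"        # assumption: where /metrics and /flowdata are served
```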
Hi, thanks for trying out flowpipeline! Regarding your first two issues: have you checked whether the actual annotation is done to your liking? That is, are the missing fields populated in the output of the json segment, for instance? Anyhow, I'll be rechecking its implementation.

Regarding the issue of exporter cache clearing: the Prometheus exporter guidelines say that one should not worry about clearing. This is of course not 100% applicable in our case, where label cardinality can explode very easily. When we implemented the segment, we thought about that, but in the scenarios we envisioned the prometheus segment being used in, the stream is either already very tightly filtered, coming from a very specific interface, or not meant to be running permanently. Generally, I'd say Prometheus is not a good match for large-scale, high-cardinality flow retention. What do you think would be an appropriate way of clearing the exporter cache? Tracking individual counter update times and clearing after x minutes without activity?

Regarding the vague documentation of the remoteaddress segment: besides https://github.com/bwNetFlow/flowpipeline/blob/master/CONFIGURATION.md#remoteaddress there is also the more detailed https://pkg.go.dev/github.com/bwNetFlow/flowpipeline/segments/modify/remoteaddress linked from CONFIGURATION.md.
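A rough sketch of that last idea, assuming client_golang's CounterVec API (the wrapper type, names, and NUL-joined keys are illustrative, not the segment's actual code):

```go
// Sketch of the "track counter update times" idea: remember when each
// label set was last updated and delete series that have been inactive
// for maxAge. Illustrative only, not the prometheus segment's real code.
package expiry

import (
	"strings"
	"sync"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

type seriesEntry struct {
	labels   []string
	lastSeen time.Time
}

type expiringCounterVec struct {
	vec    *prometheus.CounterVec
	mu     sync.Mutex
	series map[string]*seriesEntry
}

// newExpiringCounterVec wraps a CounterVec and starts a background sweep.
func newExpiringCounterVec(vec *prometheus.CounterVec, maxAge, sweepEvery time.Duration) *expiringCounterVec {
	e := &expiringCounterVec{vec: vec, series: make(map[string]*seriesEntry)}
	go func() {
		ticker := time.NewTicker(sweepEvery)
		defer ticker.Stop()
		for range ticker.C {
			e.expire(maxAge)
		}
	}()
	return e
}

// Add increments the counter and records that this label set is active.
func (e *expiringCounterVec) Add(v float64, labels ...string) {
	e.mu.Lock()
	defer e.mu.Unlock()
	// NUL-joined key; assumes label values never contain NUL bytes.
	key := strings.Join(labels, "\x00")
	if entry, ok := e.series[key]; ok {
		entry.lastSeen = time.Now()
	} else {
		e.series[key] = &seriesEntry{labels: labels, lastSeen: time.Now()}
	}
	e.vec.WithLabelValues(labels...).Add(v)
}

// expire drops every label combination that has been inactive for maxAge.
func (e *expiringCounterVec) expire(maxAge time.Duration) {
	e.mu.Lock()
	defer e.mu.Unlock()
	for key, entry := range e.series {
		if time.Since(entry.lastSeen) > maxAge {
			e.vec.DeleteLabelValues(entry.labels...)
			delete(e.series, key)
		}
	}
}
```

A sweep like this keeps /flowdata bounded to the label sets that were active within maxAge, at the cost of one timestamp per series.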
Thank you for your explanation and help. After a bit more testing and experimentation, I have come to some conclusions that you may find useful:
When it comes to clearing the cache: for small and medium-sized projects, where Kafka would be overkill, a solution based solely on Prometheus isn't as bad an idea as it might seem, especially if you substitute VictoriaMetrics for Prometheus proper. The savings in system resources appear to be significant. The simplest clearing strategy I can think of would be based on fixed time intervals, flushing already-scraped metrics in step with the Prometheus scrape interval. Tracking the counters is an option as well.
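A minimal sketch of that fixed-interval strategy, again assuming client_golang's CounterVec (the function name and structure are illustrative):

```go
// Sketch of the fixed-interval strategy: drop all accumulated flow
// series every flushInterval. Pick flushInterval as a multiple of the
// Prometheus scrape interval so every sample is scraped at least once
// before it is flushed. Illustrative only.
package flush

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

func flushPeriodically(vec *prometheus.CounterVec, flushInterval time.Duration) {
	ticker := time.NewTicker(flushInterval)
	defer ticker.Stop()
	for range ticker.C {
		vec.Reset() // removes every label combination from the vector
	}
}
```

Since PromQL's rate() and increase() treat the resulting drop to zero as an ordinary counter reset, queries over the flushed counters keep working across flushes.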
Thanks for getting back on this.