Expose metrics using go-metrics #683
Comments
Sounds reasonable to me. I would start with a few easy ones as a proof of concept (for example bytes per producer request). Some of the others (such as ms spent in channels) would require tracking much more internal timing information, so I'd prefer to start with something less invasive.
Sounds good, I'll try to work on a PR so we can discuss the details.
- add MetricRegistry configuration parameter that defaults to metrics.DefaultRegistry
- provide the following metrics:
  - incoming-byte-rate meter (global and per registered broker)
  - request-rate meter (global and per registered broker)
  - request-size histogram (global and per registered broker)
  - outgoing-byte-rate meter (global and per registered broker)
  - response-rate meter (global and per registered broker)
  - response-size histogram (global and per registered broker)
  - batch-size histogram (global and per topic)
  - record-send-rate meter (global and per topic)
  - records-per-request histogram (global and per topic)
  - compression-rate histogram (global and per topic)
- add metrics flag to kafka-console-producer to output metrics
- validate metrics in functional_producer_test
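For reference, a minimal sketch of how metrics like the ones listed above could be read back from a go-metrics registry once the client has registered them. The metric names follow the list in this PR description, and the registry passed in is assumed to be the one the client was configured with (metrics.DefaultRegistry by default at the time of this thread); this is just an illustration, not part of the change itself.

```go
package main

import (
	"fmt"
	"os"

	metrics "github.com/rcrowley/go-metrics"
)

// dumpProducerMetrics reads a couple of the metrics named above from the
// registry the client was configured with.
func dumpProducerMetrics(registry metrics.Registry) {
	// Global request rate (one-minute moving average, requests per second).
	if m, ok := registry.Get("request-rate").(metrics.Meter); ok {
		fmt.Printf("request-rate: %.2f req/s\n", m.Rate1())
	}
	// Global request size distribution, in bytes.
	if h, ok := registry.Get("request-size").(metrics.Histogram); ok {
		fmt.Printf("request-size: mean=%.0f p95=%.0f bytes\n", h.Mean(), h.Percentile(0.95))
	}
	// Or dump everything registered so far in go-metrics' plain-text format.
	metrics.WriteOnce(registry, os.Stdout)
}

func main() {
	dumpProducerMetrics(metrics.DefaultRegistry)
}
```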
Maybe out of scope here, but Prometheus is also a popular metrics sink, and being able to configure either go-metrics or Prometheus metrics would be useful.
From what I have read, Prometheus provides a time series database and client integrations. I believe it is possible to publish metrics from go-metrics into Prometheus using their Go integration, but at the cost of losing data precision for anything other than a gauge or a counter. As far as I know this is an issue for Dropwizard's metrics library as well, and it might get tackled in a future version. Richard Crowley's go-metrics library provides a simple yet powerful abstraction to publish metrics to most popular time series databases. What do you think @eapache?
go-metrics provides the metrics abstraction I was kind of expecting. Unless someone provides a compelling argument that direct Prometheus integration provides substantial benefit, I can't see it being worth the effort.
May I suggest defaulting to something like a dedicated, per-client registry instead of the global metrics.DefaultRegistry? Of course, this is first and foremost on me for both not pinning Sarama's version and relying on the global registry. I understand, though, that it would be a breaking change to alter this behavior now that it has landed.
This is documented behavior: using the global registry allows for easy access to the metrics and is the sensible default when using go-metrics, but we provide a way to use a custom one via the MetricRegistry configuration parameter, which is probably what you ended up doing. That being said, this new feature is not part of an official release yet, so changing it would not break backward compatibility, at least for people relying on tagged releases.
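For anyone reading along, a minimal sketch of opting out of the global registry by handing Sarama a private one, assuming the MetricRegistry configuration field discussed in this thread, the Shopify import path used at the time, and a placeholder broker address:

```go
package main

import (
	"log"
	"os"

	"github.com/Shopify/sarama"
	metrics "github.com/rcrowley/go-metrics"
)

func main() {
	config := sarama.NewConfig()
	// Keep this client's metrics out of the process-wide metrics.DefaultRegistry.
	config.MetricRegistry = metrics.NewRegistry()

	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatalln(err)
	}
	defer producer.Close()

	// The client's metrics now live only in its private registry.
	metrics.WriteOnce(config.MetricRegistry, os.Stdout)
}
```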
I don't have a strong opinion here, but Sébastien is correct that we can still make breaking changes until I push a new major version with metrics. We only provide API stability guarantees between tagged versions. I suspect it would be safer to default to a custom registry, on the same principle that libraries should not e.g. register global command-line flags or log to the global logger by default. |
I created #744 to switch to a local registry. |
I believe this is now effectively done, please re-open if you still have work you want to track here. |
Sounds good, I will submit a separate PR for new metrics if necessary, but the existing ones are really handy for monitoring and tuning producers. Thanks for merging that feature @eapache.
Versions
Sarama Version: v1.9.0
Kafka Version: *
Go Version: *
Problem Description
Unlike the Java client, Sarama does not expose any internal metrics for monitoring or optimizing performance (e.g. latency, throughput).
Such metrics are key for tuning a Kafka producer over long fat networks, or just for benchmarking a consumer without relying on the broker metrics.
I was thinking of exposing a few useful producer metrics through Richard Crowley's Go port of Coda Hale's Metrics library.
That library has no transitive dependencies, provides built-in stats reporters (Graphite, OpenTSDB) as well as third-party ones (e.g. InfluxDB), and can be disabled entirely using the UseNilMetrics variable.
The idea would be to add a new public MetricRegistry Registry field to the Config struct, defaulting to metrics.DefaultRegistry, and to enrich it with metrics named similarly to the ones in the official documentation.
What do you think? Any ideas on which metrics need to be exposed first?
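Purely as illustration, a rough sketch of the shape of the proposed field and of the UseNilMetrics switch mentioned above. The Config struct here is a stand-in showing only the proposed metrics-related field, not Sarama's actual Config, and the metric name is taken from the list earlier in this thread:

```go
package main

import (
	"fmt"

	metrics "github.com/rcrowley/go-metrics"
)

// Config sketches only the proposed metrics-related field; the real
// sarama.Config carries many more options.
type Config struct {
	// MetricRegistry would receive all client metrics and default to
	// metrics.DefaultRegistry so they are collected out of the box.
	MetricRegistry metrics.Registry
}

func main() {
	// Applications that want zero metrics overhead can flip go-metrics'
	// global switch before any metric is created: constructors then
	// return no-op implementations.
	metrics.UseNilMetrics = true

	cfg := Config{MetricRegistry: metrics.DefaultRegistry}
	meter := metrics.GetOrRegisterMeter("record-send-rate", cfg.MetricRegistry)
	meter.Mark(1)
	fmt.Println("rate:", meter.Rate1()) // always 0 while UseNilMetrics is enabled
}
```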