[telemetry] Simplify disabling of prometheus endpoint for internal telemetry metrics #10919
Comments
The plot thickens. EDIT: Initially this comment contained some wrong assumptions; I cleaned it up now. Additionally, when I disable the Prometheus exporter by setting…
The warning will now show up by default with the featuregate moving to stable in #11091
This is a bug; I will spend some time investigating it.
I tested this via the current version of the collector with the config:
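(A minimal sketch of what such a config could look like, assuming `readers` under `service: telemetry: metrics:` with the Prometheus `address` disabled and a periodic OTLP reader; the protocol and endpoint below are placeholder values, not the ones from the test above.)

```yaml
service:
  telemetry:
    metrics:
      # disable the built-in Prometheus endpoint (one of the
      # approaches discussed in this issue)
      address: ""
      readers:
        - periodic:
            exporter:
              otlp:
                # placeholder values for illustration only
                protocol: http/protobuf
                endpoint: http://localhost:4318
```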
The Prometheus exporter is disabled as expected, and the OTLP exporter emitted these:
Note that the export of the metrics via OTLP happened over two different batches; I am not sure if that's important to your tests, @pirgeo.
@pirgeo I was able to reproduce behaviour similar to what you described by programmatically disabling the Prometheus exporter configured via…
Submitted a fix for the bug in #11093.
I think setting the address to null is not a great user experience, in the sense that I wouldn't expect to have to set a `readers` configuration for my Prometheus exporter and also set a second value to null. This feels confusing.
Similarly to the …
Maybe this one would be the easiest one from the standpoint of a user. If I want to use …
I would vote to do this and to introduce a feature gate that removes …
Ah, I think I just ran a second collector with the debug exporter and maybe didn't wait long enough to see whether data would show up. I'll try it again. Thanks for taking a look!
This bug caused proctelemetry metrics to not be registered if a user configured the Collector's internal telemetry via `readers` only and disabled `address`. The check in the `if` statement is no longer needed since a no-op meter provider will be configured unless the telemetry level is set. Mentioned in #10919.

Signed-off-by: Alex Boten <[email protected]>
As it is now possible to export internal telemetry metrics via OTLP, scraping the Prometheus metrics from the collector is no longer the only way to get that data. Since users will likely not want to export via both paths at once, they might want to disable the Prometheus endpoint.
Have a look at the instantiation code today. You have two ways of disabling the Prometheus endpoint:

1. Setting `service: telemetry: metrics: level: none`. However, this will immediately return a noop MeterProvider and thus disable all exporting (including OTLP-based exporting).
2. Setting `service: telemetry: metrics: address: ""` (to an empty string). The code checks whether the string in the address is longer than 0 characters. However, having to know that only the empty string disables the additional Prometheus exporter feels unintuitive to me and makes the collector config confusing to read if you don't know exactly why you need to specify an empty string.
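Spelled out as full YAML, the two options above look roughly like this (a sketch based on the behaviour described in this issue, not an authoritative reference):

```yaml
# Option 1: returns a noop MeterProvider and disables ALL internal
# metrics exporting, including OTLP-based exporting
service:
  telemetry:
    metrics:
      level: none
```

```yaml
# Option 2: the empty address string is the only value that skips
# starting the built-in Prometheus endpoint
service:
  telemetry:
    metrics:
      address: ""
```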
Describe the solution you'd like
If another exporter is explicitly defined, the Prometheus exporter is automatically disabled. However, it can be explicitly added alongside the OTLP exporter if required.
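As a sketch of what that could look like in config terms (hypothetical, since the exact semantics are what this issue is meant to decide): explicitly defining a reader would switch off the default Prometheus endpoint, and the endpoint could be brought back as an explicit reader next to the OTLP one. The protocol, endpoint, host, and port values below are placeholders.

```yaml
service:
  telemetry:
    metrics:
      readers:
        # Under this proposal, the presence of an explicit reader
        # would disable the default Prometheus endpoint.
        - periodic:
            exporter:
              otlp:
                protocol: http/protobuf
                endpoint: http://localhost:4318
        # If the Prometheus endpoint is still wanted, it could be
        # added back explicitly alongside the OTLP reader.
        - pull:
            exporter:
              prometheus:
                host: "0.0.0.0"
                port: 8888
```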
Describe alternatives you've considered
I am opening this issue to discuss possible solutions. Here are a few I came up with:

- Setting the address to `null` explicitly should disable the Prom exporter (see the sketch after this list).
- Setting the address to `"none"` or `"disabled"` should turn off the exporter. I don't like this option, though, as it uses magic strings and is not intuitive.
- Removing the `service: telemetry: metrics: address:` configuration option and requiring Prometheus exporters to be explicitly set up so they follow the same structure as the OTLP exporters. This could break backward compatibility, though, so we might need to introduce a feature gate.
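For the first alternative, the config would presumably look like the sketch below. Whether an explicit `null` is accepted and how it interacts with `readers` is exactly what is being discussed here; the OTLP reader values are placeholders.

```yaml
service:
  telemetry:
    metrics:
      # explicit null to opt out of the default Prometheus endpoint
      address: null
      readers:
        - periodic:
            exporter:
              otlp:
                protocol: http/protobuf
                endpoint: http://localhost:4318
```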