[receiver/prometheus] update github.com/prometheus/prometheus to v0.49.1 breaks build #30883
Comments
Pinging code owners for receiver/prometheus: @Aneurysm9 @dashpole. See Adding Labels via Comments if you do not have permissions to add labels yourself.
PR: #30881
It's a trivial change for the receiver to handle the returned error; I'm not familiar enough to know how much effort is required to pass in a prometheus.Registerer variable. Prometheus change that caused the issue: prometheus/prometheus@5752050
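For reference, a minimal sketch of the shape of that change, assuming the v0.49.x `scrape.NewManager` signature from prometheus/prometheus#12958 (it now takes a `prometheus.Registerer` and returns an error). The helper name and wiring are illustrative, not the receiver's actual code:

```go
package scrapesetup

import (
	"github.com/go-kit/log"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/prometheus/scrape"
	"github.com/prometheus/prometheus/storage"
)

// newScrapeManager is a hypothetical helper showing the two adaptations:
// supplying a prometheus.Registerer and handling the error NewManager now returns.
func newScrapeManager(logger log.Logger, app storage.Appendable) (*scrape.Manager, error) {
	// A dedicated registry keeps the scrape self-observability metrics out of
	// the default registry; a no-op Registerer (as the eventual fix uses)
	// would avoid storing them entirely.
	reg := prometheus.NewRegistry()

	mgr, err := scrape.NewManager(&scrape.Options{}, logger, app, reg)
	if err != nil {
		return nil, err
	}
	return mgr, nil
}
```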
Long story, but I'm working on updating the prometheus dependency: open-telemetry/opentelemetry-collector-contrib#30934. As part of that update, we need to adapt to a change that makes the Prometheus server's self-observability metrics independent; see prometheus/prometheus#13507 and prometheus/prometheus#13610.

One way to adapt to this change is to add a label to each receiver's metrics to differentiate one Prometheus receiver from another. I've tried taking that approach in open-telemetry/opentelemetry-collector-contrib#30934, but the current issue is that the NoOp components all have the same name, which causes the self-observability metrics to collide. I can work around this in the prometheus receiver's own tests, but not in the `generated_component_test.go` tests, since those are generated.

This PR makes the ID returned by `NewNopCreateSettings` unique by giving it a unique name (see the sketch below).

**Link to tracking Issue:** Part of open-telemetry/opentelemetry-collector-contrib#30883

cc @Aneurysm9

Co-authored-by: Alex Boten <[email protected]>
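For context, a rough sketch of the idea behind that change, not the actual patch; the exact `receivertest` and `component` constructors vary between collector versions, so the names below are assumptions:

```go
package receivertest

import (
	"github.com/google/uuid"

	"go.opentelemetry.io/collector/component"
	"go.opentelemetry.io/collector/component/componenttest"
	"go.opentelemetry.io/collector/receiver"
)

var nopType = component.MustNewType("nop")

// NewNopCreateSettings returns create settings whose ID carries a random,
// unique name, so the self-observability metrics of two nop receivers created
// by generated tests no longer collide.
func NewNopCreateSettings() receiver.CreateSettings {
	return receiver.CreateSettings{
		ID:                component.NewIDWithName(nopType, uuid.NewString()),
		TelemetrySettings: componenttest.NewNopTelemetrySettings(),
		BuildInfo:         component.NewDefaultBuildInfo(),
	}
}
```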
…30934) Fixes #30883

Prometheus PR we need to adapt to: prometheus/prometheus#12738. The configuration option we were using to enable protobuf has been removed, so this PR removes the `enable_protobuf_negotiation` option from the prometheus receiver. In its place, the Prometheus server now has a config option, `scrape_protocols`, which specifies which protocols it uses, including protobuf. Use `config.global.scrape_protocols = [ PrometheusProto, OpenMetricsText1.0.0, OpenMetricsText0.0.1, PrometheusText0.0.4 ]` to enable protobuf scraping instead of our option.

Prometheus PR we need to adapt to: prometheus/prometheus#12958. We now need to pass a prometheus.Registerer to the scrape manager to collect metrics about scrapes. This PR disables collection of these scrape metrics from the Prometheus server: we did not use the default registry, so we were previously collecting these metrics and never exposing them. This PR passes a no-op registerer to prevent using memory to store those metrics (see the sketch below). In the future, we can consider using the prometheus -> OTel bridge to export them through the self-observability pipeline.
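To make both adaptations concrete, here is a hedged Go sketch. The `ScrapeProtocol` constants come from the Prometheus `config` package; `nopRegisterer` is an illustrative stand-in for whatever no-op registerer the PR actually uses:

```go
package scrapesetup

import (
	"github.com/prometheus/client_golang/prometheus"
	promconfig "github.com/prometheus/prometheus/config"
)

// protobufFirstScrapeProtocols mirrors the replacement for the removed
// enable_protobuf_negotiation option: listing PrometheusProto first in
// config.global.scrape_protocols is what turns protobuf negotiation on.
var protobufFirstScrapeProtocols = []promconfig.ScrapeProtocol{
	promconfig.PrometheusProto,
	promconfig.OpenMetricsText1_0_0,
	promconfig.OpenMetricsText0_0_1,
	promconfig.PrometheusText0_0_4,
}

// nopRegisterer satisfies prometheus.Registerer while discarding every
// registration, so the scrape manager's self-observability metrics are never
// stored in memory.
type nopRegisterer struct{}

var _ prometheus.Registerer = nopRegisterer{}

func (nopRegisterer) Register(prometheus.Collector) error  { return nil }
func (nopRegisterer) MustRegister(...prometheus.Collector) {}
func (nopRegisterer) Unregister(prometheus.Collector) bool { return true }
```

In the collector's YAML, the same protocol list corresponds to setting `config.global.scrape_protocols: [PrometheusProto, OpenMetricsText1.0.0, OpenMetricsText0.0.1, PrometheusText0.0.4]` on the prometheus receiver.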
Component(s)
receiver/prometheus
What happened?
See https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/7715094549/job/21028827561?pr=30881
Collector version
mainline
Environment information
No response
OpenTelemetry Collector configuration
No response
Log output
No response
Additional context
No response