Automatically generated metrics and labels by prometheus are dropped by the prometheus receiver #2363
Comments
@alolita If these labels and metrics are being dropped intentionally, then is it fine if I file a PR to make this config-driven, so that they are not blocked when a particular config option is set?
In fact the receiver doesn't drop
In my use case, I would at least expose the job, to avoid metric conflicts (for example, go_memstats_alloc_bytes is common to almost all exporters, so the value collected by the otel collector makes no sense without a job) and to keep using "standard" Grafana dashboards, most of which are based on the job name. For now my workaround is:

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'otel-collector'
          scrape_interval: 5s
          static_configs:
            - targets: ['otel-collector:8888']
          relabel_configs:
            - action: replace
              replacement: otel-collector
              target_label: job
        - job_name: node-exporter
          static_configs:
            - targets: ['node-exporter:9100']
          relabel_configs:
            - action: labelmap
              regex: __(.+)
            # Trick because the otel collector does not expose the job
            - action: replace
              replacement: node-exporter
              target_label: job
        - job_name: es-exporter
          static_configs:
            - targets: ['es-exporter:9114']
          relabel_configs:
            # Trick because the otel collector does not expose the job
            - action: replace
              replacement: es-exporter
              target_label: job

This way I federate 3 exporters on the OTEL Collector and expose all their metrics with the prometheus exporter on a single endpoint. What could be great:
EDIT: As a separate task, it could be useful to automatically append the metric
For information, the native job label is added here: https://github.com/prometheus/prometheus/blob/df80dc4d3970121f2f76cba79050983ffb3cdbb0/scrape/target.go#L328
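For context, what that code implements is the standard Prometheus behaviour: unless relabeling overrides them, every scraped series gets a `job` label taken from the scrape config's `job_name` and an `instance` label derived from the target's `__address__`. A minimal plain-Prometheus sketch of those defaults (the target address here is only an example):

scrape_configs:
  - job_name: node-exporter              # becomes the `job` label on every series from this job
    static_configs:
      - targets: ['node-exporter:9100']  # becomes the `instance` label (via __address__)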
In fact... I'm wrong about something in my previous assumption 🤔
…2897) In Prometheus, `job` and `instance` are the two auto-generated labels; however, both are dropped by the prometheus receiver. Although this information is still available in `service.name` and `host`:`port`, it breaks the data contract for most Prometheus users (who use `job` and `instance` to consume metrics in their own systems). This PR adds `job` and `instance` as well-known labels in the prometheus receiver to fix the issue. **Link to tracking Issue:** #575 #2499 #2363 open-telemetry/prometheus-interoperability-spec#7
Duplicate of open-telemetry/prometheus-interoperability-spec#37.
…orter (#2979) This is a follow-up to #2897. Fixes #575 Fixes #2499 Fixes #2363 Fixes open-telemetry/prometheus-interoperability-spec#37 Fixes open-telemetry/prometheus-interoperability-spec#39 Fixes open-telemetry/prometheus-interoperability-spec#44

Passing compliance tests:

$ go test --tags=compliance -run "TestRemoteWrite/otelcollector/Job.+" -v ./
=== RUN   TestRemoteWrite
=== RUN   TestRemoteWrite/otelcollector
=== RUN   TestRemoteWrite/otelcollector/JobLabel
=== PAUSE TestRemoteWrite/otelcollector/JobLabel
=== CONT  TestRemoteWrite/otelcollector/JobLabel
--- PASS: TestRemoteWrite (10.02s)
    --- PASS: TestRemoteWrite/otelcollector (0.00s)
        --- PASS: TestRemoteWrite/otelcollector/JobLabel (10.02s)
PASS
ok      github.com/prometheus/compliance/remote_write  10.382s

$ go test --tags=compliance -run "TestRemoteWrite/otelcollector/Instance.+" -v ./
=== RUN   TestRemoteWrite
=== RUN   TestRemoteWrite/otelcollector
=== RUN   TestRemoteWrite/otelcollector/InstanceLabel
=== PAUSE TestRemoteWrite/otelcollector/InstanceLabel
=== CONT  TestRemoteWrite/otelcollector/InstanceLabel
--- PASS: TestRemoteWrite (10.01s)
    --- PASS: TestRemoteWrite/otelcollector (0.00s)
        --- PASS: TestRemoteWrite/otelcollector/InstanceLabel (10.01s)
PASS
ok      github.com/prometheus/compliance/remote_write  10.291s

$ go test --tags=compliance -run "TestRemoteWrite/otelcollector/RepeatedLabels.+" -v ./
=== RUN   TestRemoteWrite
=== RUN   TestRemoteWrite/otelcollector
--- PASS: TestRemoteWrite (0.00s)
    --- PASS: TestRemoteWrite/otelcollector (0.00s)
testing: warning: no tests to run
PASS
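Assuming a collector version that includes the two fixes above (#2897 and #2979), the relabel trick from the earlier workaround should no longer be needed; a plain scrape config such as this sketch (target address illustrative) should keep `job` and `instance` end to end:

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: node-exporter              # now preserved as the `job` label
          static_configs:
            - targets: ['node-exporter:9100']  # now preserved as the `instance` label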
This attribute is temporarily required to make the Splunk OTel Collector Helm chart work on GKE/Autopilot.
Describe the bug
Labels and metrics that are automatically generated by Prometheus are dropped by the prometheus receiver. For example, the metrics `up` and `scrape_duration_seconds` (and a few other `scrape_` metrics) and the labels `job` and `instance`. Is this intentional or a potential bug?

Steps to reproduce
I set up a pipeline with the prometheus receiver and the prometheusremotewrite exporter. The config file is provided below.
What did you expect to see?
I expected to see the metrics and labels mentioned here
What did you see instead?
Above metrics and labels were dropped.
What version did you use?
Version: 0.17.0
What config did you use?
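The original config file did not survive in this copy of the issue. Purely as an illustrative sketch (not the reporter's actual file), a pipeline of the shape described above might look like the following, with every target and endpoint a placeholder:

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: example                  # placeholder job
          static_configs:
            - targets: ['localhost:9100']    # placeholder target

exporters:
  prometheusremotewrite:
    endpoint: http://example.invalid/api/v1/write  # placeholder remote-write endpoint

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]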
Additional context
I briefly looked into the code and observed the following: