Prometheus receiver and exporter don't handle multiple targets with the same metric. #2216
Comments
I had the same issue. It's mainly because the receiver doesn't add labels for different jobs / instances (related to #2363). I work around it like this:

```yaml
- job_name: windows-exporter
  static_configs:
    - targets: ['localhost:9182']
  relabel_configs:
    # Trick because the otel collector does not expose the job
    - action: replace
      replacement: windows-exporter
      target_label: job_name
```

Example result:
Be careful: for an unknown reason, for now, don't use

We now include instance and job labels, so this issue shouldn't be present anymore.
Describe the bug
If the same metric exists in multiple Prometheus receiver targets, only one of them appears on the Prometheus exporter endpoint.
Steps to reproduce
Run the OpenTelemetry Collector with the configuration below in a Kubernetes cluster with 2 or more nodes. Use the kubelet_running_pod_count metric to demonstrate the issue, although it applies to all metrics. The logs of the OpenTelemetry Collector include the metric in question for all nodes, which shows that scraping is succeeding for all targets:
What did you expect to see?
If I curl the prometheus endpoint, I should see one metric stream for that metric for each node in my cluster.
What did you see instead?
I only see a single metric stream.
What version did you use?
Version: f583f6e
What config did you use?
Config (processors omitted):
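A minimal sketch of the kind of configuration that reproduces this, assuming a prometheus receiver that discovers one kubelet target per node and a prometheus exporter re-exposing the scraped metrics (the job name, TLS/auth settings, and exporter port are illustrative assumptions, not the exact values from this report):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: kubelet                  # illustrative job name
          scheme: https
          tls_config:
            insecure_skip_verify: true       # illustrative; depends on the cluster
          bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          kubernetes_sd_configs:
            - role: node                     # one scrape target per cluster node

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"                 # illustrative exporter port

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheus]
```

With two or more nodes, every kubelet target reports its own kubelet_running_pod_count, so the exporter endpoint should expose one stream per node.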
Environment
OS: COS from Google, Linux kernel 4.19.112
Compiler (if manually compiled): go 1.15
Additional context
The Prometheus receiver calls ConsumeMetrics on the sink for each Commit() call; in practice, this seems to occur once per scrape target.
The Prometheus exporter keeps a map[descriptor]metric (from orijtech/prometheus-go-metrics-exporter) and overwrites the metric for a descriptor each time data for that descriptor is "exported" to it. This means that if multiple scrape targets emit metrics with the same descriptor (name + labels), only the target that was written last will show up on the endpoint, as it overwrites previous scrapes.
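Roughly speaking, the exporter's cache behaves like the toy model below (a simplified sketch; the types and field names are invented for illustration, not the actual orijtech/prometheus-go-metrics-exporter API): the second target's write silently replaces the first.

```go
package main

import "fmt"

// descriptor stands in for the exporter's cache key (metric name + label keys).
type descriptor string

// metric stands in for one exported snapshot for that descriptor.
type metric struct {
	target string
	value  float64
}

func main() {
	// Simplified model of the exporter's cache: one entry per descriptor.
	cache := map[descriptor]metric{}

	// Each target's Commit() ends up exporting its own snapshot of
	// kubelet_running_pod_count under the same descriptor key...
	cache["kubelet_running_pod_count"] = metric{target: "node-1:10255", value: 23}
	cache["kubelet_running_pod_count"] = metric{target: "node-2:10255", value: 17}

	// ...so only the last write survives, and that is what /metrics serves.
	fmt.Println(cache["kubelet_running_pod_count"]) // prints {node-2:10255 17}
}
```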
It seems like there are two possible solutions:
a. Note that we would also have to address the memory leak this would cause, with a TTL or similar for each time series so that stale series aren't kept around indefinitely.
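As a rough illustration of that point, a TTL-based cache could look something like the sketch below (all names, the key format, and the 5-minute TTL are assumptions for illustration, not part of the collector):

```go
package main

import (
	"sync"
	"time"
)

// series is a stand-in for one cached time series (descriptor + target labels).
type series struct {
	value    float64
	lastSeen time.Time
}

// ttlCache keeps one entry per series and expires entries that no target has
// refreshed within the TTL, so keeping series per target can't grow the map
// indefinitely when targets disappear.
type ttlCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	entries map[string]series
}

func newTTLCache(ttl time.Duration) *ttlCache {
	return &ttlCache{ttl: ttl, entries: make(map[string]series)}
}

// write records the latest value for a series and refreshes its lastSeen time.
func (c *ttlCache) write(key string, value float64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[key] = series{value: value, lastSeen: time.Now()}
}

// sweep drops series that haven't been written within the TTL; it would be
// called periodically, e.g. before serving /metrics.
func (c *ttlCache) sweep() {
	c.mu.Lock()
	defer c.mu.Unlock()
	for key, s := range c.entries {
		if time.Since(s.lastSeen) > c.ttl {
			delete(c.entries, key)
		}
	}
}

func main() {
	c := newTTLCache(5 * time.Minute)
	c.write(`kubelet_running_pod_count{instance="node-1:10255"}`, 23)
	c.write(`kubelet_running_pod_count{instance="node-2:10255"}`, 17)
	c.sweep() // a series disappears once its node stops reporting for 5m
}
```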