Prometheus Receiver - honor_labels set to true doesn't work with federation #5757
Comments
Running into the same issue; is anyone facing this aware of a workaround?
@dashpole, are you looking at this issue? We can also help, so let me know.
I am not currently working on it, but am happy to help if someone else wants to take it on.
AFAICT, the issue is that the metric metadata is associated only with target labels, prior to merging them with individual metric labels, and the receiver, when it is given a metric to append, doesn't have access to the original value of the target's job/instance labels. I think we might be able to do something with automatically appending relabeling rules that would put the original target labels under different names, along the lines of the sketch below.
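To make that idea concrete, here is a rough sketch of what such relabeling could look like in a federation scrape config. This is not a verified workaround, and the label names original_job and original_instance are made up for illustration:

```yaml
# Fragment of a federation scrape config (sketch only).
relabel_configs:
  # Copy the scrape target's own job and address into separate labels, so the
  # original target identity stays attached to every federated series even
  # when honor_labels: true lets the series' own job/instance labels win.
  - source_labels: [job]
    target_label: original_job        # hypothetical label name
  - source_labels: [__address__]
    target_label: original_instance   # hypothetical label name
```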
There is a different Google project that uses a fork of the Prometheus server. They dealt with this by embedding a metadata fetcher in the context passed to Appender (which would be here in the prom receiver). I wonder if upstream Prometheus maintainers would be amenable to such a solution...
I've run into this while trying to test my fix for #8355 and sent an upstream PR: prometheus/prometheus#10450. If the upstream PR is merged, it should remove a lot of complexity from the collector codebase :)
* Use target and metadata from context. This fixes #5757 and #5663. Signed-off-by: Goutham Veeramachaneni <[email protected]>
* Add tests for relabeling working. Signed-off-by: Goutham Veeramachaneni <[email protected]>
* Use Prometheus main branch, since prometheus/prometheus#10473 has been merged. Signed-off-by: Goutham Veeramachaneni <[email protected]>
* Add back the tests. Signed-off-by: Goutham Veeramachaneni <[email protected]>
* Fix flaky test. Signed-off-by: Goutham Veeramachaneni <[email protected]>
* Add Changelog entry. Signed-off-by: Goutham Veeramachaneni <[email protected]>
* Add relabel test with the e2e framework. Signed-off-by: Goutham Veeramachaneni <[email protected]>
* Update receiver/prometheusreceiver/metrics_receiver_labels_test.go. Co-authored-by: Anthony Mirabella <[email protected]>
* Move changelog entry to unreleased. Signed-off-by: Juraci Paixão Kröhling <[email protected]>
* Make lint pass (needed to run make gotidy; make golint; strings.Title is deprecated). Signed-off-by: Goutham Veeramachaneni <[email protected]>

Co-authored-by: Anthony Mirabella <[email protected]>
Co-authored-by: Juraci Paixão Kröhling <[email protected]>
I think this issue is still happening on helm version 0.60.0.
Hi, did you find a workaround?
No, not yet. I'm using OpenTelemetry exclusively for collecting traces. For metrics, I moved to a different solution.
Describe the bug
When using federation, honor_labels set to true doesn't work, because the source Prometheus server's job name is not found in the list of targets in the destination Prometheus config.
Steps to reproduce
Using federation, try to scrape metrics from another Prometheus server with honor_labels set to true (for example, with a receiver config like the sketch below).
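For reference, a minimal collector configuration along these lines reproduces the problem. This is a sketch only; the source Prometheus address and the match[] selector are placeholders:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'federate'
          honor_labels: true
          metrics_path: '/federate'
          params:
            'match[]': ['{job=~".+"}']              # placeholder selector
          static_configs:
            - targets: ['source-prometheus:9090']   # placeholder address

exporters:
  logging:

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [logging]
```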
What did you expect to see?
Metrics collected correctly by the OTel Collector, with the labels from the source Prometheus server honored.
What did you see instead?
An error that says "job or instance cannot be found from labels".
What version did you use?
Version: v0.27.0
What config did you use?
Config: (e.g. the yaml config file)
Environment
OS: Ubuntu 20.04
Compiler (if manually compiled): go 1.14
Additional context
This seems to be related to the way targets are looked up based on job/instance labels. Related to #5663.