Prometheus Receiver - honor_labels set to true doesn't work with federation #5757

Closed
rashmichandrashekar opened this issue Oct 14, 2021 · 10 comments · Fixed by #8780
Labels
bug (Something isn't working) · comp:prometheus (Prometheus related issues) · comp: receiver (Receiver)

Comments

@rashmichandrashekar
Contributor

rashmichandrashekar commented Oct 14, 2021

Describe the bug
When using federation, honor_labels set to true doesn't work, because the source Prometheus server's job name is not found in the list of targets in the destination Prometheus config.

Steps to reproduce
Using federation, try to scrape metrics from another Prometheus server with honor_labels set to true.

What did you expect to see?
Metrics collected correctly by the OTel Collector, with the labels from the source Prometheus server honored.

What did you see instead?
An error that says "job or instance cannot be found from labels"

What version did you use?
Version: v0.27.0

What config did you use?
Config:

global:
  evaluation_interval: 5s
  scrape_interval: 5s
scrape_configs:
- job_name: federate
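  # With honor_labels: true, job/instance labels already present on the
  # federated series take precedence over this scrape target's own labels.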
  honor_labels: true
  metrics_path: '/federate'
  params:
    'match[]': ['job:my_job_name']
  static_configs:
    - targets:
      - 'prometheus-prometheus-oper-prometheus.monitoring:9090'

Environment
OS: Ubuntu 20.04
Compiler (if manually compiled): go 1.14

Additional context
This seems to be related to the way targets are looked up based on job/instance labels. Related to #5663

@rashmichandrashekar added the bug (Something isn't working) label on Oct 14, 2021
@rashmichandrashekar
Contributor Author

cc: @dashpole - this seems to be related to the same issue as #5663

@edude03

edude03 commented Nov 13, 2021

I'm running into the same issue. Is anyone facing this aware of a workaround?

@alolita added the comp: receiver (Receiver) and comp:prometheus (Prometheus related issues) labels on Nov 17, 2021
@alolita
Member

alolita commented Feb 28, 2022

@dashpole are you looking at this issue? We can also help, so let me know.

@dashpole
Contributor

I am not currently working on it, but am happy to help if someone else wants to take it on

@Aneurysm9
Member

AFAICT, the issue is that the metric metadata is associated only with the target labels, prior to merging them with the individual metric labels. When the receiver is given a metric to append, it doesn't have access to the original instance and job values if they have been overridden by honored labels from the exposition.

I think we might be able to do something with automatically appending relabeling rules that would put the original target job and instance label values in a new label with a name constructed to avoid conflicts. We could then use that value to look up the metadata and drop the label when converting to pdata. This might run into issues if label limits are set up, though.
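
For illustration, hand-written relabel rules along the lines of that proposal might look like the sketch below. The otel_original_* label names are invented here purely to show the idea, and the original instance value is copied from __address__ because the instance label is only populated after target relabeling.

scrape_configs:
- job_name: federate
  honor_labels: true
  relabel_configs:
    # Stash the target's own job under a non-conflicting name so the receiver
    # could still look up the target and its metadata after honor_labels lets
    # the federated series override the job label.
    - source_labels: [job]
      target_label: otel_original_job
    # instance defaults from __address__ only after relabeling has run, so
    # copy the address to preserve the original instance identity.
    - source_labels: [__address__]
      target_label: otel_original_instance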

@dashpole
Contributor

There is a different Google project that uses a fork of the Prometheus server. They dealt with this by embedding a metadata fetcher in the context passed to Appender (which would be here in the prom receiver). I wonder if upstream Prometheus maintainers would be amenable to such a solution...

@gouthamve
Member

I've run into this while trying to test my fix for #8355 and sent an upstream PR: prometheus/prometheus#10450

If the upstream PR is merged, it should remove a lot of complexity from the collector codebase :)

gouthamve referenced this issue in gouthamve/opentelemetry-collector-contrib Apr 5, 2022
This fixes #5757 and #5663

Signed-off-by: Goutham Veeramachaneni <[email protected]>
jpkrohling added a commit that referenced this issue Apr 5, 2022
* Use target and metadata from context

This fixes #5757 and #5663

Signed-off-by: Goutham Veeramachaneni <[email protected]>

* Add tests for relabeling working

Signed-off-by: Goutham Veeramachaneni <[email protected]>

* Use Prometheus main branch

prometheus/prometheus#10473 has been merged

Signed-off-by: Goutham Veeramachaneni <[email protected]>

* Add back the tests

Signed-off-by: Goutham Veeramachaneni <[email protected]>

* Fix flaky test

Signed-off-by: Goutham Veeramachaneni <[email protected]>

* Add Changelog entry

Signed-off-by: Goutham Veeramachaneni <[email protected]>

* Add relabel test with the e2e framework

Signed-off-by: Goutham Veeramachaneni <[email protected]>

* Update receiver/prometheusreceiver/metrics_receiver_labels_test.go

Co-authored-by: Anthony Mirabella <[email protected]>

* Move changelog entry to unreleased

Signed-off-by: Juraci Paixão Kröhling <[email protected]>

* Make lint pass

Needed to run make gotidy; make golint

strings.Title is deprecated

Signed-off-by: Goutham Veeramachaneni <[email protected]>

Co-authored-by: Anthony Mirabella <[email protected]>
Co-authored-by: Juraci Paixão Kröhling <[email protected]>
@Utwo

Utwo commented Jun 21, 2023

I think this issue is still happening on helm version 0.60.0

@shacharSirotkin

I think this issue is still happening on helm version 0.60.0

Hi, did you find a workaround?

@Utwo

Utwo commented Jan 31, 2024

I think this issue is still happening on helm version 0.60.0

Hi, did you find a workaround?

No, not yet. I'm using OpenTelemetry exclusively for collecting traces. For metrics, I moved to a different solution.
