
[prometheusreceiver] timestamp enforcement breaks federation support #14453

Closed
rmfitzpatrick opened this issue Sep 22, 2022 · 5 comments
Labels: bug (Something isn't working) · closed as inactive · priority:p2 (Medium) · receiver/prometheus (Prometheus receiver) · Stale

Comments

rmfitzpatrick (Contributor) commented Sep 22, 2022

What happened?

Description

When scraping a federated endpoint, the federated server's built-in internal metrics appear to conflict with the receiver-maintained ones, leading to:

2022-09-22T20:11:53.740Z        warn    scrape/scrape.go:1279   Appending scrape report failed  {"kind": "receiver", "name": "prometheus/federation", "pipeline": "metrics", "scrape_pool": "federation", "target": "http://192.168.77.27:9090/federate?match%5B%5D=%7Bjob%3D%22nginx%22%7D", "error": "inconsistent timestamps on metric points for metric up"}

Steps to Reproduce

Try to scrape the /federate endpoint of a Prometheus server:

scrape_configs:
  - job_name: federation
    honor_labels: true
    metrics_path: /federate
    params:
      'match[]':
         - '{job="nginx"}'
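
For reference, a minimal collector configuration embedding this scrape config under the prometheus receiver might look roughly like the following sketch. The receiver and pipeline names match the error log above; the target address and the logging exporter are assumptions for illustration only.

receivers:
  prometheus/federation:
    config:
      scrape_configs:
        - job_name: federation
          honor_labels: true
          metrics_path: /federate
          params:
            'match[]':
              - '{job="nginx"}'
          static_configs:
            # federated Prometheus server; address taken from the error log above
            - targets: ['192.168.77.27:9090']

exporters:
  logging: {}

service:
  pipelines:
    metrics:
      receivers: [prometheus/federation]
      exporters: [logging]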

Expected Result

Successful metric scraping and translation

Actual Result

The appender error shown above.

I believe this was introduced by #9385, since earlier collector versions work with federated scraping.
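
To illustrate the suspected collision (the target, values, and timestamps below are hypothetical): the /federate endpoint re-exports the federated server's own up series with explicit sample timestamps, e.g.

up{instance="10.0.0.5:80",job="nginx"} 1 1663877510000
nginx_connections_active{instance="10.0.0.5:80",job="nginx"} 3 1663877510000

With honor_labels: true the receiver keeps those label sets, so the federated up samples (carrying upstream timestamps) appear to end up grouped with the up scrape-report metric the receiver itself appends for the federation scrape, which carries the scrape timestamp, hence the "inconsistent timestamps on metric points for metric up" error.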

Collector version

0.54.0

rmfitzpatrick added the bug (Something isn't working) and needs triage (New item requiring triage) labels on Sep 22, 2022
dashpole added the receiver/prometheus (Prometheus receiver) label on Sep 23, 2022
evan-bradley added the priority:p2 (Medium) label and removed the needs triage (New item requiring triage) label on Sep 26, 2022
rmfitzpatrick (Contributor, Author) commented Sep 27, 2022

I think the issue as reported may be a bit of an edge case: it occurs when both the federated server and the prometheus receiver end up with a minimal set of labels. Adding a global external label to the federated server prevents the metric group collision, but I think that only masks the underlying issue (a sketch of the workaround follows).
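
A sketch of that workaround, applied to the federated Prometheus server's own configuration (the label name and value here are arbitrary examples):

global:
  external_labels:
    # any extra label distinguishes the federated series from the receiver's internal ones
    prom_federated: 'true'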

This may be something for https://github.com/open-telemetry/wg-prometheus/issues. I'm relatively new to this receiver, but I think it's undesirable for instance, job, and __metrics_path__ to be treated as unnecessary for internal metrics, so I took a crack at resolving this issue by including them so their origin isn't masked: #14555. The hypothetical example below illustrates the intent.
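
Purely as a hypothetical illustration of that intent (not the receiver's actual output format), the scrape-report series for the federation job would keep origin-identifying labels instead of a minimal label set:

# hypothetical internal representation of the receiver's scrape-report metric
up{instance="192.168.77.27:9090", job="federation", __metrics_path__="/federate"} 1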

github-actions bot commented Nov 28, 2022

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label on Nov 28, 2022
dashpole removed the Stale label on Nov 28, 2022
github-actions bot commented Jan 30, 2023

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label on Jan 30, 2023
dashpole removed the Stale label on Jan 30, 2023

github-actions bot commented Apr 3, 2023

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.


github-actions bot commented Jun 2, 2023

This issue has been closed as inactive because it has been stale for 120 days with no activity.

github-actions bot closed this as not planned on Jun 2, 2023