Prometheusremotewrite empty metrics error #10364
I tried this with codegen-based receivers and get the same error in an environment where no metrics can be recorded.
It looks like the collector cannot reach the Prometheus server. Can you confirm the exporter is configured correctly and the Prometheus server is accessible from there?
Thanks for the response, @dmitryax. So the error we get from the
In the prometheusremotewrite exporter code you can see this error occurs in the batching, before it attempts to send the data. We also get some metrics, collected by other receivers that do have data, which are exported via the same
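For context, here is a minimal Go sketch of the kind of guard being discussed, assuming the exporter's push path receives a pmetric.Metrics payload. The function name pushMetrics and the early return are purely illustrative; this is not the exporter's actual code, nor necessarily the change that was eventually merged.

```go
package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/collector/pdata/pmetric"
)

// pushMetrics is an illustrative stand-in for an exporter push function:
// it drops a payload with no data points instead of treating it as an error.
func pushMetrics(_ context.Context, md pmetric.Metrics) error {
	if md.DataPointCount() == 0 {
		return nil // nothing to batch or send; treat the empty scrape as a no-op
	}
	// ... convert to remote-write TimeSeries and send over HTTP ...
	return nil
}

func main() {
	// An empty scrape, e.g. docker_stats with every running image excluded.
	err := pushMetrics(context.Background(), pmetric.NewMetrics())
	fmt.Println("push error:", err) // push error: <nil>
}
```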
btw this can be reproduced easily using your latest distributions. Using the v0.52.0 release binary and the following config (I'm excluding all my running Docker images to get 0 metrics):

```yaml
extensions:
  health_check:

receivers:
  docker_stats:
    endpoint: unix:///var/run/docker.sock
    collection_interval: 2s
    timeout: 20s
    api_version: 1.24
    excluded_images:
      - prom/prometheus
      - grafana/grafana
      - openzipkin/zipkin
      - daprio/dapr:1.7.2
      - redis
    provide_per_core_cpu_metrics: true

processors:

exporters:
  prometheusremotewrite:
    endpoint: "http://127.0.0.1:8080/api/v1/push"

service:
  telemetry:
    logs:
      level: debug
  extensions: [health_check]
  pipelines:
    metrics:
      receivers: [docker_stats]
      processors: []
      exporters: [prometheusremotewrite]
```

This results in the following log:
This config uses the
@dmitryax, if you can confirm this is a bug, I'd be happy to create a PR to "fix" this. Is there a reason the
Fixes #10364. Signed-off-by: Goutham Veeramachaneni <[email protected]>. Co-authored-by: Bogdan Drutu <[email protected]>.
Describe the bug
When a receiver scrape returns an empty metric and the prometheusremotewrite exporter is enabled, the exporter errors instead of just dropping the metric. When retries are enabled, these errors block valid metrics from being exported and cause our entire system to degrade. If I disable retries, the other valid metrics get through OK, but then we get the following error from queued_retry.go:
Exporting failed. Try enabling retry_on_failure config option to retry on retryable errors
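For context, a hedged sketch of why such an error blocks the queue: the collector's queued-retry helper retries any error that is not marked permanent, so an error returned for an empty scrape keeps being retried (or, with retries disabled, produces the log message above). The consumererror helpers below are existing collector APIs; the error text itself is only illustrative.

```go
package main

import (
	"errors"
	"fmt"

	"go.opentelemetry.io/collector/consumer/consumererror"
)

func main() {
	// A plain error returned from an exporter's push function is considered
	// retryable, so the queued-retry helper keeps retrying it.
	retryable := errors.New("example error from an empty batch") // illustrative text only

	// Wrapping the error marks it permanent, so the helper drops it
	// instead of retrying it and blocking later batches.
	permanent := consumererror.NewPermanent(retryable)

	fmt.Println(consumererror.IsPermanent(retryable)) // false
	fmt.Println(consumererror.IsPermanent(permanent)) // true
}
```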
The following scrapers return empty metrics when there is nothing to scrape, which results in this issue:
- podmanreceiver
- dockerstatsreceiver
Steps to reproduce
Use either podmanreceiver or dockerstatsreceiver in an environment where there are no containers.
What did you expect to see?
Empty metrics are dropped, and no error is reported that affects the exporting of valid metrics.
What did you see instead?
Errors from the remotewrite exporter that cause other valid metrics not to be exported correctly due to the intensive retries.
What version did you use?
v0.52