label dimensions inconsistent with previously collected metrics #23

I'm getting the error in the title when I try localhost:9122/metrics. Guess this is because of empty or conflicting values?
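For context, here is a minimal Go sketch (hypothetical, not influxdb_exporter's actual code) of how this error arises with prometheus/client_golang: a collector emits the same metric name with two different label sets. Older client versions reject this at gather time with exactly the error above; newer versions are more relaxed (see the closing comment below).

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

type inconsistentCollector struct{}

// Sending no descriptors makes this an "unchecked" collector, so the
// consistency checks only run at gather time.
func (c inconsistentCollector) Describe(ch chan<- *prometheus.Desc) {}

func (c inconsistentCollector) Collect(ch chan<- prometheus.Metric) {
	// First sample: demo_metric with label set {host}.
	ch <- prometheus.MustNewConstMetric(
		prometheus.NewDesc("demo_metric", "demo", []string{"host"}, nil),
		prometheus.GaugeValue, 1, "a",
	)
	// Second sample: same metric name, but label set {host, region}.
	ch <- prometheus.MustNewConstMetric(
		prometheus.NewDesc("demo_metric", "demo", []string{"host", "region"}, nil),
		prometheus.GaugeValue, 2, "b", "eu",
	)
}

func main() {
	reg := prometheus.NewRegistry()
	reg.MustRegister(inconsistentCollector{})
	mfs, err := reg.Gather()
	if err != nil {
		// Older client_golang versions fail here with an error like
		// "... has label dimensions inconsistent with previously collected metrics ..."
		fmt.Println("gather error:", err)
		return
	}
	// Newer versions tolerate mixed label dimensions and gather both samples.
	for _, mf := range mfs {
		fmt.Println(mf.GetName(), len(mf.GetMetric()), "samples")
	}
}
```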
Found a fix for this. Made changes similar to this: This is working now by removing
Yes, this can happen if the same metric is submitted with different label sets. Is this something that happens persistently? Could it be avoided by coupling the exporter lifetime to the lifetime of the source (e.g. in Kubernetes, I would run them in one pod)? That's generally the best way to run exporters, and it avoids issues if you add or remove a label in your code.
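To make the sidecar suggestion concrete, here is a sketch of such a pod built with the Kubernetes Go API types and printed as a manifest. The app container and image names are placeholders; port 9122 is the exporter's default, matching the URL in the report above.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "app-with-exporter"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					// The metrics source; a placeholder image.
					Name:  "app",
					Image: "example.com/legacy-app:latest",
				},
				{
					// The exporter as a sidecar: it shares the pod's lifetime,
					// so its in-memory state resets whenever the app restarts
					// and stale label sets cannot accumulate.
					Name:  "influxdb-exporter",
					Image: "prom/influxdb-exporter",
					Ports: []corev1.ContainerPort{{ContainerPort: 9122}},
				},
			},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
```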
I labeled this "enhancement" because I don't think the current behavior is a bug per se. Whether we should support this use case is still to be discussed.
I had a use case where I was trying to export data from some legacy InfluxDB systems into Prometheus. It would be hard to change the configs elsewhere.
I have the same problem. Metrics are sent from Proxmox:
I'm going to close this because of age. We've updated the Prometheus client library since then, which changes some behaviors. In general I'm okay with being relaxed about label dimensions. |
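For anyone still on an older client library where the strict check applies, one common workaround is to pad every sample of a metric family to the union of all label names seen, since in the Prometheus data model an empty label value is equivalent to the label being absent. A hypothetical sketch (sample and padLabels are illustrative names, not exporter code):

```go
package main

import (
	"fmt"
	"sort"
)

// sample is a hypothetical pre-export representation of one data point.
type sample struct {
	labels map[string]string
	value  float64
}

// padLabels computes the sorted union of label names across all samples and
// rewrites every sample so it carries exactly that label set.
func padLabels(samples []sample) []string {
	union := map[string]bool{}
	for _, s := range samples {
		for k := range s.labels {
			union[k] = true
		}
	}
	names := make([]string, 0, len(union))
	for k := range union {
		names = append(names, k)
	}
	sort.Strings(names)
	for _, s := range samples {
		for _, k := range names {
			if _, ok := s.labels[k]; !ok {
				s.labels[k] = "" // empty value == label absent
			}
		}
	}
	return names
}

func main() {
	samples := []sample{
		{labels: map[string]string{"host": "a"}, value: 1},
		{labels: map[string]string{"host": "b", "region": "eu"}, value: 2},
	}
	names := padLabels(samples)
	// Every sample now has the label dimensions {host, region}, so a single
	// descriptor can be reused and the consistency check passes.
	fmt.Println(names, samples)
}
```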