failed to translate metric from otel-python histogram using prometheusexporter #13443
Comments
Code example:

# Imports added for completeness; assuming the gRPC OTLP exporter, since insecure=True is used.
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.metrics import set_meter_provider
from opentelemetry.sdk.metrics import Counter, Histogram, MeterProvider, UpDownCounter
from opentelemetry.sdk.metrics.export import AggregationTemporality, PeriodicExportingMetricReader

# Request cumulative temporality for every instrument type used here.
temporality_cumulative = {
    Counter: AggregationTemporality.CUMULATIVE,
    UpDownCounter: AggregationTemporality.CUMULATIVE,
    Histogram: AggregationTemporality.CUMULATIVE,
}

exporter = OTLPMetricExporter(insecure=True, preferred_temporality=temporality_cumulative)
reader = PeriodicExportingMetricReader(
    exporter,
    export_interval_millis=15000,
)
provider = MeterProvider(metric_readers=[reader])
set_meter_provider(provider)

...

# Inside the instrumented class; `meter` is obtained elsewhere, e.g. via get_meter(__name__).
self.histogram = meter.create_histogram(
    "graphql.api.request.time",
    unit="ms",
    description="Request time metrics for GraphQL API.",
)

If this is an issue with the Python SDK, please let me know :)
Note the error message.
Oh interesting! How can a map contain two entries for the same key? Either way, the issue seems to be that no data is associated with the second histogram point. Definitely some better error messages would be helpful, though.
Maps are (for performance reasons) represented as lists in the OTLP protocol, and the duplicate is meant to be resolved by taking the last value, which is how protobuf defines the situation for maps as well. There's something incorrect about the data type here, but I wouldn't want to guess how come.
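As a concrete illustration of that last-value-wins rule, here is a minimal sketch of the semantics (my own illustration in Python, not collector code):

def resolve_attributes(key_values):
    # OTLP sends attribute "maps" as a repeated list of key/value pairs; a reader
    # that follows protobuf map semantics keeps the last occurrence of each key.
    resolved = {}
    for key, value in key_values:  # iterate in wire order
        resolved[key] = value      # later entries overwrite earlier ones
    return resolved

# Two entries for the same key collapse to the last value.
print(resolve_attributes([("service.name", "a"), ("service.name", "b")]))  # {'service.name': 'b'}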
Pinging code owners: @Aneurysm9. See Adding Labels via Comments if you do not have permissions to add labels yourself.
I wanted to mention, in case it may help in any way, that I'm also seeing this. If this should be reported through a new issue, or there are any other details y'all may be interested in, let me know. Here's an example of the error:
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping the code owners. See Adding Labels via Comments if you do not have permissions to add labels yourself.
Any update on this? Still getting this error when using a gRPC receiver and a Prometheus exporter:
I am having the same issue with the statsd receiver and Prometheus exporter.
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping the code owners. See Adding Labels via Comments if you do not have permissions to add labels yourself.
This is still an issue, as reported in #26725 for v0.85.0 of the collector.
I believe I have found the cause of this issue: it looks like the Prometheus exporter doesn't support exponential histograms.
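If that is indeed the cause, one possible SDK-side workaround (my own sketch, not something confirmed in this thread) is to force histogram instruments back to explicit buckets via a View, so the collector never receives exponential data points. The reader variable is assumed to be the PeriodicExportingMetricReader from the snippet earlier in this thread:

from opentelemetry.sdk.metrics import Histogram, MeterProvider
from opentelemetry.sdk.metrics.view import ExplicitBucketHistogramAggregation, View

# Re-aggregate every Histogram instrument with explicit buckets instead of the
# exponential aggregation, so the Prometheus exporter can translate the result.
explicit_buckets_view = View(
    instrument_type=Histogram,
    aggregation=ExplicitBucketHistogramAggregation(),
)

provider = MeterProvider(metric_readers=[reader], views=[explicit_buckets_view])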
Hi! I've upgraded to the v0.86.0 collector-contrib version and I'm still having this issue:
I'm happy to give additional information if required.
Another user has hit this in #25146.
I am using gauges and I am running into the same issue.
I'm getting the same with opentelemetry-collector:0.91.0 and Node.js with @opentelemetry/* latest (1.18.1/0.45.1) when trying to use an exponential histogram.
I am also seeing this problem with several different metrics. I tried upgrading to 0.91.0 and am observing the following on local Docker deployments as well as Kubernetes deployments on AWS/EKS:
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping the code owners. See Adding Labels via Comments if you do not have permissions to add labels yourself.
Getting the same error with @mdio.
Same here!
Yup, I have realized the same, at least for Python. I got confused by this talk https://www.youtube.com/watch?v=W2_TpDcess8, where they say that they have it implemented, although I was only able to find such an implementation in Go here: https://github.com/open-telemetry/opentelemetry-go-contrib/blob/6e79f7ca22d58345e2ccfe3f9e8bb4ad71633ab3/bridges/prometheus/producer.go#L170C6-L170C33. It seems that the Python exporter does not have such an implementation: https://github.com/open-telemetry/opentelemetry-python/blob/main/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py
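For reference, the conversion the Go producer performs comes down to mapping exponential bucket indexes to explicit upper bounds. Here is a rough Python sketch of that boundary math, based on the OpenTelemetry exponential histogram definition rather than the producer.go code:

def exponential_bucket_upper_bounds(scale, offset, bucket_counts):
    # base = 2 ** (2 ** -scale); the bucket at array position i covers
    # (base ** (offset + i), base ** (offset + i + 1)], so its upper bound
    # is base ** (offset + i + 1).
    base = 2.0 ** (2.0 ** -scale)
    return [base ** (offset + i + 1) for i in range(len(bucket_counts))]

# Example: scale=0 (base 2), offset=0, three buckets -> upper bounds 2.0, 4.0, 8.0
print(exponential_bucket_upper_bounds(scale=0, offset=0, bucket_counts=[1, 4, 2]))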
Seeing this error message:
And it looks like this is due to the PeriodicExportingMetricReader exporting my histogram metric, which hasn't had any data reported in the past period. Below is the output from a file exporter for some more debugging information. I believe this will be fixed by #9006, but let me know if that is incorrect.
Metrics dump
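For anyone trying to reproduce that scenario, here is a minimal sketch (my own, assuming the gRPC OTLP exporter and a collector listening on the default local endpoint): record to a histogram once, then stay idle so a later export interval ships the instrument without fresh data points.

import time

from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.metrics import get_meter, set_meter_provider
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(insecure=True),
    export_interval_millis=5000,
)
set_meter_provider(MeterProvider(metric_readers=[reader]))

histogram = get_meter("repro").create_histogram("graphql.api.request.time", unit="ms")
histogram.record(12.3)  # one data point in the first export interval
time.sleep(20)          # later intervals may export the histogram with no new data points (the reported scenario)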