prometheusreceiver and statsdreceiver behave differently in terms of setting "OTelLib" when awsemfexporter is used #24298
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
I think the reason for this may be a difference in implementation between the two receivers; you can see it in the objects/types on which each of them operates. @paologallinaharbur, you seem to be the author of both of those implementations; can you please take a look and/or comment on the issue? Also: we're also testing the behaviour of https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver/otlpreceiver.
I've just realized…
@paologallinaharbur, I managed to set up a local workspace and debug the tests, and I saw exactly what you're showing in the screenshots. This led me to realize that it's not the implementation, but simply an older version of the dependency. Anyway, thanks a lot for looking into that (and apologies for wasting your time).
Component(s)
exporter/awsemf, receiver/prometheus, receiver/statsd
What happened?
Description
We have aws-otel-collector 0.30.0 running alongside a Java app (which exposes Prometheus metrics) and an AWS/Envoy sidecar (which exposes StatsD metrics). aws-otel-collector is configured to process both of those sources using separate pipelines and to push the metrics to AWS CloudWatch using awsemfexporter. We previously used version 0.16.1 of aws-otel-collector and are only now upgrading.
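For illustration, a minimal sketch of this kind of dual-pipeline setup is shown below; the scrape target, ports, region, namespace, and log group are placeholders rather than our actual values:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: java-app                # placeholder job name
          scrape_interval: 30s
          static_configs:
            - targets: ["localhost:9404"]   # placeholder app metrics endpoint
  statsd:
    endpoint: "0.0.0.0:8125"                # default StatsD port

exporters:
  awsemf:
    region: eu-west-1                       # placeholder region
    namespace: MyApp                        # placeholder CloudWatch namespace
    log_group_name: /metrics/my-app         # placeholder log group

service:
  pipelines:
    metrics/prometheus:
      receivers: [prometheus]
      exporters: [awsemf]
    metrics/statsd:
      receivers: [statsd]
      exporters: [awsemf]
```

Both pipelines use the same awsemf exporter configuration, which is why we expected metrics from both sources to be treated the same way.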
Previously, metrics from both sources were stored in CloudWatch "as-is". After the upgrade, however, we noticed that the Prometheus metrics gained a new Dimension: OTelLib, with the value otelcol/prometheusreceiver. This, obviously, broke a few things on our end (like CloudWatch Alarms).

After digging a bit, I found two tickets which were supposed to get both of these receivers to the same place in terms of populating otel.library.name. Unfortunately, I was not able to grasp how that translates to the OTelLib metric dimension set in awsemfexporter, but it seems somehow related at this point.

My understanding is that it's a de-facto standard for receivers to add the name and version of the library to processed metrics, but I do not understand how, or why at all, that information is being added as a dimension. I also do not understand whether that's an expected outcome, so it's hard for me to figure out whether it's a bug in prometheusreceiver (that it adds it as a dimension), in statsdreceiver (that it doesn't), or in awsemfexporter. I'd be grateful for any guidance on this matter.

Steps to Reproduce
Expected Result
I would expect the following:

- awsemfexporter would add the new OTelLib Dimension regardless of where the metrics come from, or would not add it at all. I'm not sure what is considered the "correct" behaviour here, but I would expect it to be consistent across receivers.
- As for the awsemfexporter configuration: the exporter has dedicated logic to handle that OTelLib Dimension. I think it would be a good idea to implement a switch that controls whether the OTelLib Dimension is added or not (see the sketch after this list). In our case, forcefully adding this new Dimension to all collected metrics will break A LOT of things around our observability solution.
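To make the idea concrete, here is a sketch of what such a switch could look like; add_otel_lib_dimension is a hypothetical option name used purely for illustration and is not an existing awsemfexporter setting:

```yaml
exporters:
  awsemf:
    region: eu-west-1                 # placeholder region
    # Hypothetical option (does not exist today): when set to false,
    # the exporter would stop adding the OTelLib dimension to exported metrics.
    add_otel_lib_dimension: false
```

Defaulting such a switch to the current behaviour would keep existing setups unchanged while giving users like us a way to opt out.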
Actual Result

- Metrics coming from prometheusreceiver are stored by awsemfexporter with an additional OTelLib dimension set to otelcol/prometheusreceiver.
- Metrics coming from statsdreceiver are stored by an identical configuration of awsemfexporter without the OTelLib dimension.
- We were not able to configure awsemfexporter in a way that it would not add the OTelLib dimension.

Collector version
v0.78.0 (according to: https://github.com/aws-observability/aws-otel-collector/releases/tag/v0.30.0)
Environment information
Environment
OS: AWS ECS / Fargate
We're running a custom-built Docker image, based on amazonlinux:2, with a Dockerfile looking like the one below:

OpenTelemetry Collector configuration
Log output
Additional context
N/A