[exporter/datadogexporter] EC2MetadataError: failed to make EC2Metadata request #22807
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments; if you are unsure which component this issue relates to, please ping the code owners.
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Same issue here.
Not sure if it's exactly the same issue, but we also see the same error when running the collector on a non-EKS cluster, i.e. GKE. We do not use the datadog exporter at all in this particular collector deployment. Collector config:

```yaml
receivers:
  k8sobjects:
    auth_type: serviceAccount
    objects: # ...
processors:
  resourcedetection:
    detectors:
      - env
      - gcp
      - eks
      - ec2
      - azure
      - system
    timeout: 2s
    override: false
    system:
      resource_attributes:
        host.id:
          enabled: false
  batch: # ...
  memory_limiter: # ...
# ...
extensions: # ...
exporters:
  logging: # ...
  otlp: # ...
service:
  telemetry: # ...
  extensions: # ...
  pipelines:
    logs:
      receivers:
        - k8sobjects
      processors:
        - resourcedetection
        - memory_limiter
        - batch
        # ...
      exporters:
        - logging
# ...
```

I've run the collector with debug logs, and we can see (I assume as much, at least) that this is getting triggered in the resourcedetection processor. Additionally, it breaks the "promise" of collector telemetry logs in JSON format 😅:

Collector debug logs
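For context, here is a rough sketch (my own illustration, not the actual resourcedetection code) of how an EC2 detector built on aws-sdk-go typically probes the instance metadata service; on a GKE node that probe fails, which appears to be where the logged error originates:

```go
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go/aws/ec2metadata"
	"github.com/aws/aws-sdk-go/aws/session"
)

// detectEC2 probes the EC2 instance metadata service. On non-AWS hosts
// (e.g. GKE) the probe fails; the detector is expected to return quietly,
// but the SDK itself may still log the token-fetch error.
func detectEC2(ctx context.Context) (map[string]string, error) {
	sess, err := session.NewSession()
	if err != nil {
		return nil, err
	}
	meta := ec2metadata.New(sess)
	if !meta.AvailableWithContext(ctx) {
		return nil, nil // not on EC2: no attributes, no error
	}
	doc, err := meta.GetInstanceIdentityDocumentWithContext(ctx)
	if err != nil {
		return nil, err
	}
	return map[string]string{
		"cloud.provider":          "aws",
		"cloud.region":            doc.Region,
		"cloud.availability_zone": doc.AvailabilityZone,
		"host.id":                 doc.InstanceID,
	}, nil
}

func main() {
	attrs, err := detectEC2(context.Background())
	fmt.Println(attrs, err)
}
```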
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments; if you are unsure which component this issue relates to, please ping the code owners.
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
This seems to be coming from the AWS SDK, in https://github.com/aws/aws-sdk-go/blob/394d04f7e36b85532cede3eb815a6a23413b2eaa/aws/ec2metadata/token_provider.go#L68: that part of the code does not respect the logging decision (for the AWS client, logging should be off by default). In some settings (we have also noticed this with GKE Autopilot users) that call fails, which is what leads to this error being logged.
Besides trying to fix this upstream, we could override the logger with a custom AWS logger that simply discards any logs (and thus stays true to the "logging off" level that should be the default). Similarly to other providers, I think the EC2 detector should still fail silently (or with a debug log) in case we cannot obtain the metadata (e.g. because we're not running on EC2).
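A minimal sketch of that logger-override idea, assuming the metadata client is built from an aws-sdk-go session (the discardLogger type and newQuietEC2MetadataClient helper are illustrative names, not existing code):

```go
package ec2quiet

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/ec2metadata"
	"github.com/aws/aws-sdk-go/aws/session"
)

// discardLogger satisfies aws.Logger but drops every message, so SDK-internal
// warnings (such as the IMDSv2 token fetch failure) never reach the
// collector's output, matching the intended "logging off by default" behaviour.
type discardLogger struct{}

func (discardLogger) Log(...interface{}) {}

// newQuietEC2MetadataClient returns an EC2 metadata client whose session is
// configured with the discarding logger. Hypothetical helper; the real
// detector wires its client up differently.
func newQuietEC2MetadataClient() (*ec2metadata.EC2Metadata, error) {
	sess, err := session.NewSession(aws.NewConfig().WithLogger(discardLogger{}))
	if err != nil {
		return nil, err
	}
	return ec2metadata.New(sess), nil
}
```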
This will be resolved by merging #30341, since it has already been fixed upstream.
Component(s)
exporter/datadog
What happened?
Description
I get the warning message in the issue title (EC2MetadataError: failed to make EC2Metadata request) when the collector starts.
I'm not running on AWS, so I don't understand why that warning is raised.
Steps to Reproduce
I can't reproduce it locally
Collector version
v0.78.0
Environment information
Environment
OS: Google Cloud Platform (GKE autopilot 1.24.11-gke.1000)
OpenTelemetry Collector configuration
Log output
Additional context
I am not 100% sure that this warning comes from the datadog exporter, but my suspicions point to it. If not, feel free to close it.