v.0.41.x version of aws-otel-collector incorporates OpenTelemetry Collector Bug: https://github.com/open-telemetry/opentelemetry-collector/issues/11239 #2881

Open
pawelkaliniakit opened this issue Oct 21, 2024 · 2 comments

Describe the bug
Version 0.41.x of aws-otel-collector has the following dependency:

OpenTelemetry Collector dependencies at v1.15.0/v0.109.0

The v0.109.0 dependency has the following issue:
open-telemetry/opentelemetry-collector#11239

which was fixed in v0.110.0 of the dependency.
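One way to confirm which OpenTelemetry Collector version a given aws-otel-collector image embeds is to ask the binary directly. This is a sketch that assumes the standard otelcol --version flag is available and that the image entrypoint forwards arguments to the collector binary, the same way --config is passed in the reproduction steps below.

docker run --rm public.ecr.aws/aws-observability/aws-otel-collector:v0.41.2 --version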

Steps to reproduce
Try to scrape a Prometheus endpoint with the v0.41.x aws-otel-collector.

What did you expect to see?
No errors in the collector log while scraping the Prometheus endpoint.

What did you see instead?

2024/10/16 10:19:16 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:78)

in the collector logs

Environment

Part of the config:

  prometheus/something:
    config:
      global:
        scrape_interval: 5m
        scrape_timeout: 10s
      scrape_configs:
        - job_name: "something-something"
          static_configs:
            - targets: ["0.0.0.0:9465"]
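
For reference, here is a minimal sketch of how this receiver fragment could be wired into a complete collector config. The debug exporter and the pipeline wiring are assumptions added for illustration; only the prometheus/something receiver comes from this report.

receivers:
  prometheus/something:
    config:
      global:
        scrape_interval: 5m
        scrape_timeout: 10s
      scrape_configs:
        - job_name: "something-something"
          static_configs:
            - targets: ["0.0.0.0:9465"]

exporters:
  debug: {}   # assumed exporter, for illustration only

service:
  pipelines:
    metrics:
      receivers: [prometheus/something]
      exporters: [debug]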

stazz commented Dec 10, 2024

FWIW, it seems that this bug has been fixed in otel-collector v.1.32.0 🤔


colinbjohnson commented Jan 26, 2025

Note that this appears to be fixed in public.ecr.aws/aws-observability/aws-otel-collector:v0.42.0, but I wanted to provide instructions for reproducing/testing the bug anyway.

To reproduce this issue in either public.ecr.aws/aws-observability/aws-otel-collector:v0.41.2 or public.ecr.aws/aws-observability/aws-otel-collector:v0.41.1, you can do the following:

# create an otel-agent-config.yaml file
# (minimal assumed collector config: an OTLP HTTP receiver on 0.0.0.0:4318
#  plus a debug exporter; swap in the logging exporter if debug is unavailable)
cat > otel-agent-config.yaml <<EOF
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

exporters:
  debug: {}

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
EOF
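# create a docker-compose.yaml file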
cat > docker-compose.yaml <<EOF
services:
  aws-ot-collector:
    image: public.ecr.aws/aws-observability/aws-otel-collector:v0.41.2
    command: ["--config=/etc/otel-agent-config.yaml"]
    volumes:
      - ./otel-agent-config.yaml:/etc/otel-agent-config.yaml
    ports:
      - 4318:4318
EOF
# and run docker
docker compose up

Once the OTEL container is running:

curl http://localhost:4318

Even though this is not an actual metrics submission, this request is enough to trigger the bug.
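
To confirm the reproduction, check the collector logs for the superfluous response.WriteHeader message quoted in the issue description (the service name below is taken from the compose file above):

docker compose logs aws-ot-collector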
