
Mac + Spring Boot 3.0: Connection Reset by Peer #33911

Open
DyelamosD opened this issue Jul 4, 2024 · 5 comments
Labels
bug (Something isn't working), Stale

Comments

@DyelamosD

Component(s)

No response

What happened?

Description

I cannot, under any circumstances, reach any of the endpoints or ports of the otel-collector; I always get connection reset by peer. I've tried walking the image back a few months and I still get the same issue. The logs are completely empty. When I curl, I also get connection reset by peer. I can't get a shell in the container because I think it's a Go-only container.

I think I'm doing something wrong, and maybe this could be a point to improve the docs.

I am using the latest image of otel-collector-contrib with the following docker-compose file:

version: '3.6'
services:  
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    extra_hosts: [ 'host.docker.internal:host-gateway' ]
    command: [ "--config=/etc/otel-collector-config.yaml" ]
    volumes:
      - ./docker/otel-collector/otel-collector-config.yaml:/etc/otel-collector-config.yaml:ro
    ports:
      - "43175:4317"  # OTLP gRPC
      - "43185:4318"  # OTLP HTTP
      - "55679:55679" # health
    networks:
      - backend-dev
 
  grafana:
    image: grafana/grafana
    extra_hosts: [ 'host.docker.internal:host-gateway' ]
    volumes:
      - ./docker/grafana/provisioning/datasources:/etc/grafana/provisioning/datasources:ro
      - ./docker/grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards:ro
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_DISABLE_LOGIN_FORM=true
    ports:
      - "3090:3000"
    networks:
      - backend-dev
      
networks:
  backend-dev:

Steps to Reproduce

docker-compose up

Send any trace from any application to the exposed gRPC or HTTP ports, or try to connect via browser or curl to any of ports 43175, 43185, or 55679.

Expected Result

Any HTTP response from the server

Actual Result

Connection reset by peer

Collector version

6c936660d90b2e15307a63761a2ee9333bd39ac419d45f67fd5d30d5ea9ac267, 0.101.0, 0.103.1

Environment information

Environment

OS: macOS 13.4.1 (c) on an M2 Pro chip

OpenTelemetry Collector configuration

receivers:
  otlp:
    protocols:
      grpc:
      http:
        cors:
          allowed_origins:
            - http://*
            # Origins can have wildcards with *, use * by itself to match any origin.
            - https://*

exporters:
  coralogix:
    # The Coralogix traces ingress endpoint
    traces:
      endpoint: "<REDACTED>"

    # Your Coralogix private key is sensitive
    private_key: "<REDACTED>"
    application_name: "BE"
    subsystem_name: "BE demo test"
    timeout: 60s

extensions:
  health_check:
  zpages:
    endpoint: :55679

processors:
  batch/traces:
    timeout: 1s
    send_batch_size: 50
  batch/metrics:
    timeout: 60s
  resourcedetection:
    detectors: [ env, docker ]
    timeout: 5s
    override: true

service:
  pipelines:
    traces:
      receivers: [ otlp ]
      processors: [ batch/traces ]
      exporters: [ coralogix ]

Log output

2024-07-04 10:37:16 2024-07-04T09:37:16.112Z    info    service@v0.104.0/service.go:115 Setting up own telemetry...
2024-07-04 10:37:16 2024-07-04T09:37:16.114Z    info    service@v0.104.0/telemetry.go:96        Serving metrics {"address": ":8888", "level": "Normal"}
2024-07-04 10:37:16 2024-07-04T09:37:16.125Z    info    service@v0.104.0/service.go:193 Starting otelcol-contrib...     {"Version": "0.104.0", "NumCPU": 12}
2024-07-04 10:37:16 2024-07-04T09:37:16.125Z    info    extensions/extensions.go:34     Starting extensions...
2024-07-04 10:37:16 2024-07-04T09:37:16.129Z    info    otlpreceiver@v0.104.0/otlp.go:102       Starting GRPC server    {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "localhost:4317"}
2024-07-04 10:37:16 2024-07-04T09:37:16.137Z    info    otlpreceiver@v0.104.0/otlp.go:152       Starting HTTP server    {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "localhost:4318"}
2024-07-04 10:37:16 2024-07-04T09:37:16.138Z    info    service@v0.104.0/service.go:219 Everything is ready. Begin running and processing data.

Additional context

No response

@DyelamosD added the bug and needs triage labels on Jul 4, 2024
@breezeblock3d

I'm getting the same issue with otel/opentelemetry-collector-contrib version 0.105.0 (Image ID: d85af9079167)

If I run it as-is with default values, it works as expected: I can successfully send traces and see them appear in the collector's stdout. I'm also able to connect to the zPages extension through my browser.

As soon as I start it up through a docker-compose file with a custom configuration, even one taken from the documentation, I get the same startup logs with no indication of any error, but any attempt to reach the collector results in Connection reset by peer.
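
For reference, a minimal custom configuration of the kind I mean (a sketch assembled from the documentation examples, not my exact file; the debug exporter stands in for whatever exporter is actually configured):

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  # placeholder exporter for this sketch
  debug:

service:
  pipelines:
    traces:
      receivers: [ otlp ]
      exporters: [ debug ]

With the OTLP receivers left at their default endpoints like this, the collector starts cleanly but every connection attempt from outside the container is reset.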

@jaywagnon

jaywagnon commented Aug 2, 2024

We were having a similar connection-reset issue with the collector through docker-compose on our M1/M2 Macs. We found we had to explicitly bind to 0.0.0.0 in the custom configuration:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

After that, we were able to connect without the reset error. This was with the 0.106.1 tag (which was latest on Aug 2nd).

That said, this output in the collector logs is probably relevant and may mean you'll want to take a different course than above:

The default endpoints for all servers in components have changed to use localhost instead of 0.0.0.0.

We're just starting with OTel, so changing the endpoint for testing was most expedient.
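
If it's preferable not to hard-code the endpoints, the log message above points at the component.UseLocalHostAsDefaultHost feature gate instead. A sketch against the compose file from the issue description (service name and config path copied from there; the collector's --feature-gates flag takes a comma-separated list, and a leading '-' disables a gate):

  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command:
      - "--config=/etc/otel-collector-config.yaml"
      # assumption: disabling this gate reverts the default bind address to 0.0.0.0
      - "--feature-gates=-component.UseLocalHostAsDefaultHost"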

@crazyscientist

crazyscientist commented Aug 28, 2024

Hi there, I'm not using a Mac, but Ubuntu 22.04 on a Dell with an Intel Core i7, and I had the exact same issue when using docker-compose:

The collector refused connections unless the IP address was explicitly set.

@rapar8

rapar8 commented Sep 5, 2024

I'm also having the same issue with both otel-collector and otel-collector-contrib running in a Docker container. I have published my issue here, under java instrumentation.

It doesn't matter whether you use 0.0.0.0, localhost, or the Docker container IP; the application cannot talk to the otel-collector. In my case I'm using the OTel Java agent.

otel-collector | 2024-09-05T20:31:49.646Z info localhostgate/featuregate.go:63 The default endpoints for all servers in components have changed to use localhost instead of 0.0.0.0. Disable the feature gate to temporarily revert to the previous default. {"feature gate ID": "component.UseLocalHostAsDefaultHost"}
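
For context, one thing worth checking in a setup like this (a sketch; my-app and its image are hypothetical, and OTEL_EXPORTER_OTLP_ENDPOINT is the standard variable the Java agent reads) is that the application reaches the collector by its compose service name rather than localhost or a published host port, while the collector itself binds 0.0.0.0:

services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: [ "--config=/etc/otel-collector-config.yaml" ]
    # the config should set receivers.otlp.protocols.grpc.endpoint: 0.0.0.0:4317
    networks:
      - backend-dev

  my-app:
    image: my-app:latest   # hypothetical application instrumented with the Java agent
    environment:
      # reach the collector by service name on the shared compose network
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
    networks:
      - backend-dev

networks:
  backend-dev: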


github-actions bot commented Dec 2, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

github-actions bot added the Stale label on Dec 2, 2024