Mac + Spring Boot 3.0: Connection Reset by Peer #33911

Open

DyelamosD opened this issue Jul 4, 2024 · 2 comments
Labels: bug, needs triage

Comments

@DyelamosD

Component(s)

No response

What happened?

Description

I cannot, under any circumstances, talk to any of the endpoints or ports exposed by the otel-collector; I always get connection reset by peer. I've tried walking the image back a few months and I still get the same issue. The logs are completely empty, and when I curl I also get connection reset by peer. I can't ssh into the container because I think it's a Go container.

I think I'm doing something wrong, and maybe this could be a point where the docs could be improved.

I am using the latest image of otel-collector-contrib with the following Docker Compose file:

version: '3.6'
services:  
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    extra_hosts: [ 'host.docker.internal:host-gateway' ]
    command: [ "--config=/etc/otel-collector-config.yaml" ]
    volumes:
      - ./docker/otel-collector/otel-collector-config.yaml:/etc/otel-collector-config.yaml:ro
    ports:
      - "43175:4317"  # OTLP gRPC
      - "43185:4318"  # OTLP HTTP
      - "55679:55679" # health
    networks:
      - backend-dev
 
  grafana:
    image: grafana/grafana
    extra_hosts: [ 'host.docker.internal:host-gateway' ]
    volumes:
      - ./docker/grafana/provisioning/datasources:/etc/grafana/provisioning/datasources:ro
      - ./docker/grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards:ro
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_DISABLE_LOGIN_FORM=true
    ports:
      - "3090:3000"
    networks:
      - backend-dev
      
networks:
  backend-dev:

Steps to Reproduce

docker-compose up

Send any trace from any application to the exposed gRPC or HTTP ports, or try to connect via browser or curl to any of ports 43175, 43185, or 55679.
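For reference, a minimal sketch of what the client side of the "send any trace" step might look like, assuming a Spring Boot 3.1+ application with micrometer-tracing-bridge-otel and opentelemetry-exporter-otlp on the classpath; the property names come from Spring Boot's Actuator OTLP support and the 43185 port from the compose mapping above, and both are illustrative assumptions rather than part of the original report:

# application.yml (hypothetical client configuration)
management:
  otlp:
    tracing:
      # 43185 on the host maps to the collector's OTLP/HTTP port 4318 in the compose file above
      endpoint: http://localhost:43185/v1/traces
  tracing:
    sampling:
      probability: 1.0  # export every trace while testing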

Expected Result

Any http response from the server

Actual Result

Connection reset by peer

Collector version

6c936660d90b2e15307a63761a2ee9333bd39ac419d45f67fd5d30d5ea9ac267, 0.101.0, 0.103.1

Environment information

Environment

OS: macOS 13.4.1 (c) on an M2 Pro chip

OpenTelemetry Collector configuration

receivers:
  otlp:
    protocols:
      grpc:
      http:
        cors:
          allowed_origins:
            - http://*
            # Origins can have wildcards with *, use * by itself to match any origin.
            - https://*

exporters:
  coralogix:
    # The Coralogix traces ingress endpoint
    traces:
      endpoint: "<REDACTED>"

    # Your Coralogix private key is sensitive
    private_key: "<REDACTED>"
    application_name: "BE"
    subsystem_name: "BE demo test"
    timeout: 60s

extensions:
  health_check:
  zpages:
    endpoint: :55679

processors:
  batch/traces:
    timeout: 1s
    send_batch_size: 50
  batch/metrics:
    timeout: 60s
  resourcedetection:
    detectors: [ env, docker ]
    timeout: 5s
    override: true

service:
  pipelines:
    traces:
      receivers: [ otlp ]
      processors: [ batch/traces ]
      exporters: [ coralogix ]

Log output

2024-07-04 10:37:16 2024-07-04T09:37:16.112Z    info    [email protected]/service.go:115 Setting up own telemetry...
2024-07-04 10:37:16 2024-07-04T09:37:16.114Z    info    [email protected]/telemetry.go:96        Serving metrics {"address": ":8888", "level": "Normal"}
2024-07-04 10:37:16 2024-07-04T09:37:16.125Z    info    [email protected]/service.go:193 Starting otelcol-contrib...     {"Version": "0.104.0", "NumCPU": 12}
2024-07-04 10:37:16 2024-07-04T09:37:16.125Z    info    extensions/extensions.go:34     Starting extensions...
2024-07-04 10:37:16 2024-07-04T09:37:16.129Z    info    [email protected]/otlp.go:102       Starting GRPC server    {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "localhost:4317"}
2024-07-04 10:37:16 2024-07-04T09:37:16.137Z    info    [email protected]/otlp.go:152       Starting HTTP server    {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "localhost:4318"}
2024-07-04 10:37:16 2024-07-04T09:37:16.138Z    info    [email protected]/service.go:219 Everything is ready. Begin running and processing data.

Additional context

No response

@breezeblock3d

I'm getting the same issue with otel/opentelemetry-collector-contrib version 0.105.0 (Image ID: d85af9079167)

If I run it as-is with default values, it works as expected. I can successfully send traces and see them appear in the collector's stdout. I'm also able to connect to the zPages extension through my browser.

As soon as I start it through a docker-compose file with a custom configuration, even one of the examples from the documentation, I get the same startup logs with no indication of any error, but any attempt to reach the collector results in: Connection reset by peer.

@jaywagnon commented Aug 2, 2024

We were having a similar connection reset issue on our M1/M2 Macs when running the collector through docker-compose. We found we had to explicitly bind to 0.0.0.0 in the custom configuration:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

After that, we were able to connect without the reset error. This was with the 0.106.1 tag (which was latest on Aug 2nd).

That said, this output in the collector logs is probably relevant and may mean you'll want to take a different course than above:

The default endpoints for all servers in components have changed to use localhost instead of 0.0.0.0.

We're just starting with OTel, so changing the endpoint for testing was most expedient.
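For completeness, a sketch of that binding change merged into the receivers block from the original report, assuming you want to keep the CORS settings. Only the two endpoint lines are new, and 0.0.0.0 exposes the listeners on all container interfaces, so treat this as a local-testing convenience rather than a recommendation:

receivers:
  otlp:
    protocols:
      grpc:
        # bind to all interfaces so Docker's published ports (43175/43185 on the host) can reach the listeners
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
        cors:
          allowed_origins:
            - http://*
            - https://*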
