NO_PROXY env var doesn't work after upgrading the collector #10456

Closed
diecgia opened this issue Jun 21, 2024 · 5 comments
Labels
bug (Something isn't working), exporter/otlp

Comments

@diecgia

diecgia commented Jun 21, 2024

Component(s)

exporter/otlp

What happened?

Description

After upgrading opentelemetry-collector-contrib from version 0.81.0 to 0.101.0, the otlp/elastic exporter fails because it tries to send traces through the proxy configured in the env vars, even though the no_proxy var is also set to the Elastic URL, so the proxy should be bypassed.

Steps to Reproduce

Upgrade to collector-contrib 0.101.0 and set the following env vars:
"no_proxy_"="<elastic_url>",
"http_proxy"="<proxy_url>",
"https_proxy"="<proxy_url>"

Expected Result

Traces are sent to Elastic without going through the proxy, because the Elastic URL is set in the no_proxy env var.

Actual Result

The collector attempts to send traces through the proxy.

Collector version

0.101.0

Environment information

opentelemetry-collector-contrib image

OpenTelemetry Collector configuration

receivers:
  otlp:
    protocols:
      http:
processors:
  batch:
    send_batch_max_size: 1
    send_batch_size: 1
    timeout: 10s
exporters:
  logging:
    verbosity: detailed
    sampling_initial: 5
    sampling_thereafter: 200
  otlp/elastic:
    endpoint: "<elastic_url>"
    tls:
      insecure: true
extensions:
  health_check: {}
service:
  extensions:
    - health_check
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlp/elastic]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/elastic, logging]

Log output

No response

Additional context

No response

diecgia added the bug label on Jun 21, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

dmitryax transferred this issue from open-telemetry/opentelemetry-collector-contrib on Jun 22, 2024
@andrzej-stencel
Member

I have updated the issue to reference the OTLP exporter instead of the Contrib's Elasticsearch exporter.

@andrzej-stencel
Member

I am not able to reproduce, but I'm not sure if my testing is correct. Here is the config I used and the steps I followed:

exporters:
  logging:
  otlp:
    endpoint: https://200.100.1.1:4327
    retry_on_failure:
      enabled: false
    sending_queue:
      enabled: false
    tls:
      insecure: true
receivers:
  otlp:
    protocols:
      grpc:
service:
  pipelines:
    logs:
      exporters:
        - logging
        - otlp
      receivers:
        - otlp

1. Run Otelcol v0.81.0 with proxy and observe collector logs:
$ HTTP_PROXY=https://127.0.0.1:1234 HTTPS_PROXY=https://127.0.0.1:1235 otelcol-contrib-0.81.0-linux_amd64 --config 0624-no-proxy.yaml 
2024-06-24T16:26:31.971+0200    info    service/telemetry.go:81 Setting up own telemetry...
2024-06-24T16:26:31.971+0200    info    service/telemetry.go:104        Serving Prometheus metrics      {"address": ":8888", "level": "Basic"}
2024-06-24T16:26:31.971+0200    info    [email protected]/exporter.go:275        Development component. May change in the future.        {"kind": "exporter", "data_type": "logs", "name": "logging"}
2024-06-24T16:26:31.975+0200    info    service/service.go:131  Starting otelcol-contrib...     {"Version": "0.81.0", "NumCPU": 20}
2024-06-24T16:26:31.975+0200    info    extensions/extensions.go:30     Starting extensions...
2024-06-24T16:26:31.976+0200    warn    [email protected]/warning.go:40  Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks        {"kind": "receiver", "name": "otlp", "data_type": "logs", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2024-06-24T16:26:31.976+0200    info    [email protected]/otlp.go:83 Starting GRPC server    {"kind": "receiver", "name": "otlp", "data_type": "logs", "endpoint": "0.0.0.0:4317"}
2024-06-24T16:26:31.976+0200    info    service/service.go:148  Everything is ready. Begin running and processing data.
2024-06-24T16:26:31.977+0200    warn    zapgrpc/zapgrpc.go:195  [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "200.100.1.1:4327", ServerName: "200.100.1.1:4327", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:1235: connect: connection refused"     {"grpc_log": true}
2024-06-24T16:26:32.979+0200    warn    zapgrpc/zapgrpc.go:195  [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "200.100.1.1:4327", ServerName: "200.100.1.1:4327", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:1235: connect: connection refused"     {"grpc_log": true}
2024-06-24T16:26:34.529+0200    warn    zapgrpc/zapgrpc.go:195  [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "200.100.1.1:4327", ServerName: "200.100.1.1:4327", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:1235: connect: connection refused"     {"grpc_log": true}
^C2024-06-24T16:26:35.205+0200  info    otelcol/collector.go:236        Received signal from OS {"signal": "interrupt"}
2024-06-24T16:26:35.206+0200    info    service/service.go:157  Starting shutdown...
2024-06-24T16:26:35.207+0200    info    extensions/extensions.go:44     Stopping extensions...
2024-06-24T16:26:35.207+0200    info    service/service.go:171  Shutdown complete.

The log Error while dialing: dial tcp 127.0.0.1:1235: indicates that the collector is trying to use the proxy, which is expected.

2. Run Otelcol v0.81.0 with NO_PROXY and observe collector logs:
$ HTTP_PROXY=https://127.0.0.1:1234 HTTPS_PROXY=https://127.0.0.1:1235 NO_PROXY=200.100.1.1 otelcol-contrib-0.81.0-linux_amd64 --config 0624-no-proxy.yaml
2024-06-24T16:27:22.740+0200    info    service/telemetry.go:81 Setting up own telemetry...
2024-06-24T16:27:22.740+0200    info    service/telemetry.go:104        Serving Prometheus metrics      {"address": ":8888", "level": "Basic"}
2024-06-24T16:27:22.741+0200    info    [email protected]/exporter.go:275        Development component. May change in the future.        {"kind": "exporter", "data_type": "logs", "name": "logging"}
2024-06-24T16:27:22.745+0200    info    service/service.go:131  Starting otelcol-contrib...     {"Version": "0.81.0", "NumCPU": 20}
2024-06-24T16:27:22.745+0200    info    extensions/extensions.go:30     Starting extensions...
2024-06-24T16:27:22.746+0200    warn    [email protected]/warning.go:40  Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks        {"kind": "receiver", "name": "otlp", "data_type": "logs", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2024-06-24T16:27:22.746+0200    info    [email protected]/otlp.go:83 Starting GRPC server    {"kind": "receiver", "name": "otlp", "data_type": "logs", "endpoint": "0.0.0.0:4317"}
2024-06-24T16:27:22.746+0200    info    service/service.go:148  Everything is ready. Begin running and processing data.

2024-06-24T16:27:35.329+0200    error   exporterhelper/queued_retry.go:357      Exporting failed. Try enabling retry_on_failure config option to retry on retryable errors      {"kind": "exporter", "data_type": "logs", "name": "otlp", "error": "rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
        go.opentelemetry.io/collector/[email protected]/exporterhelper/queued_retry.go:357
go.opentelemetry.io/collector/exporter/exporterhelper.(*logsExporterWithObservability).send
        go.opentelemetry.io/collector/[email protected]/exporterhelper/logs.go:124
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send
        go.opentelemetry.io/collector/[email protected]/exporterhelper/queued_retry.go:291
go.opentelemetry.io/collector/exporter/exporterhelper.NewLogsExporter.func2
        go.opentelemetry.io/collector/[email protected]/exporterhelper/logs.go:104
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
        go.opentelemetry.io/collector/[email protected]/logs.go:25
go.opentelemetry.io/collector/internal/fanoutconsumer.(*logsConsumer).ConsumeLogs
        go.opentelemetry.io/[email protected]/internal/fanoutconsumer/logs.go:71
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
        go.opentelemetry.io/collector/[email protected]/logs.go:25
go.opentelemetry.io/collector/receiver/otlpreceiver/internal/logs.(*Receiver).Export
        go.opentelemetry.io/collector/receiver/[email protected]/internal/logs/otlp.go:40
go.opentelemetry.io/collector/pdata/plog/plogotlp.rawLogsServer.Export
        go.opentelemetry.io/collector/[email protected]/plog/plogotlp/grpc.go:82
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/logs/v1._LogsService_Export_Handler.func1
        go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/logs/v1/logs_service.pb.go:311
go.opentelemetry.io/collector/config/configgrpc.enhanceWithClientInformation.func1
        go.opentelemetry.io/collector/config/[email protected]/configgrpc.go:400
google.golang.org/grpc.getChainUnaryHandler.func1
        google.golang.org/[email protected]/server.go:1156
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryServerInterceptor.func1
        go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/[email protected]/interceptor.go:344
google.golang.org/grpc.chainUnaryInterceptors.func1
        google.golang.org/[email protected]/server.go:1147
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/logs/v1._LogsService_Export_Handler
        go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/logs/v1/logs_service.pb.go:313
google.golang.org/grpc.(*Server).processUnaryRPC
        google.golang.org/[email protected]/server.go:1337
google.golang.org/grpc.(*Server).handleStream
        google.golang.org/[email protected]/server.go:1714
google.golang.org/grpc.(*Server).serveStreams.func1.1
        google.golang.org/[email protected]/server.go:959
2024-06-24T16:27:35.329+0200    error   exporterhelper/queued_retry.go:293      Exporting failed. Dropping data. Try enabling sending_queue to survive temporary failures.      {"kind": "exporter", "data_type": "logs", "name": "otlp", "dropped_items": 1}
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send
        go.opentelemetry.io/collector/[email protected]/exporterhelper/queued_retry.go:293
go.opentelemetry.io/collector/exporter/exporterhelper.NewLogsExporter.func2
        go.opentelemetry.io/collector/[email protected]/exporterhelper/logs.go:104
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
        go.opentelemetry.io/collector/[email protected]/logs.go:25
go.opentelemetry.io/collector/internal/fanoutconsumer.(*logsConsumer).ConsumeLogs
        go.opentelemetry.io/[email protected]/internal/fanoutconsumer/logs.go:71
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
        go.opentelemetry.io/collector/[email protected]/logs.go:25
go.opentelemetry.io/collector/receiver/otlpreceiver/internal/logs.(*Receiver).Export
        go.opentelemetry.io/collector/receiver/[email protected]/internal/logs/otlp.go:40
go.opentelemetry.io/collector/pdata/plog/plogotlp.rawLogsServer.Export
        go.opentelemetry.io/collector/[email protected]/plog/plogotlp/grpc.go:82
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/logs/v1._LogsService_Export_Handler.func1
        go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/logs/v1/logs_service.pb.go:311
go.opentelemetry.io/collector/config/configgrpc.enhanceWithClientInformation.func1
        go.opentelemetry.io/collector/config/[email protected]/configgrpc.go:400
google.golang.org/grpc.getChainUnaryHandler.func1
        google.golang.org/[email protected]/server.go:1156
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryServerInterceptor.func1
        go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/[email protected]/interceptor.go:344
google.golang.org/grpc.chainUnaryInterceptors.func1
        google.golang.org/[email protected]/server.go:1147
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/logs/v1._LogsService_Export_Handler
        go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/logs/v1/logs_service.pb.go:313
google.golang.org/grpc.(*Server).processUnaryRPC
        google.golang.org/[email protected]/server.go:1337
google.golang.org/grpc.(*Server).handleStream
        google.golang.org/[email protected]/server.go:1714
google.golang.org/grpc.(*Server).serveStreams.func1.1
        google.golang.org/[email protected]/server.go:959
2024-06-24T16:27:35.329+0200    info    LogsExporter    {"kind": "exporter", "data_type": "logs", "name": "logging", "resource logs": 1, "log records": 1}
^C2024-06-24T16:27:37.279+0200  info    otelcol/collector.go:236        Received signal from OS {"signal": "interrupt"}
2024-06-24T16:27:37.279+0200    info    service/service.go:157  Starting shutdown...
2024-06-24T16:27:37.280+0200    info    extensions/extensions.go:44     Stopping extensions...
2024-06-24T16:27:37.280+0200    info    service/service.go:171  Shutdown complete.

Neither the proxy IP nor the original destination IP appears in the log, so I'm not certain, but the fact that there's no mention of the proxy IP as in the previous run makes me assume that the collector is trying to reach the original destination IP 200.100.1.1 directly and getting "error": "rpc error: code = DeadlineExceeded desc = context deadline exceeded", which is expected.
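
One way to remove that ambiguity (not part of the original test) would be to bind a throwaway listener on the configured proxy address and watch for connection attempts while exporting; if nothing ever connects, NO_PROXY is being honored. A minimal sketch, assuming the same 127.0.0.1:1235 HTTPS_PROXY value used above:

package main

import (
	"log"
	"net"
)

func main() {
	// Stand-in for the HTTPS_PROXY endpoint used in the repro commands above.
	ln, err := net.Listen("tcp", "127.0.0.1:1235")
	if err != nil {
		log.Fatal(err)
	}
	log.Println("fake proxy listening on", ln.Addr())

	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		// Any line printed here means the exporter tried to reach the proxy.
		log.Println("connection attempt from", conn.RemoteAddr())
		conn.Close()
	}
}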

3. Run Otelcol v0.101.0 with proxy and observe collector logs:
$ HTTP_PROXY=https://127.0.0.1:1234 HTTPS_PROXY=https://127.0.0.1:1235 otelcol-contrib-0.101.0-linux_amd64 --config 0624-no-proxy.yaml                   
2024-06-24T16:31:39.064+0200    info    [email protected]/service.go:102 Setting up own telemetry...
2024-06-24T16:31:39.064+0200    info    [email protected]/telemetry.go:103       Serving metrics {"address": ":8888", "level": "Normal"}
2024-06-24T16:31:39.065+0200    info    [email protected]/exporter.go:275       Deprecated component. Will be removed in future releases.       {"kind": "exporter", "data_type": "logs", "name": "logging"}
2024-06-24T16:31:39.071+0200    info    [email protected]/service.go:169 Starting otelcol-contrib...     {"Version": "0.101.0", "NumCPU": 20}
2024-06-24T16:31:39.071+0200    info    extensions/extensions.go:34     Starting extensions...
2024-06-24T16:31:39.072+0200    warn    [email protected]/warning.go:42 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. Enable the feature gate to change the default and remove this warning.     {"kind": "receiver", "name": "otlp", "data_type": "logs", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks", "feature gate ID": "component.UseLocalHostAsDefaultHost"}
2024-06-24T16:31:39.072+0200    info    [email protected]/otlp.go:102       Starting GRPC server    {"kind": "receiver", "name": "otlp", "data_type": "logs", "endpoint": "0.0.0.0:4317"}
2024-06-24T16:31:39.072+0200    info    [email protected]/service.go:195 Everything is ready. Begin running and processing data.
2024-06-24T16:31:39.072+0200    warn    localhostgate/featuregate.go:63 The default endpoints for all servers in components will change to use localhost instead of 0.0.0.0 in a future version. Use the feature gate to preview the new default.   {"feature gate ID": "component.UseLocalHostAsDefaultHost"}

2024-06-24T16:31:43.036+0200    info    LogsExporter    {"kind": "exporter", "data_type": "logs", "name": "logging", "resource logs": 1, "log records": 1}
2024-06-24T16:31:43.038+0200    warn    zapgrpc/zapgrpc.go:193  [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "200.100.1.1:4327", ServerName: "200.100.1.1:4327", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:1235: connect: connection refused"      {"grpc_log": true}
2024-06-24T16:31:43.038+0200    error   exporterhelper/common.go:296    Exporting failed. Rejecting data. Try enabling retry_on_failure config option to retry on retryable errors. Try enabling sending_queue to survive temporary failures.   {"kind": "exporter", "data_type": "logs", "name": "otlp", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:1235: connect: connection refused\"", "rejected_items": 1}
go.opentelemetry.io/collector/exporter/exporterhelper.(*baseExporter).send
        go.opentelemetry.io/collector/[email protected]/exporterhelper/common.go:296
go.opentelemetry.io/collector/exporter/exporterhelper.NewLogsRequestExporter.func1
        go.opentelemetry.io/collector/[email protected]/exporterhelper/logs.go:134
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
        go.opentelemetry.io/collector/[email protected]/logs.go:25
go.opentelemetry.io/collector/internal/fanoutconsumer.(*logsConsumer).ConsumeLogs
        go.opentelemetry.io/[email protected]/internal/fanoutconsumer/logs.go:73
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
        go.opentelemetry.io/collector/[email protected]/logs.go:25
go.opentelemetry.io/collector/receiver/otlpreceiver/internal/logs.(*Receiver).Export
        go.opentelemetry.io/collector/receiver/[email protected]/internal/logs/otlp.go:41
go.opentelemetry.io/collector/pdata/plog/plogotlp.rawLogsServer.Export
        go.opentelemetry.io/collector/[email protected]/plog/plogotlp/grpc.go:88
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/logs/v1._LogsService_Export_Handler.func1
        go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/logs/v1/logs_service.pb.go:311
go.opentelemetry.io/collector/config/configgrpc.(*ServerConfig).toServerOption.enhanceWithClientInformation.func9
        go.opentelemetry.io/collector/config/[email protected]/configgrpc.go:439
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/logs/v1._LogsService_Export_Handler
        go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/logs/v1/logs_service.pb.go:313
google.golang.org/grpc.(*Server).processUnaryRPC
        google.golang.org/[email protected]/server.go:1379
google.golang.org/grpc.(*Server).handleStream
        google.golang.org/[email protected]/server.go:1790
google.golang.org/grpc.(*Server).serveStreams.func2.1
        google.golang.org/[email protected]/server.go:1029
2024-06-24T16:31:44.039+0200    warn    zapgrpc/zapgrpc.go:193  [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "200.100.1.1:4327", ServerName: "200.100.1.1:4327", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:1235: connect: connection refused"      {"grpc_log": true}
2024-06-24T16:31:45.566+0200    warn    zapgrpc/zapgrpc.go:193  [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "200.100.1.1:4327", ServerName: "200.100.1.1:4327", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:1235: connect: connection refused"      {"grpc_log": true}
^C2024-06-24T16:31:45.982+0200  info    [email protected]/collector.go:333       Received signal from OS {"signal": "interrupt"}
2024-06-24T16:31:45.982+0200    info    [email protected]/service.go:232 Starting shutdown...
2024-06-24T16:31:45.983+0200    info    extensions/extensions.go:59     Stopping extensions...
2024-06-24T16:31:45.983+0200    info    [email protected]/service.go:246 Shutdown complete.

Judging by the log Error while dialing: dial tcp 127.0.0.1:1235:, the collector is trying to reach the proxy, which is expected.

4. Run Otelcol v0.101.0 with NO_PROXY and observe collector logs:
$ HTTP_PROXY=https://127.0.0.1:1234 HTTPS_PROXY=https://127.0.0.1:1235 NO_PROXY=200.100.1.1 otelcol-contrib-0.101.0-linux_amd64 --config 0624-no-proxy.yaml
2024-06-24T16:33:10.984+0200    info    [email protected]/service.go:102 Setting up own telemetry...
2024-06-24T16:33:10.985+0200    info    [email protected]/telemetry.go:103       Serving metrics {"address": ":8888", "level": "Normal"}
2024-06-24T16:33:10.985+0200    info    [email protected]/exporter.go:275       Deprecated component. Will be removed in future releases.       {"kind": "exporter", "data_type": "logs", "name": "logging"}
2024-06-24T16:33:10.990+0200    info    [email protected]/service.go:169 Starting otelcol-contrib...     {"Version": "0.101.0", "NumCPU": 20}
2024-06-24T16:33:10.990+0200    info    extensions/extensions.go:34     Starting extensions...
2024-06-24T16:33:10.991+0200    warn    [email protected]/warning.go:42 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. Enable the feature gate to change the default and remove this warning.     {"kind": "receiver", "name": "otlp", "data_type": "logs", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks", "feature gate ID": "component.UseLocalHostAsDefaultHost"}
2024-06-24T16:33:10.991+0200    info    [email protected]/otlp.go:102       Starting GRPC server    {"kind": "receiver", "name": "otlp", "data_type": "logs", "endpoint": "0.0.0.0:4317"}
2024-06-24T16:33:10.991+0200    info    [email protected]/service.go:195 Everything is ready. Begin running and processing data.
2024-06-24T16:33:10.991+0200    warn    localhostgate/featuregate.go:63 The default endpoints for all servers in components will change to use localhost instead of 0.0.0.0 in a future version. Use the feature gate to preview the new default.   {"feature gate ID": "component.UseLocalHostAsDefaultHost"}

2024-06-24T16:33:13.550+0200    info    LogsExporter    {"kind": "exporter", "data_type": "logs", "name": "logging", "resource logs": 1, "log records": 1}

2024-06-24T16:33:18.552+0200    error   exporterhelper/common.go:296    Exporting failed. Rejecting data. Try enabling retry_on_failure config option to retry on retryable errors. Try enabling sending_queue to survive temporary failures.   {"kind": "exporter", "data_type": "logs", "name": "otlp", "error": "rpc error: code = DeadlineExceeded desc = received context error while waiting for new LB policy update: context deadline exceeded", "rejected_items": 1}
go.opentelemetry.io/collector/exporter/exporterhelper.(*baseExporter).send
        go.opentelemetry.io/collector/[email protected]/exporterhelper/common.go:296
go.opentelemetry.io/collector/exporter/exporterhelper.NewLogsRequestExporter.func1
        go.opentelemetry.io/collector/[email protected]/exporterhelper/logs.go:134
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
        go.opentelemetry.io/collector/[email protected]/logs.go:25
go.opentelemetry.io/collector/internal/fanoutconsumer.(*logsConsumer).ConsumeLogs
        go.opentelemetry.io/[email protected]/internal/fanoutconsumer/logs.go:73
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
        go.opentelemetry.io/collector/[email protected]/logs.go:25
go.opentelemetry.io/collector/receiver/otlpreceiver/internal/logs.(*Receiver).Export
        go.opentelemetry.io/collector/receiver/[email protected]/internal/logs/otlp.go:41
go.opentelemetry.io/collector/pdata/plog/plogotlp.rawLogsServer.Export
        go.opentelemetry.io/collector/[email protected]/plog/plogotlp/grpc.go:88
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/logs/v1._LogsService_Export_Handler.func1
        go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/logs/v1/logs_service.pb.go:311
go.opentelemetry.io/collector/config/configgrpc.(*ServerConfig).toServerOption.enhanceWithClientInformation.func9
        go.opentelemetry.io/collector/config/[email protected]/configgrpc.go:439
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/logs/v1._LogsService_Export_Handler
        go.opentelemetry.io/collector/[email protected]/internal/data/protogen/collector/logs/v1/logs_service.pb.go:313
google.golang.org/grpc.(*Server).processUnaryRPC
        google.golang.org/[email protected]/server.go:1379
google.golang.org/grpc.(*Server).handleStream
        google.golang.org/[email protected]/server.go:1790
google.golang.org/grpc.(*Server).serveStreams.func2.1
        google.golang.org/[email protected]/server.go:1029

^C2024-06-24T16:33:23.380+0200  info    [email protected]/collector.go:333       Received signal from OS {"signal": "interrupt"}
2024-06-24T16:33:23.381+0200    info    [email protected]/service.go:232 Starting shutdown...
2024-06-24T16:33:23.382+0200    info    extensions/extensions.go:59     Stopping extensions...
2024-06-24T16:33:23.382+0200    info    [email protected]/service.go:246 Shutdown complete.
2024-06-24T16:33:23.382+0200    warn    zapgrpc/zapgrpc.go:193  [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "200.100.1.1:4327", ServerName: "200.100.1.1:4327", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 200.100.1.1:4327: operation was canceled" {"grpc_log": true}

The error Error while dialing: dial tcp 200.100.1.1:4327 indicates that the collector is trying to reach the destination address directly, which is expected.
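
A possible explanation for the original report, offered here only as an assumption, is the format of the no_proxy value: Go's matcher accepts bare hostnames, IP addresses (optionally with a port), and CIDR blocks, so an entry that includes a scheme, or a hostname entry checked against an IP-literal endpoint, will generally not match. This can be checked empirically with an explicit httpproxy.Config; the proxy and destination values below mirror the test setup above:

package main

import (
	"fmt"
	"net/url"

	"golang.org/x/net/http/httpproxy"
)

func check(noProxy string, target *url.URL) {
	cfg := &httpproxy.Config{
		HTTPSProxy: "https://127.0.0.1:1235",
		NoProxy:    noProxy,
	}
	// The error from the proxy func only concerns a malformed proxy URL, so it is ignored here.
	proxyURL, _ := cfg.ProxyFunc()(target)
	fmt.Printf("no_proxy=%q -> proxy=%v\n", noProxy, proxyURL)
}

func main() {
	// Same destination as in the test runs above.
	target, err := url.Parse("https://200.100.1.1:4327")
	if err != nil {
		panic(err)
	}

	check("200.100.1.1", target)              // bare IP: matches, direct connection (proxy=<nil>)
	check("https://200.100.1.1", target)      // scheme included: no match, proxy is used
	check("elastic.example.internal", target) // hostname entry vs. IP endpoint: no match, proxy is used
}

Only the bare-IP form results in a direct connection here; the other two fall through to the proxy, which could look exactly like the behavior described in the report.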

@diecgia
Author

diecgia commented Jun 25, 2024

Thanks for your reply. I don't know why, in my case, it tries to send traces through the proxy. In the end, I unset the proxy variables and the collector now sends traces to Elastic without errors.

@andrzej-stencel
Member

@diecgia I'll close the issue if that's OK. Please reopen (or comment on it and mention me if you cannot reopen) if you can find a reliable reproduction scenario for this.
