
Memory leak problem with Opentelemetry Collector #29762

Closed
akiyama-naoki23-fixer opened this issue Dec 11, 2023 · 21 comments
Labels: bug (Something isn't working), processor/tailsampling (Tail sampling processor)

Comments

akiyama-naoki23-fixer commented Dec 11, 2023

Describe the bug
Memory leak problem with the OpenTelemetry Collector.

Steps to reproduce
I wasn't able to reproduce this locally, but I suspect it is related to the collector receiving a huge trace with 20,000 spans over OTLP.

What did you expect to see?
I expected memory usage to go up and down; instead, memory usage climbs continuously.

What version did you use?
opentelemetry-operator:0.37.1
tempo-distributed:1.5.4

What config did you use?

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: opentelemetry
spec:
  config: |
    connectors:
      spanmetrics:
        namespace: span.metrics

    receivers:
      # Data sources: traces, metrics, logs
      otlp:
        protocols:
          grpc:
          http:
    processors:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 75
        spike_limit_percentage: 15
      batch:
        send_batch_size: 10000
        timeout: 10s
      tail_sampling:
        policies:
          - name: drop_noisy_traces_url
            type: string_attribute
            string_attribute:
              key: http.target
              values:
                - \/health
              enabled_regex_matching: true
              invert_match: true
    exporters:
      otlp:
        endpoint: http://tempo-distributor:4317/
        tls:
          insecure: true
      logging:
        loglevel: debug
      prometheus:
        enable_open_metrics: true
        endpoint: 0.0.0.0:8889
        resource_to_telemetry_conversion:
          enabled: true
      loki:
        endpoint: http://loki-gateway.loki/loki/api/v1/push
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch, tail_sampling]
          exporters: [otlp, spanmetrics]
        metrics:
          receivers: [otlp, spanmetrics]
          processors: [memory_limiter, batch]
          exporters: [prometheus]
        logs:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [loki]

Environment
OS: AKS Ubuntu Linux
Compiler: .NET 6.0 dotnet-autoinstrumentation

akiyama-naoki23-fixer added the bug (Something isn't working) label on Dec 11, 2023
atoulme changed the title from "Memory leak problem with Opentelemetry Collecotor" to "Memory leak problem with Opentelemetry Collector" on Dec 11, 2023
atoulme (Contributor) commented Dec 11, 2023

Are you running with some memory limit in place? It's likely you need to use GOMEMLIMIT or equivalent, and you can also run the pprof extension to capture memory usage.

It looks like this bug is about the tail sampling processor, is that correct?
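
For reference, here is a minimal sketch of what that could look like on the OpenTelemetryCollector resource (assuming your operator version passes spec.resources and spec.env through to the collector container; the sizes below are placeholders, not recommendations):

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
      namespace: opentelemetry
    spec:
      # A container memory limit gives memory_limiter and GOMEMLIMIT a known ceiling.
      resources:
        limits:
          memory: 2Gi
      env:
        # Soft memory limit for the Go runtime, kept below the container limit.
        - name: GOMEMLIMIT
          value: "1600MiB"
      config: |
        extensions:
          pprof:
            endpoint: 0.0.0.0:1777
        service:
          extensions: [pprof]
          # ... existing pipelines unchanged ...

With the pprof extension enabled, a heap profile can then be pulled from port 1777 at /debug/pprof/heap on the collector pod.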

akiyama-naoki23-fixer (Author) commented

@atoulme
I have not set a memory limit on the OpenTelemetry Collector pod, but the memory_limiter processor is configured ahead of tail_sampling, as shown in the config above.

mx-psi (Member) commented Dec 11, 2023

@akiyama-naoki23-fixer can you provide us with a profile of your running Collector? You can use the pprof extension as @atoulme mentioned

akiyama-naoki23-fixer (Author) commented

@mx-psi @atoulme
It's not from pprof, but here is the memory usage graph from Prometheus.
(screenshot: Prometheus memory usage graph)

atoulme (Contributor) commented Dec 12, 2023

What is the memory available on the pod?

akiyama-naoki23-fixer (Author) commented

I have not set a memory limit on the OTel collector pod, so I guess it depends on the capacity of the node, but it is more than 30 GiB.
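
Worth noting: with no container limit set, memory_limiter's limit_percentage: 75 is evaluated against the total memory the process can see (here, the node's 30+ GiB), so it likely won't push back until usage is far higher than intended. A sketch of using an absolute ceiling instead (the numbers are placeholders and need sizing for the workload):

    processors:
      memory_limiter:
        check_interval: 1s
        # Absolute limits instead of a percentage of the (large) host memory.
        limit_mib: 1500
        spike_limit_mib: 300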

albertteoh (Contributor) commented

I noticed you're using the spanmetrics connector; there was a recent merge of a memory leak fix: #28847

It was just released today: https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.91.0

It might be worth upgrading the opentelemetry-operator once it's released with collector v0.91.0.
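
If waiting for an operator release is an issue, the OpenTelemetryCollector CR should also let you pin the collector image directly; a sketch, assuming your operator version accepts an image override and that the contrib distribution is what you run:

    spec:
      image: otel/opentelemetry-collector-contrib:0.91.0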

mx-psi transferred this issue from open-telemetry/opentelemetry-collector on Dec 12, 2023
mx-psi (Member) commented Dec 12, 2023

I am transferring this to contrib since the current theory is that this is related to the spanmetrics connector.

akiyama-naoki23-fixer (Author) commented

Thank you very much.
I will wait for the release of the opentelemetry-operator with collector v0.91.0.

nifrasinnovent commented

@albertteoh I have updated the spanmetrics connector, but I am still seeing data refused due to high memory usage.

2023-12-12T10:28:30.857Z error [email protected]/connector.go:235 Failed ConsumeMetrics {"kind": "connector", "name": "spanmetrics", "exporter_in_pipeline": "traces", "receiver_in_pipeline": "metrics", "error": "data refused due to high memory usage"}
github.com/open-telemetry/opentelemetry-collector-contrib/connector/spanmetricsconnector.(*connectorImp).exportMetrics
    github.com/open-telemetry/opentelemetry-collector-contrib/connector/[email protected]/connector.go:235
github.com/open-telemetry/opentelemetry-collector-contrib/connector/spanmetricsconnector.(*connectorImp).Start.func1
    github.com/open-telemetry/opentelemetry-collector-contrib/connector/[email protected]/connector.go:189

albertteoh (Contributor) commented

@albertteoh I have updated the spanmetrics connector. But still having data refused due to high memory usage.

@nifrasinnovent could you share your OTEL config please?

nifrasinnovent commented Dec 12, 2023

@albertteoh I have updated the spanmetrics connector. But still having data refused due to high memory usage.

@nifrasinnovent could you share your OTEL config please?

    nodeSelector:
      eks.amazonaws.com/nodegroup: Otlp-Worker

    nameOverride: "otlp-collector"
    fullnameOverride: "otlp-collector"
    mode: deployment

    image:
      repository: 807695269339.dkr.ecr.me-south-1.amazonaws.com/otlp-collector
      pullPolicy: Always
      tag: latest

    configMap:
      create: true

    command:
      name: /app/otlp-collector

    resources:
      limits:
        cpu: 125m
        memory: 500Mi
      requests:
        cpu: 125m
        memory: 100Mi

    ports:
      dp-metrics:        
        enabled: true
        containerPort: 8889
        servicePort: 8889
        protocol: TCP        

    config:
      receivers:
        jaeger: null
        zipkin: null
        prometheus: null 
        otlp:
          protocols:
            http:
              endpoint: 0.0.0.0:4318
            grpc:
              endpoint: 0.0.0.0:4317  

      exporters:
        logging: null      
        debug: {}
        otlphttp:
          endpoint: http://jaeger-internal-collector:4318
        otlp:
          endpoint: jaeger-internal-collector:4317
          compression: gzip 
          tls:
            insecure: true        
        prometheus:
          endpoint: 0.0.0.0:8889

      processors:
        memory_limiter:
          check_interval: 1s
          limit_percentage: 80
          spike_limit_percentage: 25

        batch: {}

      connectors:
        spanmetrics:
          histogram:
            explicit:
              buckets: [100us, 1ms, 2ms, 6ms, 10ms, 100ms, 250ms]
          dimensions:
            - name: http.method
            - name: http.status_code
          exemplars:
            enabled: true
          exclude_dimensions: []
          dimensions_cache_size: 1000
          aggregation_temporality: "AGGREGATION_TEMPORALITY_CUMULATIVE"    
          metrics_flush_interval: 15s       

      service:    
        telemetry:
          logs:
            level: "debug"       
        pipelines:
          traces:
            receivers:
              - otlp
            processors:    
              - memory_limiter
              - batch  
            exporters:
              - debug
              - otlp
              - spanmetrics
          metrics/spanmetrics:
            receivers:
              - spanmetrics
            processors:
              - memory_limiter
              - batch  
            exporters:
              - prometheus
              - debug
          metrics: null      
          logs: null

    useGOMEMLIMIT: true

albertteoh (Contributor) commented

Thanks!

There's also a known issue where exemplars were observed to use a large amount of memory and a configurable limit on exemplars was added in this PR: #29242 (not merged yet).

As an experiment to narrow down the root cause, perhaps you could try temporarily setting exemplars.enabled=false to see if that resolves the issue you're seeing?
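
Concretely, on the config shared above that experiment would only change the exemplars block of the connector (a sketch; everything else stays as it is):

    connectors:
      spanmetrics:
        histogram:
          explicit:
            buckets: [100us, 1ms, 2ms, 6ms, 10ms, 100ms, 250ms]
        dimensions:
          - name: http.method
          - name: http.status_code
        exemplars:
          # Temporarily disabled to check whether exemplar storage drives the memory growth.
          enabled: false
        dimensions_cache_size: 1000
        aggregation_temporality: "AGGREGATION_TEMPORALITY_CUMULATIVE"
        metrics_flush_interval: 15s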

nifrasinnovent commented

let me try that

nifrasinnovent commented Dec 13, 2023

@albertteoh Yes, it was related to exemplars. The OTLP collector pod has not crashed due to memory in the last 17 hours.
Thanks

HaroonSaid commented

We believe 0.91.0 still leaks. We are back to running a non-contrib distribution to reduce our risk of memory leaks.

github-actions bot commented Mar 5, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

github-actions bot added the Stale label on Mar 5, 2024
jaysonsantos commented

Hey folks, this is also something I am seeing; it happens at random times, but usually after a few days.
I have a setup with a fixed number of collectors acting as the samplers. I have tried both the memory_limiter and GOMEMLIMIT, and at some point the process either stops accepting new spans or CPU usage climbs sharply because the GC is constantly trying to free memory.
The config being used is the following:

exporters:
  logging:
    verbosity: basic
  otlp/newrelic:
    compression: gzip
    endpoint: endpoint:4317
    headers:
      api-key: token
extensions:
  health_check: null
  pprof:
    endpoint: 0.0.0.0:1777
  zpages: null
processors:
  batch:
    send_batch_size: 10000
    timeout: 10s
  batch/sampled:
    send_batch_size: 10000
    timeout: 10s
  filter/newrelic_and_otel:
    error_mode: ignore
    traces:
      span:
        - name == "TokenLinkingSubscriber.withNRToken"
  memory_limiter:
    check_interval: 5s
    limit_mib: 3800
    spike_limit_mib: 1000
  resourcedetection/system:
    detectors:
      - env
      - system
    override: false
    timeout: 2s
  tail_sampling:
    decision_wait: 60s
    expected_new_traces_per_sec: 10000
    num_traces: 50000000
    policies:
      - name: always_sample_error
        status_code:
          status_codes:
            - ERROR
        type: status_code
      - and:
          and_sub_policy:
            - name: routes
              string_attribute:
                enabled_regex_matching: true
                key: http.route
                values:
                  - /health
                  - /(actuator|sys)/health
              type: string_attribute
            - name: probabilistic-policy
              probabilistic:
                sampling_percentage: 0.1
              type: probabilistic
        name: health_endpoints
        type: and
      - name: sample_10_percent
        probabilistic:
          sampling_percentage: 10
        type: probabilistic
      - latency:
          threshold_ms: 3000
        name: slow-requests
        type: latency
receivers:
  otlp:
    protocols:
      grpc: null
      http: null
service:
  extensions:
    - zpages
    - health_check
    - pprof
  pipelines:
    logs/1:
      exporters:
        - otlp/newrelic
      processors:
        - resourcedetection/system
        - batch
      receivers:
        - otlp
    metrics/1:
      exporters:
        - otlp/newrelic
        - logging
      processors:
        - resourcedetection/system
        - batch
      receivers:
        - otlp
    traces/1:
      exporters:
        - otlp/newrelic
        - logging
      processors:
        - filter/newrelic_and_otel
        - resourcedetection/system
        - tail_sampling
        - batch/sampled
      receivers:
        - otlp
  telemetry:
    metrics:
      address: 0.0.0.0:8888

Heap profile captured with the pprof extension:
Fetching profile over HTTP from http://localhost:1777/debug/pprof/heap
Saved profile in /Users/jayson.reis/pprof/pprof.otelcol-contrib.alloc_objects.alloc_space.inuse_objects.inuse_space.016.pb.gz
File: otelcol-contrib
Type: inuse_space
Time: Apr 2, 2024 at 5:15pm (CEST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) tree
Showing nodes accounting for 3092.52MB, 97.48% of 3172.59MB total
Dropped 214 nodes (cum <= 15.86MB)
----------------------------------------------------------+-------------
      flat  flat%   sum%        cum   cum%   calls calls% + context
----------------------------------------------------------+-------------
                                          763.45MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.createTracesProcessor
  762.95MB 24.05% 24.05%   763.45MB 24.06%                | github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.newTracesProcessor
----------------------------------------------------------+-------------
                                         2024.57MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.(*tailSamplingSpanProcessor).ConsumeTraces
  753.56MB 23.75% 47.80%  2024.57MB 63.81%                | github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.(*tailSamplingSpanProcessor).processTraces
                                         1267.50MB 62.61% |   sync.(*Map).LoadOrStore
----------------------------------------------------------+-------------
                                             663MB   100% |   sync.(*Map).LoadOrStore
     663MB 20.90% 68.70%      663MB 20.90%                | sync.(*Map).dirtyLocked
----------------------------------------------------------+-------------
                                         1267.50MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.(*tailSamplingSpanProcessor).processTraces
  557.50MB 17.57% 86.27%  1267.50MB 39.95%                | sync.(*Map).LoadOrStore
                                             663MB 52.31% |   sync.(*Map).dirtyLocked
                                              47MB  3.71% |   sync.newEntry (inline)
----------------------------------------------------------+-------------
                                             200MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.(*tailSamplingSpanProcessor).samplingPolicyOnTick (inline)
  144.50MB  4.55% 90.83%      200MB  6.30%                | go.opentelemetry.io/collector/pdata/ptrace.NewTraces
                                           55.50MB 27.75% |   go.opentelemetry.io/collector/pdata/ptrace.newTraces (inline)
----------------------------------------------------------+-------------
                                              63MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/pkg/ottl/contexts/ottlspan.NewTransformContext (inline)
      63MB  1.99% 92.81%       63MB  1.99%                | go.opentelemetry.io/collector/pdata/pcommon.NewMap
----------------------------------------------------------+-------------
                                           55.50MB   100% |   go.opentelemetry.io/collector/pdata/ptrace.NewTraces (inline)
   55.50MB  1.75% 94.56%    55.50MB  1.75%                | go.opentelemetry.io/collector/pdata/ptrace.newTraces
----------------------------------------------------------+-------------
                                              47MB   100% |   sync.(*Map).LoadOrStore (inline)
      47MB  1.48% 96.04%       47MB  1.48%                | sync.newEntry
----------------------------------------------------------+-------------
                                              37MB 82.22% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.init.Upsert.func2
                                               8MB 17.78% |   go.opencensus.io/tag.(*mutator).Mutate
      45MB  1.42% 97.46%       45MB  1.42%                | go.opencensus.io/tag.createMetadatas
----------------------------------------------------------+-------------
                                              46MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.(*tailSamplingSpanProcessor).samplingPolicyOnTick
    0.50MB 0.016% 97.48%       46MB  1.45%                | github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.(*tailSamplingSpanProcessor).makeDecision
                                              45MB 97.83% |   go.opencensus.io/stats.RecordWithTags
----------------------------------------------------------+-------------
                                          252.99MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/timeutils.(*PolicyTicker).Start.func1 (inline)
         0     0% 97.48%   252.99MB  7.97%                | github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/timeutils.(*PolicyTicker).OnTick
                                          252.99MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.(*tailSamplingSpanProcessor).samplingPolicyOnTick
----------------------------------------------------------+-------------
         0     0% 97.48%   252.99MB  7.97%                | github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/timeutils.(*PolicyTicker).Start.func1
                                          252.99MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/timeutils.(*PolicyTicker).OnTick (inline)
----------------------------------------------------------+-------------
                                              63MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterSpanProcessor).processTraces.func1.1.1 (inline)
         0     0% 97.48%       63MB  1.99%                | github.com/open-telemetry/opentelemetry-collector-contrib/pkg/ottl/contexts/ottlspan.NewTransformContext
                                              63MB   100% |   go.opentelemetry.io/collector/pdata/pcommon.NewMap (inline)
----------------------------------------------------------+-------------
                                              63MB   100% |   go.opentelemetry.io/collector/processor/processorhelper.NewTracesProcessor.func1
         0     0% 97.48%       63MB  1.99%                | github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterSpanProcessor).processTraces
                                              63MB   100% |   go.opentelemetry.io/collector/pdata/ptrace.ResourceSpansSlice.RemoveIf
----------------------------------------------------------+-------------
                                              63MB   100% |   go.opentelemetry.io/collector/pdata/ptrace.ResourceSpansSlice.RemoveIf
         0     0% 97.48%       63MB  1.99%                | github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterSpanProcessor).processTraces.func1
                                              63MB   100% |   go.opentelemetry.io/collector/pdata/ptrace.ScopeSpansSlice.RemoveIf
----------------------------------------------------------+-------------
                                              63MB   100% |   go.opentelemetry.io/collector/pdata/ptrace.ScopeSpansSlice.RemoveIf
         0     0% 97.48%       63MB  1.99%                | github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterSpanProcessor).processTraces.func1.1
                                              63MB   100% |   go.opentelemetry.io/collector/pdata/ptrace.SpanSlice.RemoveIf
----------------------------------------------------------+-------------
                                              63MB   100% |   go.opentelemetry.io/collector/pdata/ptrace.SpanSlice.RemoveIf
         0     0% 97.48%       63MB  1.99%                | github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterSpanProcessor).processTraces.func1.1.1
                                              63MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/pkg/ottl/contexts/ottlspan.NewTransformContext (inline)
----------------------------------------------------------+-------------
                                         2024.57MB   100% |   go.opentelemetry.io/collector/processor/processorhelper.NewTracesProcessor.func1
         0     0% 97.48%  2024.57MB 63.81%                | github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.(*tailSamplingSpanProcessor).ConsumeTraces
                                         2024.57MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.(*tailSamplingSpanProcessor).processTraces
----------------------------------------------------------+-------------
                                          252.99MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/timeutils.(*PolicyTicker).OnTick
         0     0% 97.48%   252.99MB  7.97%                | github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.(*tailSamplingSpanProcessor).samplingPolicyOnTick
                                             200MB 79.06% |   go.opentelemetry.io/collector/pdata/ptrace.NewTraces (inline)
                                              46MB 18.18% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.(*tailSamplingSpanProcessor).makeDecision
----------------------------------------------------------+-------------
                                          763.45MB   100% |   go.opentelemetry.io/collector/processor.CreateTracesFunc.CreateTracesProcessor
         0     0% 97.48%   763.45MB 24.06%                | github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.createTracesProcessor
                                          763.45MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.newTracesProcessor
----------------------------------------------------------+-------------
                                              37MB   100% |   go.opencensus.io/tag.(*mutator).Mutate
         0     0% 97.48%       37MB  1.17%                | github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.init.Upsert.func2
                                              37MB   100% |   go.opencensus.io/tag.createMetadatas
----------------------------------------------------------+-------------
                                          764.95MB   100% |   main.runInteractive
         0     0% 97.48%   764.95MB 24.11%                | github.com/spf13/cobra.(*Command).Execute
                                          764.95MB   100% |   github.com/spf13/cobra.(*Command).ExecuteC
----------------------------------------------------------+-------------
                                          764.95MB   100% |   github.com/spf13/cobra.(*Command).Execute
         0     0% 97.48%   764.95MB 24.11%                | github.com/spf13/cobra.(*Command).ExecuteC
                                          764.95MB   100% |   github.com/spf13/cobra.(*Command).execute
----------------------------------------------------------+-------------
                                          764.95MB   100% |   github.com/spf13/cobra.(*Command).ExecuteC
         0     0% 97.48%   764.95MB 24.11%                | github.com/spf13/cobra.(*Command).execute
                                          764.95MB   100% |   go.opentelemetry.io/collector/otelcol.NewCommand.func1
----------------------------------------------------------+-------------
                                              45MB   100% |   go.opencensus.io/stats.RecordWithTags
         0     0% 97.48%       45MB  1.42%                | go.opencensus.io/stats.RecordWithOptions
                                              45MB   100% |   go.opencensus.io/tag.New
----------------------------------------------------------+-------------
                                              45MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.(*tailSamplingSpanProcessor).makeDecision
         0     0% 97.48%       45MB  1.42%                | go.opencensus.io/stats.RecordWithTags
                                              45MB   100% |   go.opencensus.io/stats.RecordWithOptions
----------------------------------------------------------+-------------
                                              45MB   100% |   go.opencensus.io/tag.New
         0     0% 97.48%       45MB  1.42%                | go.opencensus.io/tag.(*mutator).Mutate
                                              37MB 82.22% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.init.Upsert.func2
                                               8MB 17.78% |   go.opencensus.io/tag.createMetadatas
----------------------------------------------------------+-------------
                                              45MB   100% |   go.opencensus.io/stats.RecordWithOptions
         0     0% 97.48%       45MB  1.42%                | go.opencensus.io/tag.New
                                              45MB   100% |   go.opencensus.io/tag.(*mutator).Mutate
----------------------------------------------------------+-------------
                                         2088.07MB   100% |   go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1._TraceService_Export_Handler
         0     0% 97.48%  2088.07MB 65.82%                | go.opentelemetry.io/collector/config/configgrpc.(*GRPCServerSettings).toServerOption.enhanceWithClientInformation.func9
                                         2088.07MB   100% |   go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1._TraceService_Export_Handler.func1
----------------------------------------------------------+-------------
                                         2088.07MB   100% |   go.opentelemetry.io/collector/internal/fanoutconsumer.(*tracesConsumer).ConsumeTraces
                                         2025.07MB 96.98% |   go.opentelemetry.io/collector/processor/processorhelper.NewTracesProcessor.func1
         0     0% 97.48%  2088.07MB 65.82%                | go.opentelemetry.io/collector/consumer.ConsumeTracesFunc.ConsumeTraces
                                         2088.07MB   100% |   go.opentelemetry.io/collector/processor/processorhelper.NewTracesProcessor.func1
----------------------------------------------------------+-------------
                                         2088.07MB   100% |   go.opentelemetry.io/collector/receiver/otlpreceiver/internal/trace.(*Receiver).Export
         0     0% 97.48%  2088.07MB 65.82%                | go.opentelemetry.io/collector/internal/fanoutconsumer.(*tracesConsumer).ConsumeTraces
                                         2088.07MB   100% |   go.opentelemetry.io/collector/consumer.ConsumeTracesFunc.ConsumeTraces
----------------------------------------------------------+-------------
                                          764.95MB   100% |   go.opentelemetry.io/collector/otelcol.NewCommand.func1
         0     0% 97.48%   764.95MB 24.11%                | go.opentelemetry.io/collector/otelcol.(*Collector).Run
                                          764.95MB   100% |   go.opentelemetry.io/collector/otelcol.(*Collector).setupConfigurationComponents
----------------------------------------------------------+-------------
                                          764.95MB   100% |   go.opentelemetry.io/collector/otelcol.(*Collector).Run
         0     0% 97.48%   764.95MB 24.11%                | go.opentelemetry.io/collector/otelcol.(*Collector).setupConfigurationComponents
                                          764.95MB   100% |   go.opentelemetry.io/collector/service.New
----------------------------------------------------------+-------------
                                          764.95MB   100% |   github.com/spf13/cobra.(*Command).execute
         0     0% 97.48%   764.95MB 24.11%                | go.opentelemetry.io/collector/otelcol.NewCommand.func1
                                          764.95MB   100% |   go.opentelemetry.io/collector/otelcol.(*Collector).Run
----------------------------------------------------------+-------------
                                         2100.32MB   100% |   google.golang.org/grpc.(*Server).processUnaryRPC
         0     0% 97.48%  2100.32MB 66.20%                | go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1._TraceService_Export_Handler
                                         2088.07MB 99.42% |   go.opentelemetry.io/collector/config/configgrpc.(*GRPCServerSettings).toServerOption.enhanceWithClientInformation.func9
----------------------------------------------------------+-------------
                                         2088.07MB   100% |   go.opentelemetry.io/collector/config/configgrpc.(*GRPCServerSettings).toServerOption.enhanceWithClientInformation.func9
         0     0% 97.48%  2088.07MB 65.82%                | go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1._TraceService_Export_Handler.func1
                                         2088.07MB   100% |   go.opentelemetry.io/collector/pdata/ptrace/ptraceotlp.rawTracesServer.Export
----------------------------------------------------------+-------------
                                              63MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterSpanProcessor).processTraces
         0     0% 97.48%       63MB  1.99%                | go.opentelemetry.io/collector/pdata/ptrace.ResourceSpansSlice.RemoveIf
                                              63MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterSpanProcessor).processTraces.func1
----------------------------------------------------------+-------------
                                              63MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterSpanProcessor).processTraces.func1
         0     0% 97.48%       63MB  1.99%                | go.opentelemetry.io/collector/pdata/ptrace.ScopeSpansSlice.RemoveIf
                                              63MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterSpanProcessor).processTraces.func1.1
----------------------------------------------------------+-------------
                                              63MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterSpanProcessor).processTraces.func1.1
         0     0% 97.48%       63MB  1.99%                | go.opentelemetry.io/collector/pdata/ptrace.SpanSlice.RemoveIf
                                              63MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterSpanProcessor).processTraces.func1.1.1
----------------------------------------------------------+-------------
                                         2088.07MB   100% |   go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1._TraceService_Export_Handler.func1
         0     0% 97.48%  2088.07MB 65.82%                | go.opentelemetry.io/collector/pdata/ptrace/ptraceotlp.rawTracesServer.Export
                                         2088.07MB   100% |   go.opentelemetry.io/collector/receiver/otlpreceiver/internal/trace.(*Receiver).Export
----------------------------------------------------------+-------------
                                          763.45MB   100% |   go.opentelemetry.io/collector/service/internal/graph.(*processorNode).buildComponent
         0     0% 97.48%   763.45MB 24.06%                | go.opentelemetry.io/collector/processor.(*Builder).CreateTraces
                                          763.45MB   100% |   go.opentelemetry.io/collector/processor.CreateTracesFunc.CreateTracesProcessor
----------------------------------------------------------+-------------
                                          763.45MB   100% |   go.opentelemetry.io/collector/processor.(*Builder).CreateTraces
         0     0% 97.48%   763.45MB 24.06%                | go.opentelemetry.io/collector/processor.CreateTracesFunc.CreateTracesProcessor
                                          763.45MB   100% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.createTracesProcessor
----------------------------------------------------------+-------------
                                         2088.07MB   100% |   go.opentelemetry.io/collector/consumer.ConsumeTracesFunc.ConsumeTraces
         0     0% 97.48%  2088.07MB 65.82%                | go.opentelemetry.io/collector/processor/processorhelper.NewTracesProcessor.func1
                                         2025.07MB 96.98% |   go.opentelemetry.io/collector/consumer.ConsumeTracesFunc.ConsumeTraces
                                         2024.57MB 96.96% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/tailsamplingprocessor.(*tailSamplingSpanProcessor).ConsumeTraces
                                              63MB  3.02% |   github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterSpanProcessor).processTraces
----------------------------------------------------------+-------------
                                         2088.07MB   100% |   go.opentelemetry.io/collector/pdata/ptrace/ptraceotlp.rawTracesServer.Export
         0     0% 97.48%  2088.07MB 65.82%                | go.opentelemetry.io/collector/receiver/otlpreceiver/internal/trace.(*Receiver).Export
                                         2088.07MB   100% |   go.opentelemetry.io/collector/internal/fanoutconsumer.(*tracesConsumer).ConsumeTraces
----------------------------------------------------------+-------------
                                          764.95MB   100% |   go.opentelemetry.io/collector/service.New
         0     0% 97.48%   764.95MB 24.11%                | go.opentelemetry.io/collector/service.(*Service).initExtensionsAndPipeline
                                          764.95MB   100% |   go.opentelemetry.io/collector/service/internal/graph.Build
----------------------------------------------------------+-------------
                                          764.95MB   100% |   go.opentelemetry.io/collector/otelcol.(*Collector).setupConfigurationComponents
         0     0% 97.48%   764.95MB 24.11%                | go.opentelemetry.io/collector/service.New
                                          764.95MB   100% |   go.opentelemetry.io/collector/service.(*Service).initExtensionsAndPipeline
----------------------------------------------------------+-------------
                                          764.95MB   100% |   go.opentelemetry.io/collector/service/internal/graph.Build
         0     0% 97.48%   764.95MB 24.11%                | go.opentelemetry.io/collector/service/internal/graph.(*Graph).buildComponents
                                          763.45MB 99.80% |   go.opentelemetry.io/collector/service/internal/graph.(*processorNode).buildComponent
----------------------------------------------------------+-------------
                                          763.45MB   100% |   go.opentelemetry.io/collector/service/internal/graph.(*Graph).buildComponents
         0     0% 97.48%   763.45MB 24.06%                | go.opentelemetry.io/collector/service/internal/graph.(*processorNode).buildComponent
                                          763.45MB   100% |   go.opentelemetry.io/collector/processor.(*Builder).CreateTraces
----------------------------------------------------------+-------------
                                          764.95MB   100% |   go.opentelemetry.io/collector/service.(*Service).initExtensionsAndPipeline
         0     0% 97.48%   764.95MB 24.11%                | go.opentelemetry.io/collector/service/internal/graph.Build
                                          764.95MB   100% |   go.opentelemetry.io/collector/service/internal/graph.(*Graph).buildComponents
----------------------------------------------------------+-------------
                                         2102.36MB   100% |   google.golang.org/grpc.(*Server).serveStreams.func1.1
         0     0% 97.48%  2102.36MB 66.27%                | google.golang.org/grpc.(*Server).handleStream
                                         2102.36MB   100% |   google.golang.org/grpc.(*Server).processUnaryRPC
----------------------------------------------------------+-------------
                                         2102.36MB   100% |   google.golang.org/grpc.(*Server).handleStream
         0     0% 97.48%  2102.36MB 66.27%                | google.golang.org/grpc.(*Server).processUnaryRPC
                                         2100.32MB 99.90% |   go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1._TraceService_Export_Handler
----------------------------------------------------------+-------------
         0     0% 97.48%  2102.36MB 66.27%                | google.golang.org/grpc.(*Server).serveStreams.func1.1
                                         2102.36MB   100% |   google.golang.org/grpc.(*Server).handleStream
----------------------------------------------------------+-------------
                                          764.45MB 99.93% |   runtime.main
         0     0% 97.48%   764.95MB 24.11%                | main.main
                                          764.95MB   100% |   main.run (inline)
----------------------------------------------------------+-------------
                                          764.95MB   100% |   main.main (inline)
         0     0% 97.48%   764.95MB 24.11%                | main.run
                                          764.95MB   100% |   main.runInteractive
----------------------------------------------------------+-------------
                                          764.95MB   100% |   main.run
         0     0% 97.48%   764.95MB 24.11%                | main.runInteractive
                                          764.95MB   100% |   github.com/spf13/cobra.(*Command).Execute
----------------------------------------------------------+-------------
                                           36.95MB   100% |   runtime.main (inline)
         0     0% 97.48%    36.95MB  1.16%                | runtime.doInit
                                           36.95MB   100% |   runtime.doInit1
----------------------------------------------------------+-------------
                                           36.95MB   100% |   runtime.doInit
         0     0% 97.48%    36.95MB  1.16%                | runtime.doInit1
----------------------------------------------------------+-------------
         0     0% 97.48%   801.40MB 25.26%                | runtime.main
                                          764.45MB 95.39% |   main.main
                                           36.95MB  4.61% |   runtime.doInit (inline)
----------------------------------------------------------+-------------
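
Reading the profile against the config above: roughly two thirds of the in-use space sits under tailSamplingSpanProcessor.processTraces and the sync.Map it stores trace entries in, and another ~763 MB is allocated up front in newTracesProcessor, which appears consistent with num_traces: 50000000 (num_traces bounds how many traces the processor keeps in memory while waiting out decision_wait). As an experiment it may be worth shrinking that window to something closer to the actual traffic; a sketch, with placeholder numbers of roughly expected_new_traces_per_sec * decision_wait plus headroom:

    processors:
      tail_sampling:
        decision_wait: 30s
        expected_new_traces_per_sec: 10000
        # ~10000 traces/s * 30 s decision window, with ~2x headroom,
        # instead of the 50,000,000 ceiling in the config above.
        num_traces: 600000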

mx-psi added the processor/tailsampling (Tail sampling processor) label on Apr 2, 2024
github-actions bot commented Apr 2, 2024

Pinging code owners for processor/tailsampling: @jpkrohling. See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot removed the Stale label on Apr 3, 2024
akiyama-naoki23-fixer (Author) commented

Thanks, everyone.
Unfortunately, the project I've been working on was closed, so I'll leave it to you whether to close this issue.

mx-psi (Member) commented Apr 18, 2024

@akiyama-naoki23-fixer Sorry to hear that, thank you for taking the time to report the issue and answer our questions in the first place. I am going to close this as wontfix since we won't be able to get more information about this specific case; if someone reading this finds themselves in a similar situation, please file a new issue, thanks!
