
Collector agent does not return spanmetrics #30762

Closed
P1ton opened this issue Jan 24, 2024 · 9 comments

Comments


P1ton commented Jan 24, 2024

Component(s)

receiver/prometheus

Describe the issue you're reporting

Hello,
I have a problem: my collector agents do not expose span metrics.
I'm deploying with the Helm chart in daemonset mode.
Version: 0.92.0
Config:

opentelemetry-collector:
  nameOverride: "otel"
  mode: "daemonset"
  resources:
    requests:
      cpu: 50m
      memory: 256Mi
    limits:
      cpu: 50m
      memory: 256Mi
  service:
    enabled: true
  ports:
    metrics:
      enabled: true
      containerPort: 8889
      servicePort: 8889
  config:
    receivers:
      jaeger:
        protocols:
          thrift_compact:
            endpoint: 0.0.0.0:6831
          grpc:
            endpoint: 0.0.0.0:14250
      otlp/spanmetrics:
        protocols:
          grpc:
            endpoint: "localhost:65535"
    exporters:
      prometheus:
        endpoint: "0.0.0.0:8889"
      otlp:
        endpoint: "jaeger-development-collector:4317"
        tls:
          insecure: true
    processors:
      batch:
      spanmetrics:
        metrics_exporter: prometheus
    service:
      pipelines:
        traces:
          receivers: [ jaeger ]
          processors: [ spanmetrics, batch ]
          exporters: [ otlp ]
        metrics/spanmetrics:
          receivers: [ otlp/spanmetrics ]
          exporters: [ prometheus ]
  podMonitor:
    enabled: true
    metricsEndpoints:
      - port: metrics
    extraLabels:
      release: prometheus-operator

Prometheus scrapes every pod's :8889/metrics endpoint, but there are no metrics there. I also tried curl :8889/metrics directly; it returns a 200 but the body still contains no metrics. In the agent logs:

2024-01-24T20:54:31.718Z	info	[email protected]/telemetry.go:86	Setting up own telemetry...
2024-01-24T20:54:31.719Z	info	[email protected]/telemetry.go:159	Serving metrics	{"address": "10.233.86.112:8888", "level": "Basic"}
2024-01-24T20:54:31.720Z	info	[email protected]/exporter.go:275	Development component. May change in the future.	{"kind": "exporter", "data_type": "logs", "name": "debug"}
2024-01-24T20:54:31.720Z	info	[email protected]/exporter.go:275	Development component. May change in the future.	{"kind": "exporter", "data_type": "metrics", "name": "debug"}
2024-01-24T20:54:31.721Z	info	[email protected]/memorylimiter.go:118	Using percentage memory limiter	{"kind": "processor", "name": "memory_limiter", "pipeline": "metrics", "total_memory_mib": 256, "limit_percentage": 80, "spike_limit_percentage": 25}
2024-01-24T20:54:31.721Z	info	[email protected]/memorylimiter.go:82	Memory limiter configured	{"kind": "processor", "name": "memory_limiter", "pipeline": "metrics", "limit_mib": 204, "spike_limit_mib": 64, "check_interval": 5}
2024-01-24T20:54:31.721Z	info	[email protected]/processor.go:289	Deprecated component. Will be removed in future releases.	{"kind": "processor", "name": "spanmetrics", "pipeline": "traces"}
2024-01-24T20:54:31.721Z	info	[email protected]/processor.go:128	Building spanmetrics	{"kind": "processor", "name": "spanmetrics", "pipeline": "traces"}
2024-01-24T20:54:31.721Z	warn	[email protected]/factory.go:48	jaeger receiver will deprecate Thrift-gen and replace it with Proto-gen to be compatbible to jaeger 1.42.0 and higher. See https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/18485 for more details.	{"kind": "receiver", "name": "jaeger", "data_type": "traces"}
2024-01-24T20:54:31.817Z	info	[email protected]/service.go:151	Starting otelcol-contrib...	{"Version": "0.92.0", "NumCPU": 4}
2024-01-24T20:54:31.817Z	info	extensions/extensions.go:34	Starting extensions...
2024-01-24T20:54:31.817Z	info	extensions/extensions.go:37	Extension is starting...	{"kind": "extension", "name": "health_check"}
2024-01-24T20:54:31.817Z	info	[email protected]/healthcheckextension.go:35	Starting health_check extension	{"kind": "extension", "name": "health_check", "config": {"Endpoint":"10.233.86.112:13133","TLSSetting":null,"CORS":null,"Auth":null,"MaxRequestBodySize":0,"IncludeMetadata":false,"ResponseHeaders":null,"Path":"/","ResponseBody":null,"CheckCollectorPipeline":{"Enabled":false,"Interval":"5m","ExporterFailureThreshold":5}}}
2024-01-24T20:54:31.818Z	info	extensions/extensions.go:52	Extension started.	{"kind": "extension", "name": "health_check"}
2024-01-24T20:54:31.818Z	info	[email protected]/processor.go:171	Starting spanmetricsprocessor	{"kind": "processor", "name": "spanmetrics", "pipeline": "traces"}
2024-01-24T20:54:31.818Z	info	[email protected]/processor.go:191	Found exporter	{"kind": "processor", "name": "spanmetrics", "pipeline": "traces", "spanmetrics-exporter": "prometheus"}
2024-01-24T20:54:31.819Z	warn	[email protected]/warning.go:40	Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks	{"kind": "exporter", "data_type": "metrics", "name": "prometheus", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2024-01-24T20:54:31.819Z	warn	[email protected]/warning.go:40	Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks	{"kind": "receiver", "name": "jaeger", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2024-01-24T20:54:31.819Z	info	[email protected]/otlp.go:102	Starting GRPC server	{"kind": "receiver", "name": "otlp/spanmetrics", "data_type": "metrics", "endpoint": "localhost:65535"}
2024-01-24T20:54:31.917Z	info	[email protected]/otlp.go:102	Starting GRPC server	{"kind": "receiver", "name": "otlp", "data_type": "metrics", "endpoint": "10.233.86.112:4317"}
2024-01-24T20:54:31.917Z	info	[email protected]/otlp.go:152	Starting HTTP server	{"kind": "receiver", "name": "otlp", "data_type": "metrics", "endpoint": "10.233.86.112:4318"}
2024-01-24T20:54:31.917Z	info	[email protected]/metrics_receiver.go:231	Scrape job added	{"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "opentelemetry-collector"}
2024-01-24T20:54:31.917Z	info	[email protected]/metrics_receiver.go:240	Starting discovery manager	{"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2024-01-24T20:54:31.918Z	info	[email protected]/metrics_receiver.go:282	Starting scrape manager	{"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2024-01-24T20:54:31.917Z	info	healthcheck/handler.go:132	Health Check state change	{"kind": "extension", "name": "health_check", "status": "ready"}
2024-01-24T20:54:31.918Z	info	[email protected]/service.go:177	Everything is ready. Begin running and processing data.
2024-01-24T20:54:45.160Z	info	MetricsExporter	{"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 14, "data points": 14}
2024-01-24T20:54:55.186Z	info	MetricsExporter	{"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 26, "data points": 28}
2024-01-24T20:55:05.218Z	info	MetricsExporter	{"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 26, "data points": 28}
2024-01-24T20:55:15.248Z	info	MetricsExporter	{"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 26, "data points": 28}
2024-01-24T20:55:25.280Z	info	MetricsExporter	{"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 26, "data points": 28}

Traces do show up in the Jaeger UI. What could the problem be?
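
For reference, this is roughly how the :8889 endpoint can be checked from outside the pod (a sketch with a placeholder pod name; an empty body with an HTTP 200 means the exporter is running but has received no metrics):

kubectl port-forward <otel-agent-pod> 8889:8889 &
curl -s https://localhost:8889/metrics | grep -E 'calls|latency'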

P1ton added the needs triage label Jan 24, 2024
github-actions bot added the receiver/prometheus label Jan 24, 2024
github-actions bot (Contributor) commented:

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

dashpole (Contributor) commented:

Can you share the collector config that is generated, rather than the helm config?
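
For instance (a sketch; the ConfigMap name depends on the Helm release name and nameOverride, and the chart reference assumes the repo alias open-telemetry):

kubectl get configmap <release-name>-otel-agent -o yaml
# or render it locally from the values file without touching the cluster:
helm template <release-name> open-telemetry/opentelemetry-collector -f values.yaml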


P1ton commented Jan 25, 2024

Can you share the collector config that is generated, rather than the helm config?

My ConfigMap:

  relay: |
    exporters:
      debug: {}
      logging: {}
      otlp:
        endpoint: jaeger-development-collector:4317
        tls:
          insecure: true
      prometheus:
        endpoint: 0.0.0.0:8889
    extensions:
      health_check:
        endpoint: ${env:MY_POD_IP}:13133
    processors:
      batch: {}
      memory_limiter:
        check_interval: 5s
        limit_percentage: 80
        spike_limit_percentage: 25
      spanmetrics:
        metrics_exporter: prometheus
    receivers:
      jaeger:
        protocols:
          grpc:
            endpoint: 0.0.0.0:14250
          thrift_compact:
            endpoint: 0.0.0.0:6831
          thrift_http:
            endpoint: ${env:MY_POD_IP}:14268
      otlp:
        protocols:
          grpc:
            endpoint: ${env:MY_POD_IP}:4317
          http:
            endpoint: ${env:MY_POD_IP}:4318
      otlp/spanmetrics:
        protocols:
          grpc:
            endpoint: localhost:65535
      prometheus:
        config:
          scrape_configs:
          - job_name: opentelemetry-collector
            scrape_interval: 10s
            static_configs:
            - targets:
              - ${env:MY_POD_IP}:8888
      zipkin:
        endpoint: ${env:MY_POD_IP}:9411
    service:
      extensions:
      - health_check
      pipelines:
        logs:
          exporters:
          - debug
          processors:
          - memory_limiter
          - batch
          receivers:
          - otlp
        metrics:
          exporters:
          - debug
          processors:
          - memory_limiter
          - batch
          receivers:
          - otlp
          - prometheus
        metrics/spanmetrics:
          exporters:
          - prometheus
          receivers:
          - otlp/spanmetrics
        traces:
          exporters:
          - otlp
          processors:
          - spanmetrics
          - batch
          receivers:
          - jaeger
      telemetry:
        metrics:
          address: ${env:MY_POD_IP}:8888
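
As a side note, the startup log above flags the spanmetrics processor as deprecated; its replacement is the spanmetrics connector, which also removes the need for the dummy otlp/spanmetrics receiver on localhost:65535. A rough sketch of the equivalent wiring (defaults and option names may differ between collector versions):

connectors:
  spanmetrics: {}
service:
  pipelines:
    traces:
      receivers: [ jaeger ]
      processors: [ batch ]
      exporters: [ otlp, spanmetrics ]   # the connector consumes traces on the exporter side...
    metrics/spanmetrics:
      receivers: [ spanmetrics ]         # ...and emits the generated metrics on the receiver side
      exporters: [ prometheus ]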

dashpole (Contributor) commented:

Prometheus scrapes every pod's :8889/metrics endpoint, but there are no metrics there. I also tried curl :8889/metrics directly; it returns a 200 but the body still contains no metrics. In the agent logs:

Are you asking about the prometheus exporter, or the prometheus receiver?


P1ton commented Jan 25, 2024

Prometheus scrapes every pod's :8889/metrics endpoint, but there are no metrics there. I also tried curl :8889/metrics directly; it returns a 200 but the body still contains no metrics. In the agent logs:

Are you asking about the prometheus exporter, or the prometheus receiver?

I'm asking about the prometheus exporter. I expect https://<pod-ip>:8889/metrics to expose the trace-derived metrics so that I can scrape it as a Prometheus target.
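
When span metrics are actually flowing, the exporter body contains series roughly like the following (purely illustrative values; the deprecated processor emits calls_total and latency_* series, while metric names differ slightly with the newer connector):

calls_total{service_name="checkout",operation="HTTP GET",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 17
latency_bucket{service_name="checkout",operation="HTTP GET",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET",le="100"} 12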

dashpole added the exporter/prometheus label and removed the receiver/prometheus label Jan 25, 2024
github-actions bot (Contributor) commented:

Pinging code owners for exporter/prometheus: @Aneurysm9. See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot (Contributor) commented:

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label Mar 26, 2024
atoulme removed the needs triage label Apr 5, 2024
github-actions bot removed the Stale label Apr 6, 2024

github-actions bot commented Jun 6, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label Jun 6, 2024

github-actions bot commented Aug 5, 2024

This issue has been closed as inactive because it has been stale for 120 days with no activity.

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Aug 5, 2024