
signalfxexporter: Prometheus Summary Type Not Implemented #2944

Closed

alanbrent opened this issue Mar 30, 2021 · 1 comment
Labels: bug (Something isn't working)
Describe the bug

The Prometheus summary metric type is not implemented in the signalfxexporter.

See converter.go, which has no case for the summary type.
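For context on what a fix would need to emit: a summary carries a count, a sum, and a set of quantile values. Below is a minimal sketch of one plausible mapping, using simplified stand-in types rather than the actual pdata or SignalFx protobuf APIs (count and sum as cumulative counters, each quantile as a gauge with a quantile dimension). It is illustrative only, not the converter's real code.

package main

import "fmt"

// Simplified stand-in types for illustration only; the real exporter
// works with pdata metrics and SignalFx protobuf datapoints.
type Quantile struct {
	Q     float64 // e.g. 0.5, 0.9, 0.99
	Value float64
}

type Summary struct {
	Name      string
	Count     uint64
	Sum       float64
	Quantiles []Quantile
}

type Datapoint struct {
	Metric     string
	Type       string // "cumulative counter" or "gauge"
	Dimensions map[string]string
	Value      float64
}

// convertSummary flattens a summary into SignalFx-style datapoints:
// count and sum become cumulative counters, and each quantile becomes
// a gauge carrying a "quantile" dimension.
func convertSummary(s Summary) []Datapoint {
	dps := []Datapoint{
		{Metric: s.Name + "_count", Type: "cumulative counter", Value: float64(s.Count)},
		{Metric: s.Name, Type: "cumulative counter", Value: s.Sum},
	}
	for _, q := range s.Quantiles {
		dps = append(dps, Datapoint{
			Metric:     s.Name + "_quantile",
			Type:       "gauge",
			Dimensions: map[string]string{"quantile": fmt.Sprintf("%g", q.Q)},
			Value:      q.Value,
		})
	}
	return dps
}

func main() {
	// airflow_task_duration_seconds is a hypothetical metric name.
	s := Summary{
		Name:      "airflow_task_duration_seconds",
		Count:     42,
		Sum:       13.7,
		Quantiles: []Quantile{{0.5, 0.21}, {0.9, 0.56}, {0.99, 1.02}},
	}
	for _, dp := range convertSummary(s) {
		fmt.Printf("%-18s %-40s %v = %g\n", dp.Type, dp.Metric, dp.Dimensions, dp.Value)
	}
}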

Steps to reproduce

  1. Set up otelcol-contrib to receive Prometheus metrics that you know contain summary metrics (see the sample exposition output below) and export them via the signalfxexporter
  2. Notice that all summary metrics are missing.
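For reference, a summary in the Prometheus text exposition format looks like the illustrative sample below (the metric name is hypothetical); it is these quantile, _sum, and _count series that never make it to SignalFx:

# TYPE airflow_task_duration_seconds summary
airflow_task_duration_seconds{quantile="0.5"} 0.21
airflow_task_duration_seconds{quantile="0.9"} 0.56
airflow_task_duration_seconds{quantile="0.99"} 1.02
airflow_task_duration_seconds_sum 13.7
airflow_task_duration_seconds_count 42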

What did you expect to see?

I expected to see the summary metrics appear in SignalFx along with the other metrics.

What did you see instead?

The summary metrics did not appear. Non-summary metrics did appear.

What version did you use?

v0.23.0

What config did you use?

Standard YAML config file, similar to the one below (our Airflow deployment reports StatsD metrics to a Prometheus statsd-exporter container):

receivers:
  prometheus/airflow:
    config:
      scrape_configs:
        - job_name: "prom-airflow"
          scrape_interval: 10s
          kubernetes_sd_configs:
            - role: service
          relabel_configs:
            - source_labels: [__meta_kubernetes_service_port_name]
              regex: prometheus-scrape-endpoint
              action: keep
            - source_labels: [__address__]
              action: replace
              target_label: instance
              regex: (.+)
              replacement: $1:9102
exporters:
  signalfx/airflow:
    access_token: ${SFX_ACCESS_TOKEN_AIRFLOW}
    realm: us0
processors:
  batch:
  memory_limiter:
    ballast_size_mib: ${OTELCOL_MEMORY_LIMITER_BALLAST_SIZE_MIB}
    limit_mib: ${OTELCOL_MEMORY_LIMITER_LIMIT_MIB}
    spike_limit_mib: ${OTELCOL_MEMORY_LIMITER_SPIKE_LIMIT_MIB}
    check_interval: ${OTELCOL_MEMORY_LIMITER_CHECK_INTERVAL_SECONDS}s
  k8s_tagger:
service:
  pipelines:
    metrics/airflow:
      receivers: [prometheus/airflow]
      processors: [memory_limiter, batch, k8s_tagger]
      exporters: [signalfx/airflow]

Environment

We use the official otelcol-contrib container image with some lightweight tooling added in.

Additional context

  1. Thanks to @jrcamp for the Slack discussion here
  2. Originally I thought the issue was with the prometheusreceiver, but I narrowed it down to the signalfxexporter by using the file exporter and confirming that the metrics are actually picked up by the receiver (a minimal debug pipeline is sketched below).
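For anyone else triaging this, a minimal sketch of that debug setup, assuming the contrib file exporter (the path is illustrative):

exporters:
  file:
    path: /tmp/airflow-metrics.json
service:
  pipelines:
    metrics/debug:
      receivers: [prometheus/airflow]
      exporters: [file]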
alanbrent added the bug label on Mar 30, 2021
@alanbrent (Author) commented:

Closing after confirming functionality in the v0.24.0 release, which contains the changes in #2998.

Thanks for the fix, we were able to clear out some tech debt today because of it!

punya pushed a commit to punya/opentelemetry-collector-contrib that referenced this issue on Jul 21, 2021
alexperez52 referenced this issue in open-o11y/opentelemetry-collector-contrib on Aug 18, 2021