
syncTargetAllocator in prometheus metrics receiver doesn't detect regex changes in metrics relabel config #29313

Closed
rashmichandrashekar opened this issue Nov 16, 2023 · 12 comments
Labels: bug (Something isn't working), closed as inactive, receiver/prometheus (Prometheus receiver), Stale

Comments

@rashmichandrashekar
Contributor

rashmichandrashekar commented Nov 16, 2023

Component(s)

receiver/prometheus

What happened?

Description

When the Prometheus scrape config is updated with only a regex change in the metric relabel config, the Prometheus metrics receiver doesn't update its config hash and hence doesn't pick up the metric relabel config with the new regex.

Steps to Reproduce

With the OpenTelemetry Operator's target allocator component enabled, update just the regex field in the scrape job to a different value. The Prometheus receiver doesn't get the updated regex in the metric relabel config.

Expected Result

The new config should contain the updated regex for metric relabeling.

Actual Result

The new regex is not picked up by the Prometheus metrics receiver.

Collector version

v0.85.0

Environment information

Environment

OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")

OpenTelemetry Collector configuration (applied through the OpenTelemetry Operator's collector config)

Initial configuration:

scrape_configs:
  - job_name: kube-proxy
    scrape_interval: 30s
    label_limit: 63
    label_name_length_limit: 511
    label_value_length_limit: 1023
    kubernetes_sd_configs:
    - role: pod
    relabel_configs:
    - action: keep
      source_labels:
      - __meta_kubernetes_namespace
      - __meta_kubernetes_pod_name
      separator: "/"
      regex: kube-system/kube-proxy.+
    - source_labels:
      - __address__
      action: replace
      target_label: __address__
      regex: "regex1"
      replacement: "$$1:10249"

Updated configuration (only the regex changed from "regex1" to "regex2"):

scrape_configs:
  - job_name: kube-proxy
    scrape_interval: 30s
    label_limit: 63
    label_name_length_limit: 511
    label_value_length_limit: 1023
    kubernetes_sd_configs:
    - role: pod
    relabel_configs:
    - action: keep
      source_labels:
      - __meta_kubernetes_namespace
      - __meta_kubernetes_pod_name
      separator: "/"
      regex: kube-system/kube-proxy.+
    - source_labels:
      - __address__
      action: replace
      target_label: __address__
      regex: "regex2"
      replacement: "$$1:10249"


Log output

No response

Additional context

This is related to a similar issue that was fixed in the opentelemetry-operator's target allocator. The same fix needs to be made here.

@rashmichandrashekar rashmichandrashekar added bug Something isn't working needs triage New item requiring triage labels Nov 16, 2023
@github-actions github-actions bot added the receiver/prometheus Prometheus receiver label Nov 16, 2023

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@dashpole
Contributor

Can you reproduce this with the collector configuration? It is hard to tell if this is an issue with the collector or operator

@rashmichandrashekar
Contributor Author

Can you reproduce this with the collector configuration? It is hard to tell if this is an issue with the collector or operator

Thanks @dashpole. This only relates to the operator, since the syncTargetAllocator method is only relevant to the operator's TargetAllocator component.

@swiatekm
Contributor

@dashpole the problem here is that we calculate the hash for scrape configs from the target allocator (which we use to check if we need to reload the scrape manager) using hashstructure, which ignores private struct fields. However, the regular expression fields in a scrape config struct are private, so they're ignored for the purpose of calculating this hash. As a result, if only the regular expression changes in the scrape config, we don't apply the change.

We fixed a similar issue in the target allocator by simply hashing the serialized scrape configs instead.
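For illustration, here is a minimal, self-contained Go sketch of the failure mode and workaround described above. The relabelConfig struct is a simplified stand-in for Prometheus' relabel.Config (not the real type): it keeps the regex in an unexported field, so a reflection-based hash via hashstructure cannot see it, while hashing the YAML-serialized form does.

```go
package main

import (
	"fmt"
	"hash/fnv"

	"github.com/mitchellh/hashstructure/v2"
	"gopkg.in/yaml.v3"
)

// relabelConfig is a simplified stand-in for a Prometheus relabel config:
// the regex is held in an unexported field, mimicking relabel.Regexp's
// internal state, which hashstructure cannot reach via reflection.
type relabelConfig struct {
	SourceLabels []string
	TargetLabel  string
	regex        string // unexported: ignored by hashstructure
}

// MarshalYAML exposes the regex in the serialized form, so hashing the
// YAML output detects regex-only changes.
func (c relabelConfig) MarshalYAML() (interface{}, error) {
	return struct {
		SourceLabels []string `yaml:"source_labels"`
		TargetLabel  string   `yaml:"target_label"`
		Regex        string   `yaml:"regex"`
	}{c.SourceLabels, c.TargetLabel, c.regex}, nil
}

func hashBytes(b []byte) uint64 {
	h := fnv.New64a()
	_, _ = h.Write(b)
	return h.Sum64()
}

func main() {
	before := relabelConfig{SourceLabels: []string{"__address__"}, TargetLabel: "__address__", regex: "regex1"}
	after := relabelConfig{SourceLabels: []string{"__address__"}, TargetLabel: "__address__", regex: "regex2"}

	// hashstructure skips unexported struct fields, so both configs hash the
	// same and a regex-only change goes unnoticed.
	h1, _ := hashstructure.Hash(before, hashstructure.FormatV2, nil)
	h2, _ := hashstructure.Hash(after, hashstructure.FormatV2, nil)
	fmt.Println("hashstructure hashes equal:", h1 == h2) // true

	// Hashing the serialized config sees the regex because MarshalYAML includes it.
	y1, _ := yaml.Marshal(before)
	y2, _ := yaml.Marshal(after)
	fmt.Println("serialized hashes equal:", hashBytes(y1) == hashBytes(y2)) // false
}
```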

@dashpole
Contributor

Got it. Thanks @swiatekm-sumo for the clarification. Can you link to the similar issue?

@swiatekm
Contributor


This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.


github-actions bot commented Apr 1, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions github-actions bot added the Stale label Apr 1, 2024
@rashmichandrashekar
Contributor Author

/label receiver/prometheus help wanted


github-actions bot commented Apr 1, 2024

Pinging code owners for receiver/prometheus: @Aneurysm9 @dashpole. See Adding Labels via Comments if you do not have permissions to add labels yourself.


github-actions bot commented Jun 3, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions github-actions bot added the Stale label Jun 3, 2024
jpkrohling pushed a commit that referenced this issue Jul 11, 2024
…n relabel config (#32127)

**Description:**
Fixing a bug: with the TargetAllocator component in the Prometheus
receiver, when the Prometheus scrape config is updated with only a regex
change in the metric relabel config, the Prometheus metrics receiver
doesn't update the hash and hence doesn't pick up the metric relabel
config with the new regex.
This is because the **regex** struct has unexported fields. Similar to the
fix made in
[opentelemetry-operator](open-telemetry/opentelemetry-operator#2171),
this change fixes the issue by taking the unexported fields into account as well.


**Link to tracking Issue:** - #29313 

**Testing:** Tested to make sure that regex-only changes in metric relabeling get picked up with the TargetAllocator enabled.

Reopening this PR since the old one (#30258) got closed due to inactivity.

---------

Co-authored-by: David Ashpole <[email protected]>
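For reference, a rough sketch of the approach the PR describes: using the scrape configs' serialized form as the hash input so that state hidden in unexported fields (the compiled regex) still affects the hash. This is illustrative only, not the code that was merged; hashScrapeConfigs is a hypothetical helper, and it assumes that marshaling a prometheus config.ScrapeConfig to YAML emits the regex string via relabel.Regexp's MarshalYAML.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"strings"

	promconfig "github.com/prometheus/prometheus/config"
	"gopkg.in/yaml.v2"
)

// hashScrapeConfigs (hypothetical helper) hashes the YAML serialization of the
// scrape configs, so values hidden behind unexported fields, such as the
// compiled regex, still influence the hash and trigger a scrape-manager reload.
func hashScrapeConfigs(jobToScrapeConfig map[string]*promconfig.ScrapeConfig) (uint64, error) {
	raw, err := yaml.Marshal(jobToScrapeConfig)
	if err != nil {
		return 0, err
	}
	h := fnv.New64a()
	_, _ = h.Write(raw)
	return h.Sum64(), nil
}

func main() {
	const job = `
job_name: kube-proxy
relabel_configs:
  - source_labels: [__address__]
    action: replace
    target_label: __address__
    regex: "regex1"
    replacement: "$1:10249"
`
	var before, after promconfig.ScrapeConfig
	if err := yaml.Unmarshal([]byte(job), &before); err != nil {
		panic(err)
	}
	// The same job with only the regex changed, mirroring the reproduction above.
	if err := yaml.Unmarshal([]byte(strings.Replace(job, "regex1", "regex2", 1)), &after); err != nil {
		panic(err)
	}

	h1, _ := hashScrapeConfigs(map[string]*promconfig.ScrapeConfig{"kube-proxy": &before})
	h2, _ := hashScrapeConfigs(map[string]*promconfig.ScrapeConfig{"kube-proxy": &after})
	fmt.Println("hash changed after regex-only edit:", h1 != h2) // expected: true
}
```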

github-actions bot commented Aug 2, 2024

This issue has been closed as inactive because it has been stale for 120 days with no activity.

@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Aug 2, 2024