Cardinality or memory limit for prometheus exporters #33540
Labels
enhancement
New feature or request
exporter/prometheus
exporter/prometheusremotewrite
needs triage
New item requiring triage
Comments
Pinging code owners. See Adding Labels via Comments if you do not have permissions to add labels yourself.
This was referenced Jun 19, 2024
Hi, isn't the memory released via the …
Yes, it is released, but the number of metrics can spike to a very high level for a short time, even before any metric expires.
Component(s)
exporter/prometheus, exporter/prometheusremotewrite
Is your feature request related to a problem? Please describe.
When the collector receives metrics, they occupy a portion of memory, and when the workload stops sending metrics, that memory is not released.
Memory growth may cause memory limits to be exceeded or trigger excessively frequent garbage collection (GC), hurting efficiency. In addition, an excess of stale metrics puts storage and memory pressure on Prometheus itself.
Describe the solution you'd like
Providing a way to automatically expire stale metrics at the collector level would relieve the pressure on both the collector and Prometheus at the same time.
Describe alternatives you've considered
Setting a cardinality limit could be one approach: if the limit is exceeded, the process should either exit or clean up its metrics. Developers can then monitor process restarts to detect potential issues in real time.
Additional context
could be related to: #32511 #33324