
[Feature Request] Cache mechanism #351

Open
jneo8 opened this issue Mar 15, 2024 · 3 comments · May be fixed by #355
Comments

@jneo8

jneo8 commented Mar 15, 2024

Related to: #214

We propose introducing an optional caching feature to the exporter to address slow queries, which can be activated as follows:

# Cache metrics by name
openstack-exporter --cached-metrics=AAA,BBB

# Cache metrics by service 
openstack-exporter --cached-services=AAA,BBB

# Global cache
openstack-exporter --cached --cached-ttl 60s

This feature would entail a background process that collects metrics every 5 minutes (customizable) and stores them in a file or memory. Cached values would be returned upon Prometheus scraping, provided the cache is ready.

We're eager to assist with its implementation but seek the maintainer's approval on whether this would be a valuable addition.

Note: Our main goal is to have a cache mechanism; it does not necessarily have to be implemented exactly as described above.
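
For illustration only, a minimal sketch in Go of what the global cache could look like, assuming the exporter's collectors are registered on a prometheus.Registry; the cachedGatherer type, the refresh interval, and the wiring in main are hypothetical and not existing exporter code:

// cachedGatherer gathers from the wrapped Gatherer in the background and
// serves the last successful result to Prometheus scrapes.
package main

import (
	"log"
	"net/http"
	"sync"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
	dto "github.com/prometheus/client_model/go"
)

type cachedGatherer struct {
	inner prometheus.Gatherer

	mu   sync.RWMutex
	last []*dto.MetricFamily
}

// run refreshes the cache on a fixed interval (the proposed --cached-ttl).
func (c *cachedGatherer) run(every time.Duration) {
	ticker := time.NewTicker(every)
	defer ticker.Stop()
	for ; ; <-ticker.C {
		mfs, err := c.inner.Gather()
		if err != nil {
			log.Printf("cache refresh failed: %v", err)
			continue
		}
		c.mu.Lock()
		c.last = mfs
		c.mu.Unlock()
	}
}

// Gather returns the cached metric families; scrapes never hit the OpenStack APIs.
func (c *cachedGatherer) Gather() ([]*dto.MetricFamily, error) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.last, nil
}

func main() {
	reg := prometheus.NewRegistry()
	// reg.MustRegister(...) // the existing OpenStack collectors would be registered here

	cache := &cachedGatherer{inner: reg}
	go cache.run(60 * time.Second)

	http.Handle("/metrics", promhttp.HandlerFor(cache, promhttp.HandlerOpts{}))
	log.Fatal(http.ListenAndServe(":9180", nil))
}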

@fprzewozny

That would be cool!

@frittentheke
Contributor

I like the idea in general. But if the exporter is to collect all the data independently of the Prometheus scraping, I suggest also introducing some kind of worker pattern to split up the collection of all those metrics.
I know not all OpenStack APIs support parameters to collect a subset of metrics, but without this the exporter cannot scale indefinitely.
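
To make the suggestion concrete, a rough sketch of such a worker pattern in Go, bounding how many per-service collections run in parallel; collectService and the service list are placeholders rather than the exporter's real collectors:

package main

import (
	"fmt"
	"sync"
	"time"
)

// collectService stands in for one per-service collection run against the
// OpenStack APIs; the real exporter would invoke its existing collectors here.
func collectService(name string) error {
	time.Sleep(100 * time.Millisecond) // simulate an API round trip
	fmt.Println("collected", name)
	return nil
}

func main() {
	services := []string{"nova", "neutron", "cinder", "glance"}

	const maxWorkers = 2 // bound concurrency so the APIs are not overloaded
	sem := make(chan struct{}, maxWorkers)
	var wg sync.WaitGroup

	for _, svc := range services {
		wg.Add(1)
		sem <- struct{}{} // acquire a worker slot
		go func(svc string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			if err := collectService(svc); err != nil {
				fmt.Println("error collecting", svc, err)
			}
		}(svc)
	}
	wg.Wait()
}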

@jneo8
Author

jneo8 commented Mar 20, 2024

Hi @frittentheke, thanks for your feedback. I want to make sure we are on the same page here.

But if the exporter is to collect all the data independently of the Prometheus scraping, I suggest also introducing some kind of worker pattern to split up the collection of all those metrics.

When you mention splitting up the collection of all those metrics, what granularity do you have in mind? Do we separate by service and run the queries in parallel? Or do you want a finer granularity (collecting a subset of metrics through the OpenStack API, which would mean rewriting the collectors)?

I think we can limit the scope here: keep using the same collectors and just add a cache on top. Collector optimization can be a separate issue. What do you think?

(We are also thinking about the scaling problem, so it would be nice if we can make this scalable.)

jneo8 linked a pull request Mar 29, 2024 that will close this issue.