Unusually High Core Usage with Upstream Docker Image for helm operator. #6679
Comments
@r4rajat Could you specify what version (and SHA) you are referring to for the respective upstream and downstream latest tags? Also, it would be super helpful if you could run a profiler and let us know what is actually consuming the CPU. Due to a shortage of active maintainers, it may be really difficult for us to pick this up. Any help in solving this would be greatly appreciated!
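For anyone picking this up, a minimal profiling sketch along the lines the comment suggests (the namespace, deployment name, and pprof port below are placeholders, and the helm-operator manager does not necessarily expose a pprof endpoint by default):

```sh
# Confirm the CPU numbers first (requires cluster metrics, e.g. metrics-server):
kubectl top pod -n <operator-namespace>

# If the manager binary exposes a pprof endpoint (port 8082 here is an
# assumption, not a default), forward it and capture a 30-second CPU profile:
kubectl port-forward -n <operator-namespace> deploy/<name>-controller-manager 8082:8082
go tool pprof "http://localhost:8082/debug/pprof/profile?seconds=30"
```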
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Bug Report
What did you do?
I am creating a Helm-based operator for Red Hat OpenShift. Earlier I was using the downstream image `registry.redhat.io/openshift4/ose-helm-operator:latest` as my base image, and my `operator-controller-manager` deployment was using around 0.06-0.08 cores. Then I updated my base image to the upstream image `quay.io/operator-framework/helm-operator:latest`, and the core usage for the same `operator-controller-manager` deployment increased sharply, to around 0.8-0.9 cores.
What did you expect to see?
Usual core usage, around 0.05-0.06 cores
What did you see instead? Under which circumstances?
Very high core usage, around 1 core
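For context, the reproduction described above amounts to swapping the FROM line of the scaffolded Dockerfile and redeploying. A sketch, assuming the standard operator-sdk Makefile targets; `<registry>/myoperator:test` is a placeholder image name:

```sh
# Swap only the base image in the scaffolded Dockerfile:
sed -i 's|^FROM registry.redhat.io/openshift4/ose-helm-operator:latest|FROM quay.io/operator-framework/helm-operator:latest|' Dockerfile

# Rebuild and redeploy with the standard operator-sdk scaffolding targets:
make docker-build docker-push IMG=<registry>/myoperator:test
make deploy IMG=<registry>/myoperator:test
```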
Environment
Operator type:
/language helm
Kubernetes cluster type:
OpenShift v4.13.4
$ operator-sdk version
$ go version (if language is Go)
$ kubectl version
Possible Solution
Additional context