[Flaking test] unit test TestStoreListResourceVersion #125406
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
staging/src/k8s.io/apiserver/pkg/registry/generic/registry/store_test.go, line 255 (at d145bf0)
/sig api-machinery
/sig etcd
/assign @serathius
I'm not sure if this is the proper way to fix it, but setting a limit in the ListOptions deflakes the test. The flakiness comes from the cache sometimes not being initialized in time, which causes it to return a "too many requests" error. Perhaps in tests we could wait until the cache is ready before proceeding?
Right, setting a limit when a resource version is set would delegate the list to etcd when the cache is not initialized, per the KEP description. That is because this type of request is not that costly for etcd.
Looks like that is exactly how the wait-for-cache-ready step was added in https://github.com/kubernetes/kubernetes/pull/124642/files#diff-d82df65cc0625a3f621333245e76fbc801b579eedd3d266fe36eeb99366b5680. See below: staging/src/k8s.io/apiserver/pkg/storage/cacher/cacher_test.go, lines 472 to 479 (at 9d8edca)
So I think we should implement this same thing in
Agree, waiting for the cacher to be initialized when creating it here is a better fix:
I thought about this as well, but waiting on … I'm thinking to either add an option … Do you have a better idea? Thanks.
Exposing a Wait(context) method sounds fine.
I have pushed the changes we discussed. Thank you all for your guidance.
Which jobs are flaking?
pull-kubernetes-unit
Which tests are flaking?
TestStoreListResourceVersion in k8s.io/apiserver/pkg/registry/generic/registry
Since when has it been flaking?
seems like July 7th 2024:
[image: flake history screenshot]
Testgrid link
https://testgrid.k8s.io/presubmits-kubernetes-blocking#pull-kubernetes-unit
Reason for failure (if possible)
Don't know, here is the log:
Anything else we need to know?
No response
Relevant SIG(s)
/sig