ResourceQuota error isn't reflected in kservice status #4416
Thanks for the bug report. This looks like something that we should bubble up into the Service status. Added API label and moved into Serving 0.8.
/reopen due to this comment https://knative.slack.com/archives/CA4DNJ9A4/p1595251772209500
@dprotaso: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/unassign
/good-first-issue It seems like this should be fairly easy to write a test for:
/triage accepted
@evankanderson: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed. In response to this:
One thing to note is that the failure won't show until the pod's progress deadline is exceeded. The default value is 10 minutes, so it'll take some time for it to fail (though the progress deadline can be configured lower, either globally or on a per-revision basis). That said, the error message still doesn't reference the resource quota issue. Instead, it will be something like
@psschwei any advice how to fix this? We’d like to contribute the fix if possible. |
It seems the deployment status is only propagated if the revision is active?
To get that info into the error message, we'd need to somehow propagate the deployment info into the initial scale error message (which is created here). Off the top of my head, not sure what the best way to do that would be...
Would it be a good first step to mirror that Deployment status in the associated Revision? We could then still think about how to propagate it back to a Service (which, by the way, can be associated with multiple Revisions via a traffic split, so I don't necessarily think that deployment-related errors should bubble up to the Service, except when we collect them in a list).
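The mirroring idea above could be sketched roughly like this. This is a minimal sketch with simplified stand-in types, not the real Knative reconciler code; `failureCondition` is a hypothetical helper, and the real implementation would work with `appsv1.DeploymentCondition` and the Revision's status helpers instead:

```go
package main

import "fmt"

// Condition is a simplified stand-in for a Kubernetes status condition.
type Condition struct {
	Type    string
	Status  string
	Reason  string
	Message string
}

// failureCondition returns the first condition signalling a pod-creation
// failure, such as the ReplicaFailure condition that a ResourceQuota
// violation produces on the Deployment.
func failureCondition(deploymentConds []Condition) (Condition, bool) {
	for _, c := range deploymentConds {
		if c.Type == "ReplicaFailure" && c.Status == "True" {
			return c, true
		}
	}
	return Condition{}, false
}

func main() {
	conds := []Condition{
		{Type: "Available", Status: "False"},
		{Type: "ReplicaFailure", Status: "True", Reason: "FailedCreate",
			Message: "pods ... forbidden: exceeded quota: rq-e2e-test"},
	}
	if c, ok := failureCondition(conds); ok {
		// A real reconciler would mark the Revision's ResourcesAvailable
		// condition False with this reason and message.
		fmt.Printf("%s: %s\n", c.Reason, c.Message)
	}
}
```

The point of the sketch is only that the Deployment already carries the quota error as a condition, so the Revision reconciler has everything it needs to copy it over.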
I went back and looked at this on v1.6, and it looks like the quota errors are showing on both the revision and the service.

Revision:

```shell
$ k get revision -n rq-test hello-00001 -o json | jq .status.conditions
```

```json
[
  {
    "lastTransitionTime": "2022-08-04T13:46:07Z",
    "message": "The target is not receiving traffic.",
    "reason": "NoTraffic",
    "severity": "Info",
    "status": "False",
    "type": "Active"
  },
  {
    "lastTransitionTime": "2022-08-04T13:45:27Z",
    "status": "Unknown",
    "type": "ContainerHealthy"
  },
  {
    "lastTransitionTime": "2022-08-04T13:45:27Z",
    "message": "pods \"hello-00001-deployment-54bf4b6774-g8l79\" is forbidden: exceeded quota: rq-e2e-test, requested: cpu=525m, used: cpu=0, limited: cpu=50m",
    "reason": "FailedCreate",
    "status": "False",
    "type": "Ready"
  },
  {
    "lastTransitionTime": "2022-08-04T13:45:27Z",
    "message": "pods \"hello-00001-deployment-54bf4b6774-g8l79\" is forbidden: exceeded quota: rq-e2e-test, requested: cpu=525m, used: cpu=0, limited: cpu=50m",
    "reason": "FailedCreate",
    "status": "False",
    "type": "ResourcesAvailable"
  }
]
```

Service:

```shell
$ k get ksvc -n rq-test hello -o json | jq .status.conditions
```

```json
[
  {
    "lastTransitionTime": "2022-08-04T13:45:27Z",
    "message": "Revision \"hello-00001\" failed with message: pods \"hello-00001-deployment-54bf4b6774-g8l79\" is forbidden: exceeded quota: rq-e2e-test, requested: cpu=525m, used: cpu=0, limited: cpu=50m.",
    "reason": "RevisionFailed",
    "status": "False",
    "type": "ConfigurationsReady"
  },
  {
    "lastTransitionTime": "2022-08-04T13:45:27Z",
    "message": "Configuration \"hello\" does not have any ready Revision.",
    "reason": "RevisionMissing",
    "status": "False",
    "type": "Ready"
  },
  {
    "lastTransitionTime": "2022-08-04T13:45:27Z",
    "message": "Configuration \"hello\" does not have any ready Revision.",
    "reason": "RevisionMissing",
    "status": "False",
    "type": "RoutesReady"
  }
]
```

Off the top of my head, not sure what exactly changed between 1.4 and 1.6 to get these in there, but in there they are 😄
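For a programmatic check along the same lines as the `jq` queries above, one could decode the conditions array and pick out the top-level Ready condition. A self-contained sketch; `readyCondition` is a hypothetical helper, and a real client would typically use client-go informers rather than parsing `kubectl` output:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Condition is a minimal mirror of the status.conditions entries
// printed by `kubectl get ksvc ... -o json | jq .status.conditions`.
type Condition struct {
	Type    string `json:"type"`
	Status  string `json:"status"`
	Reason  string `json:"reason"`
	Message string `json:"message"`
}

// readyCondition extracts the top-level Ready condition from the
// serialized conditions array.
func readyCondition(raw []byte) (*Condition, error) {
	var conds []Condition
	if err := json.Unmarshal(raw, &conds); err != nil {
		return nil, err
	}
	for i := range conds {
		if conds[i].Type == "Ready" {
			return &conds[i], nil
		}
	}
	return nil, fmt.Errorf("no Ready condition found")
}

func main() {
	raw := []byte(`[
	  {"type":"ConfigurationsReady","status":"False","reason":"RevisionFailed"},
	  {"type":"Ready","status":"False","reason":"RevisionMissing",
	   "message":"Configuration \"hello\" does not have any ready Revision."}
	]`)
	c, _ := readyCondition(raw)
	fmt.Printf("Ready=%s (%s)\n", c.Status, c.Reason)
	// Prints: Ready=False (RevisionMissing)
}
```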
Ah, great. So I guess we could close this issue then? Would be great to find out when the fix went in though ;-)
Checking on what the exact fix was... not showing in v1.5, so it was something in the last release
@dprotaso Is this issue still valid? Is there anyone working on it?
/unassign @dprotaso I'm currently not working on this - it is up for grabs
/assign |
/assign @xiangpingjiang Are you going to take over this issue?
@houshengbo yes, I want to have a try
/assign |
/assign |
Probably fixed here: #14453. It bubbles up the quota limit errors, too. I'm adding this as a fixed issue to the PR.
@gabo1208 were you able to follow up on whether your changes fix this issue?
Let me test this exact case between today and Friday, but it should be fixed; I'll update the issue with the results @dprotaso
with this service:
yields this:
In what area(s)?
/area API
What version of Knative?
HEAD
Expected Behavior

The default namespace contains a LimitRange that limits defaultRequest CPU to 100m. Created a ResourceQuota in the same namespace with the CPU quota set to 50m. Tried to serve requests to an app deployed in the same namespace. Expected to see an error message when running `kubectl get kservice` or `kubectl get pods` saying that there was a failure since the ResourceQuota was exceeded.

Actual Behavior

Cannot hit the service (loading is stuck). `kubectl get kservice` shows the app as Ready, with no mention of the quota error in the status. No mention of pod creation failure. Only digging further down and looking at the YAML of the deployment shows the error.

Status of kservice:

Status of deployment:
Steps to Reproduce the Problem
I believe the issue might be related to how the deployment is being reconciled. It looks like there is an "Error getting pods" message that gets logged, but the status of the revision/kservice does not get updated. Also, the logic checks that `deployment.Status.AvailableReplicas == 0`, which might not match all cases where pod creation has failed (for example, if 2 replicas have already been created and the 3rd replica exceeds the ResourceQuota limit). Would it be possible to use the `UnavailableReplicas` value in the deployment instead?

Code for reference: https://github.com/knative/serving/blob/master/pkg/reconciler/revision/reconcile_resources.go#L36:22
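The difference between the two checks can be illustrated with simplified stand-ins for the Deployment status fields (the helper names here are hypothetical; the real fields live on `appsv1.DeploymentStatus` in `k8s.io/api/apps/v1`):

```go
package main

import "fmt"

// DeploymentStatus is a simplified stand-in for appsv1.DeploymentStatus.
type DeploymentStatus struct {
	Replicas            int32
	AvailableReplicas   int32
	UnavailableReplicas int32
}

// failedByAvailable is the check discussed above: AvailableReplicas == 0
// only catches the case where no pod at all came up.
func failedByAvailable(s DeploymentStatus) bool {
	return s.AvailableReplicas == 0
}

// failedByUnavailable also catches partial failures, e.g. two replicas
// running and a third rejected by a ResourceQuota.
func failedByUnavailable(s DeploymentStatus) bool {
	return s.UnavailableReplicas > 0
}

func main() {
	// Two replicas created, the third blocked by the quota.
	partial := DeploymentStatus{Replicas: 3, AvailableReplicas: 2, UnavailableReplicas: 1}
	fmt.Println(failedByAvailable(partial))   // false: the failure is missed
	fmt.Println(failedByUnavailable(partial)) // true: the failure is caught
}
```

This is only a sketch of the reporter's suggestion, not the reconciler's actual logic; whether `UnavailableReplicas` is the right signal in all scale-from-zero cases would need checking against the code linked above.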