
feat: Retry pending nodes #2385

Merged (21 commits, Mar 13, 2020)

Conversation

@jamhed (Contributor) commented Mar 7, 2020

Checklist:

  • [a] Either (a) I've created an enhancement proposal and discussed it with the community, (b) this is a bug fix, or (c) this is a chore.
  • The title of the PR is (a) conventional, (b) states what changed, and (c) suffixes the related issue number. E.g. "fix(controller): Updates such and such. Fixes #1234".
  • I have written unit and/or e2e tests for my change. PRs without these are unlikely to be merged.
  • Optional. My organization is added to the README.
  • I've signed the CLA and required builds are green.

@alexec self-assigned this Mar 8, 2020
@alexec linked an issue Mar 8, 2020 that may be closed by this pull request
@alexec (Contributor) left a comment

This looks really good and I think people will really appreciate the ability to resume under these circumstances.

Can I make one ask? The controller code is complex and critical; any change to it can introduce bugs. Could you take a look at writing some unit tests or an e2e test, please?

3 review comments on workflow/controller/operator.go (outdated, resolved)
codecov bot commented Mar 8, 2020

Codecov Report

❗ No coverage uploaded for pull request base (master@7094433).
The diff coverage is 5.76%.

@@            Coverage Diff            @@
##             master    #2385   +/-   ##
=========================================
  Coverage          ?   13.11%           
=========================================
  Files             ?       71           
  Lines             ?    25302           
  Branches          ?        0           
=========================================
  Hits              ?     3319           
  Misses            ?    21545           
  Partials          ?      438
Impacted Files                                            Coverage Δ
...kg/apis/workflow/v1alpha1/zz_generated.deepcopy.go     0% <0%> (ø)
pkg/apis/workflow/v1alpha1/workflow_types.go              7.58% <0%> (ø)
workflow/controller/workflowpod.go                        72.03% <0%> (ø)
pkg/apis/workflow/v1alpha1/generated.pb.go                0.45% <0%> (ø)
workflow/controller/operator.go                           60.65% <20%> (ø)

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 7094433...7cc7600.

@jamhed requested review from simster7 and alexec, March 9, 2020 12:13
@@ -20,6 +21,11 @@ type FunctionalSuite struct {
	fixtures.E2ESuite
}

func (s *FunctionalSuite) TearDownSuite() {
	s.E2ESuite.DeleteResources(fixtures.Label)
Contributor comment on this diff:
huh, should not need this, how odd

@alexec (Contributor) left a comment

This is brilliant. I'd like @simster7 to have the opportunity to take a look before merging.

@simster7 (Member) left a comment

LGTM. Just one question:

Review comment on test/e2e/manifests/mysql.yaml (resolved)
@simster7 (Member) commented

Please hold off on merging this for a bit; I want to take another look.

@jamhed (Contributor, Author) commented Mar 11, 2020

@simster7 how is it going?

@simster7 (Member) left a comment

My main concern is that the location where we attempt to recreate the pod:

https://github.com/jamhed/argo/blob/b70952cf79bddf36b8b615e833a217d2ebf191bd/workflow/controller/operator.go#L1341-L1350

doesn't record the recreation as a "retry". This could be a problem if you want to limit the number of times the pod creation is retried (as it stands, the only way to limit this would be by setting a backoff.maxDuration flag, but we might also want a retryStrategy.limit).

I would suggest refactoring the code so that an attempt to recreate the pod is counted as a full retry. This would mean taking the code out of this larger if block and letting these lines in the else branch run:

https://github.com/jamhed/argo/blob/b70952cf79bddf36b8b615e833a217d2ebf191bd/workflow/controller/operator.go#L1357-L1359

I think a natural place to retry creating the pod is in the execute{Container, Resource, Script} function itself. You seem to attempt to do that here:

https://github.com/jamhed/argo/blob/b70952cf79bddf36b8b615e833a217d2ebf191bd/workflow/controller/operator.go#L1661-L1664

Which I think might be a better approach. What do you think?
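For reference, the two limits mentioned above both live under a template's retryStrategy. A minimal sketch, with illustrative values (not taken from this PR):

```yaml
# Illustrative only: a retry-count cap (limit) and a time cap (backoff.maxDuration),
# the two knobs discussed above.
retryStrategy:
  limit: 3                # cap on the number of retries
  backoff:
    duration: "10s"       # delay before the first retry
    factor: 2             # multiply the delay on each subsequent retry
    maxDuration: "5m"     # overall time cap
```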

4 review comments on workflow/controller/operator.go (outdated, resolved)
@jamhed (Contributor, Author) commented Mar 11, 2020

I would suggest refactoring the code so that an attempt to recreate the pod is considered as a full retry.

@simster7, these two use cases are different: retrying a failed pod and retrying the submission of a pod. It isn't related to retryStrategy.limit, as the semantics are different.

Suppose we had another parameter under retryStrategy to limit the number of resubmissions. Now imagine you have two pods, each of which requires the full namespace resources. There would be no way to tell when the first one completes, which makes it impossible to limit the number of resubmissions upfront.

Therefore, it's not a refactoring per se, but rather completely different functionality, and that is not what we want.

@simster7 (Member) left a comment

these two use cases are different: retrying a failed pod and retrying the submission of a pod. It isn't related to retryStrategy.limit, as the semantics are different.

I see your point; I have two comments/questions on it:

  1. We use the term "Error" to denote a state where a pod fails because of anything other than its own code (e.g., the pod is deleted from the cluster, the pod image can't be pulled due to network issues, etc.). I would argue that failing to schedule a Pod because of resource quotas could be considered an "Error". Do we want to consider marking pods that failed to schedule due to a resource quota as an "Error"? This would cause them to be retried naturally by the existing retry mechanism (see the sketch after this comment).

  2. Regardless of whether we mark these Pods as an "Error", I don't think we should allow unbounded recreation of Pods in the "Pending" state. Perhaps a solution would be to add retryStrategy.recreationLimit (or some better name) that makes the distinction you're looking for. As of now I can see some problems with this suggestion.

I am leaning towards approach 1; I think it's cleaner and fits the existing mechanisms and user expectations. But I want to hear your opinion on this, as you seem better informed about this use case than me.
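A minimal sketch of what the two options above would mean for the user's spec. The limit field is Argo's existing retryStrategy cap; recreationLimit is the hypothetical name floated in point 2, not an existing field:

```yaml
retryStrategy:
  limit: 5              # existing cap on retries; under approach 1 it would also cover quota "Errors"
  # recreationLimit: 3  # approach 2's hypothetical separate cap (name from the comment above, not a real field)
```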

Review comment on workflow/controller/operator.go (outdated, resolved)
@jamhed (Contributor, Author) commented Mar 12, 2020

@simster7, to me the difference is substantial: in one case the pod exists (= consumes resources), and in the other the pod doesn't exist (= no resources allocated). I remember our cluster once went down because a workflow had unlimited retries in retryStrategy and there was an error in the pod itself, so I definitely do not want to retry resubmission the same way I retry failed pods.

To make it perfect we would need a specific error, like QuotaError, and that requires understanding the error returned by the Kubernetes API in more detail (= not how it is right now). I sort of implemented this by passing the forbidden error to the caller side, instead of wrapping it into an argo error and losing the required details.

Lumping together submission errors (forbidden) and all other errors would make this unusable for our case (we have namespaces with limits by default for all our teams).

Please help me understand why you want to limit the number of resubmissions caused by a forbidden error. The only problem I see is that it could overload the Kubernetes API, but in that case I would prefer to rate-limit resubmissions instead of limiting their number. However, that requires a different strategy (the workflow controller needs to receive periodic updates somehow), and afaik this is not available as of now.

So the suggested approach is not a perfect solution, but rather a practical compromise, as this issue is a real show-stopper for us (and, it seems, not only for us).
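For context, the forbidden submission errors described above come from namespaces constrained by a quota. A sketch of such a quota (illustrative names and values, not taken from this PR's fixtures):

```yaml
# Once this quota is exhausted, the API server rejects new pod creation with a
# "forbidden: exceeded quota" error instead of leaving the pod Pending.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a        # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```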

@jamhed (Contributor, Author) commented Mar 12, 2020

@simster7 @alexec, do you have Slack, Zoom, or similar available? I really think it makes sense to discuss this live.

@jamhed (Contributor, Author) commented Mar 12, 2020

@simster7, wrt script tasks, here is the definition: https://github.com/argoproj/argo/blob/master/examples/retry-script.yaml
It looks custom to me (e.g. no way to set resource limits), unlike a workflow template definition (= a standard Kubernetes pod type with resource limits available).
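For readers following along, the linked example has roughly this shape (a paraphrased sketch, not a verbatim copy of examples/retry-script.yaml): a script template wrapped in a retryStrategy, where the script body itself decides success or failure.

```yaml
- name: retry-script
  retryStrategy:
    limit: 10
  script:
    image: python:alpine3.6
    command: [python]
    source: |
      import random
      import sys
      sys.exit(random.choice([0, 1]))   # fail roughly half the time
```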

@jamhed (Contributor, Author) commented Mar 12, 2020

Do you see this as a problem?

@simster7, I do. I'd suggest having a feature toggle for now (I'm thinking a workflow label/annotation?) and making it the default in future releases (because tbh this is what I would expect from a pod orchestrator). What do you think?

@simster7 (Member) commented

@simster7, I do. I'd suggest having a feature toggle for now (I'm thinking a workflow label/annotation?) and making it the default in future releases (because tbh this is what I would expect from a pod orchestrator). What do you think?

Let's do it then! Perhaps as a flag in retryStrategy? Something like retryStrategy.recreatePendingPods: true?

Once this feature toggle is in place and #2385 (comment) is addressed, we can merge this in.

@jamhed requested a review from simster7, March 12, 2020 23:37
@simster7 (Member) left a comment

Three smaller comments, then we should be good to go. Thanks for all the hard work!

Please see below.

3 review comments on workflow/controller/operator.go (outdated, resolved)
@simster7 (Member) commented
Hey @jamhed. I just made three changes:

  1. I moved the flag out of retryStrategy (renaming it to resubmitPendingPods) and into the template spec (see the sketch after this comment). The logic for this is simple (and based on your comments): resubmitting pods is different from retrying nodes. Suppose you are a user who does not want to retry a failed pod, but wants to resubmit a pod under resource quotas. If the flag to do so is inside retryStrategy, you will inadvertently also be enabling retry logic, which you may not want. If you do not want retry logic, you will also be forced to add a limit: 1 flag to prevent your container from being retried indefinitely (I can see you had to resort to that in your test). This is not obvious to the user and could result in a user accidentally allowing their container to be retried indefinitely, which is dangerous.

  2. I greatly simplified the resubmission logic. Now all of it fits inside executeContainer (thanks in part to moving this out of retryStrategy).

  3. I edited your test to reflect this, and changed the E2E testing fixture to accommodate ResourceQuotas now being part of tests.

Could you please take a look at the PR now and let me know of any comments you may have? If you OK this, we'll merge it in.
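For illustration, the template-level toggle described in point 1 would be set roughly like this. The resubmitPendingPods field name comes from the comment above; the rest of the spec is an illustrative sketch:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: resubmit-pending-
spec:
  entrypoint: main
  templates:
  - name: main
    resubmitPendingPods: true   # recreate the pod if its creation is rejected, e.g. by a ResourceQuota
    container:
      image: alpine:3.7
      command: [sh, -c, "echo hello"]
```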

@simster7 assigned simster7 and unassigned alexec Mar 13, 2020
@jamhed (Contributor, Author) commented Mar 13, 2020

@simster7 it was there for a reason :) I'm quite certain it won't work this way for templates with retryStrategy.limit = 1.

@simster7 (Member) left a comment

@jamhed and I chatted offline and he gave this the green light.


Successfully merging this pull request may close this issue: About failures due to exceeded resource quota

3 participants