🌱 test-server: split New/Start/Ready phases #2303
Conversation
Force-pushed from 56e17a1 to 022e32f
/cc @ncdc @p0lyn0mial next incremental change derived from the shard-standalone-vw branch

Just to clarify - is it correct that when we switch to the standalone VW server, we can derive the availability of the VW server via the shard readyz? I just want to make sure I've understood the rationale for this change, since I'm not very familiar with the shard/VW bootstrapping at the moment.
@hardys: GitHub didn't allow me to request PR reviews from the following users: incremental, derived, next, change, from, the, branch. Note that only kcp-dev members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retitle 🌱 test-server: split New/Start/Ready phases
That doesn't seem right - they would be slightly interdependent at startup (e.g. the apibinding initializer), but I wouldn't expect the shard -> vw flow.
terminatedCh := make(chan error, 1)
s.terminatedCh = terminatedCh
nit: I think we could do s.terminatedCh = make(chan error, 1)
terminatedCh := make(chan error, 1)
s.terminatedCh = terminatedCh
go func() {
	terminatedCh <- cmd.Wait()
then we could use s.terminatedCh here
I don't think this will work, because s.terminatedCh is a receive-only channel, so we need to use the unconstrained version here
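The point can be illustrated with a minimal, self-contained sketch (the struct name is hypothetical, and the goroutine stands in for cmd.Wait()): a field typed as a receive-only `<-chan error` cannot be sent on, so the goroutine must send on the bidirectional local variable, which is then assigned to the field.

```go
package main

import (
	"errors"
	"fmt"
)

// server models the relevant part of the test-server struct; the
// field is receive-only, so callers can only wait on it.
type server struct {
	terminatedCh <-chan error
}

func (s *server) start() {
	// A bidirectional channel is required locally: sending on
	// s.terminatedCh directly would not compile, because its
	// declared type forbids sends.
	terminatedCh := make(chan error, 1)
	s.terminatedCh = terminatedCh // implicit conversion to <-chan error

	go func() {
		// Stands in for cmd.Wait() in the real code.
		terminatedCh <- errors.New("process exited")
	}()
}

func main() {
	s := &server{}
	s.start()
	fmt.Println(<-s.terminatedCh) // prints "process exited"
}
```

Assigning the bidirectional channel to the receive-only field is a common way to keep the send capability private to the starting goroutine while exposing only the wait side.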
The standalone VW server is a distinct binary, so we need to check readyz separately. When the VW server is run in-process it registers a startup hook so checking a shard readyz endpoint is enough. |
default:
}

// intentionally load again every iteration because it can change
kubeconfigPath := filepath.Join(s.runtimeDir, "admin.kubeconfig")
configLoader := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(&clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeconfigPath},
	&clientcmd.ConfigOverrides{CurrentContext: "system:admin"},
okay, I think it works because the rest client gets an absolute path (we don't append clusters/root)
I think it works because this is a valid context for the shard admin.kubeconfig (it's the shard-admin user), but it's not for the front-proxy one?
$ grep system:admin .kcp/admin.kubeconfig
$ grep -B3 system:admin .kcp-0/admin.kubeconfig
- context:
cluster: base
user: shard-admin
name: system:admin
/lgtm
Ok thanks - I guess this decoupling was added by @ncdc as a step towards that, but I don't see any check of the standalone VW server readyz in that branch; we can add that as a follow-up in the PR that enables the standalone VW.
Decouple the steps to start the test server so we can define a new test-server, start it, then call WaitForReady to ensure shard readiness Co-authored-by: Andy Goldstein <[email protected]>
Force-pushed from 022e32f to ef28620
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: sttts The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Follow-up to kcp-dev#2303 - this aligns the proxy handling with the revised shard handling
Align with the shard WaitForReady pattern ref kcp-dev#2303 as a step towards enabling multiple VW servers when there are multiple shards
Summary
Decouple the steps to start the test server so we can define a new test-server, start it, then call WaitForReady to ensure shard readiness
This is another change decoupled from the shard-standalone-vw branch from @ncdc
Related issue(s)
Follow-up to #2297 and another incremental step towards integrating the shard-standalone-vw branch from @ncdc
Fixes #