
🐛 give the front proxy a distinct config for direct (internal) shard communication. #2382

Merged

Conversation

@p0lyn0mial (Contributor) commented Nov 18, 2022

Summary

The front proxy needs a direct and privileged way of performing authentication and authorization (authNZ) with any shard. At the same time, it needs to be able to validate an incoming certificate from a shard.

This PR enables both by introducing a kubeconfig that holds a client certificate and a CA certificate for validating incoming certificates. The new kubeconfig is passed via the --shards-kubeconfig flag.

Previously, an incorrect configuration was used for communication with shards; as a result, the proxy didn't work in a multi-shard environment.

Related issue(s)

required by #342

fixes #2274
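
A kubeconfig of the shape described above might look like the following. This is an illustrative sketch only: the file paths and names are hypothetical, not taken from the PR.

```yaml
# Hypothetical example of the kubeconfig passed via --shards-kubeconfig.
# It carries a client certificate pair for privileged requests to any shard,
# plus a CA bundle for validating certificates presented by shards.
apiVersion: v1
kind: Config
clusters:
- name: base
  cluster:
    # CA used to verify shard serving certificates; the server field can be
    # left empty because the proxy overrides it per shard (see below).
    certificate-authority: /etc/kcp/serving-ca.crt
    server: ""
users:
- name: shard-admin
  user:
    client-certificate: /etc/kcp/shard-client.crt
    client-key: /etc/kcp/shard-client.key
contexts:
- name: base
  context:
    cluster: base
    user: shard-admin
current-context: base
```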

```go
kubeConfig.Clusters = map[string]*clientcmdapi.Cluster{
	"base": {
		CertificateAuthority: filepath.Join(workDirPath, ".kcp/serving-ca.crt"),
		// …
	},
}
```

p0lyn0mial (Contributor, Author):

Normally you would provide a server name here; we overwrite it for convenience.

```go
c.ShardsConfig, err = clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
	&clientcmd.ClientConfigLoadingRules{ExplicitPath: c.Options.ShardsKubeconfig},
	// We override the Server here so that the user doesn't have to specify an unused server value.
	// The Server must have an HTTPS scheme, otherwise the CA won't be loaded (see the IsConfigTransportTLS method).
	&clientcmd.ConfigOverrides{ClusterInfo: clientcmdapi.Cluster{Server: "https://fakeserver.io"}}).ClientConfig()
```

p0lyn0mial (Contributor, Author):

Alternatively, we could provide certificates via separate flags and construct the config manually.

Member:

That would probably be my preference, but I'm OK merging as-is and adjusting later if we decide to go that route, unless @sttts @stevekuznetsov prefer making the change now.

p0lyn0mial (Contributor, Author):

It would require at least two flags: one for the client cert pair and another for the public CA's certificate.

On the other hand, a kubeconfig has the advantage of supporting other means of authNZ, although I'm not sure we would ever use e.g. tokens.

Member:

+1 for merging as is.

```diff
@@ -88,6 +90,9 @@ func (o *Options) Validate() []error {
 	if o.MappingFile == "" {
 		errs = append(errs, fmt.Errorf("--mapping-file is required"))
 	}
+	if len(o.ShardsKubeconfig) == 0 {
+		errs = append(errs, fmt.Errorf("--shards-kubeconfig is required"))
+	}
```

p0lyn0mial (Contributor, Author):

The flag is now required.

```diff
@@ -56,6 +57,7 @@ func (o *Options) AddFlags(fs *pflag.FlagSet) {
 	fs.StringVar(&o.MappingFile, "mapping-file", o.MappingFile, "Config file mapping paths to backends")
 	fs.StringVar(&o.RootDirectory, "root-directory", o.RootDirectory, "Root directory.")
 	fs.StringVar(&o.RootKubeconfig, "root-kubeconfig", o.RootKubeconfig, "The path to the kubeconfig of the root shard.")
+	fs.StringVar(&o.ShardsKubeconfig, "shards-kubeconfig", o.ShardsKubeconfig, "The path to the kubeconfig used for communication with other shards.")
```

Reviewer:

Nit: as far as I can see, these credentials are used to create ClusterWorkspace informers for all shards (including root), so I'd s/other/all here.

It's still a little confusing given we also require --root-kubeconfig, but perhaps that can be deprecated/removed in a future iteration.

p0lyn0mial (Contributor, Author):

Yeah, I'm not sure. In general we need privileged credentials for making wildcard requests against all shards. In addition, we also need access to the root cluster for identities.

Reviewer:

Not blocking on it, but as previously mentioned I'd adjust the description slightly, e.g.:

"The path to the kubeconfig used for communication with all shards."

p0lyn0mial (Contributor, Author):

Sorry, fixed :)

```go
&clientcmd.ClientConfigLoadingRules{ExplicitPath: c.Options.ShardsKubeconfig},
// We override the Server here so that the user doesn't have to specify an unused server value.
// The Server must have an HTTPS scheme, otherwise the CA won't be loaded (see the IsConfigTransportTLS method).
&clientcmd.ConfigOverrides{ClusterInfo: clientcmdapi.Cluster{Server: "https://fakeserver.io"}}).ClientConfig()
```

@hardys commented Nov 18, 2022:

The fakeserver override here looks strange. Is it needed, or can we just use whatever is specified in the shard kubeconfig?

Reviewer:

Oh, I see now that the server is empty in the file. If it can't remain empty here, would it perhaps be better to default it to the root-shard URL?

p0lyn0mial (Contributor, Author):

Yes, please see #2382 (comment) for alternatives.

Reviewer:

But even if we stick with the kubeconfig approach instead of the separate flags (which personally I think is fine, and probably easier if folks already have a cert-based root-kubeconfig that allows access to all shards), we could switch fakeserver.io to the root shard URL, right?

p0lyn0mial (Contributor, Author):

Sure, we could do that. I think we could do that just to read the new config and then erase the host value.

p0lyn0mial (Contributor, Author):

I'd like to hear one more opinion on it. I'm not convinced we should change it to use the root shard URL.

p0lyn0mial (Contributor, Author):

I hope that's okay with you :)

@openshift-merge-robot added the needs-rebase label (PR cannot be merged because it has merge conflicts with HEAD) Nov 22, 2022
@openshift-merge-robot removed the needs-rebase label Nov 29, 2022
```diff
@@ -56,6 +57,7 @@ func (o *Options) AddFlags(fs *pflag.FlagSet) {
 	fs.StringVar(&o.MappingFile, "mapping-file", o.MappingFile, "Config file mapping paths to backends")
 	fs.StringVar(&o.RootDirectory, "root-directory", o.RootDirectory, "Root directory.")
 	fs.StringVar(&o.RootKubeconfig, "root-kubeconfig", o.RootKubeconfig, "The path to the kubeconfig of the root shard.")
+	fs.StringVar(&o.ShardsKubeconfig, "shards-kubeconfig", o.ShardsKubeconfig, "The path to the kubeconfig used for communication with all shards")
```

Member:

Add that the server name is replaced later on with the shard hostname.

@sttts (Member) commented Dec 1, 2022:

/lgtm
/approve

@openshift-ci bot added the lgtm label (PR is ready to be merged) Dec 1, 2022

openshift-ci bot commented Dec 1, 2022:

[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: sttts

@openshift-ci bot added the approved label (PR has been approved by an approver from all required OWNERS files) Dec 1, 2022
@p0lyn0mial (Contributor, Author):

/retest

Labels: approved, lgtm

Successfully merging this pull request may close these issues:

bug: e2e-sharded failed to list *v1alpha1.ClusterWorkspace: Unauthorized

5 participants