
Cluster setup #42

Open · wants to merge 5 commits into base: main
Conversation

@flimzy (Member) commented Sep 21, 2020

What this PR does / why we need it:

This PR adds the clusterSetup variable, which defaults to false. When enabled, a Helm post-install hook is initialized which waits for the _up endpoint to respond with a 200, then finalizes cluster setup by sending the requisite POST to /_cluster_setup.
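For readers unfamiliar with Helm hooks, the mechanism described above can be sketched as a post-install Job. The manifest below is a hypothetical illustration, not the actual template from this PR; the resource names, image, and service hostname are all assumptions:

```yaml
# Hypothetical sketch of a Helm post-install hook Job that waits for /_up
# and then finalizes the cluster. Names and values are illustrative only.
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-cluster-setup
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: cluster-setup
          image: curlimages/curl
          command:
            - sh
            - -c
            - |
              # Poll /_up until it answers with HTTP 200.
              until [ "$(curl -s -o /dev/null -w '%{http_code}' \
                  http://{{ .Release.Name }}-couchdb:5984/_up)" = "200" ]; do
                sleep 5
              done
              # Finalize cluster setup.
              curl -s -X POST \
                http://{{ .Release.Name }}-couchdb:5984/_cluster_setup \
                -H 'Content-Type: application/json' \
                -d '{"action": "finish_cluster"}'
```

The `"action": "finish_cluster"` body follows the documented CouchDB `/_cluster_setup` API; the rest of the manifest is only a sketch of the hook pattern.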

It's quite possible there are better ways to do some of the things I've done here. I'm open to any and all feedback.

Which issue this PR fixes

Fixes #41

Special notes for your reviewer:

Checklist

Place an '[x]' (no spaces) in all applicable fields. Please remove unrelated fields.

  • Chart Version bumped
  • e2e tests pass
  • Variables are documented in the README.md
  • Chart tgz added to /docs and index updated

@willholley (Member) left a comment:

This basically looks good - thanks! There are a few edge cases to handle:

  • _up might be protected by require_valid_user: true.
  • adminPassword might not match what's deployed. This is a hangover from allowing users to set the admin hash explicitly to ensure that all pods use an identical value, but means they need to ensure the admin password is also passed to the chart.

We can verify the behaviour by running the Kind tests and adding a new scenario to https://github.com/apache/couchdb-helm/tree/master/couchdb/ci

(Review thread on couchdb/templates/clustersetup.yaml; outdated, resolved)
@flimzy (Member, Author) commented Sep 21, 2020

E2E tests do pass for me, once I apply this small patch. Should I add this to the PR?

```diff
--- a/test/kind-config.yaml
+++ b/test/kind-config.yaml
@@ -1,5 +1,5 @@
 kind: Cluster
-apiVersion: kind.sigs.k8s.io/v1alpha3
+apiVersion: kind.x-k8s.io/v1alpha4
 nodes:
   - role: control-plane
   - role: worker
```

@flimzy (Member, Author) commented Sep 21, 2020

> adminPassword might not match what's deployed. This is a hangover from allowing users to set the admin hash explicitly to ensure that all pods use an identical value, but means they need to ensure the admin password is also passed to the chart.

Is there anything we can do about this, other than documenting the behavior?

@willholley (Member) commented:

> E2E tests do pass for me, once I apply this small patch. Should I add this to the PR?

Yes please. Bear in mind the tests will use the default values, so won't execute with clusterSetup: true unless it's added as a scenario in https://github.com/apache/couchdb-helm/tree/master/couchdb/ci. Essentially, the E2E tests are executed using each file as the values.yaml input to Helm, so you can either add a new file to test a specific set of values or incorporate the new setting into an existing test.
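Under the workflow described above, a new scenario would presumably just be another values file in that directory. The file name and password below are hypothetical, chosen only to illustrate the shape of such a scenario:

```yaml
# couchdb/ci/clustersetup-values.yaml (hypothetical file name)
# Exercises the new setting added by this PR; adminPassword is a
# placeholder so the hook can authenticate against the deployment.
clusterSetup: true
adminPassword: testpassword
```

Since chart-testing feeds each file in ci/ to Helm as its values input, adding this one file would be enough to run the full E2E suite with cluster setup enabled.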

@flimzy (Member, Author) commented Sep 21, 2020

It seems I spoke prematurely about the E2E tests passing. I now see an error, and I get the same error when running the tests against master (with the same patch I mentioned above). Should I open a separate issue for this?

```console
$ make test
./test/e2e-kind.sh
Running ct container...
2e66ed0293436ba70915cb3e6620a618076384a559099f840967ce2cb0b8f1d9

Deleting cluster "chart-testing" ...
Creating cluster "chart-testing" ...
 ✓ Ensuring node image (kindest/node:v1.18.2) 🖼
 ✓ Preparing nodes 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
 ✓ Waiting ≤ 1m0s for control-plane = Ready ⏳
 • Ready after 0s 💚
Set kubectl context to "kind-chart-testing"
You can now use your cluster with:

kubectl cluster-info --context kind-chart-testing

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
Copying kubeconfig to container...
error: Missing or incomplete configuration info.  Please point to an existing, complete config file:

  1. Via the command-line flag --kubeconfig
  2. Via the KUBECONFIG environment variable
  3. In your home directory as ~/.kube/config

To view or setup config directly use the 'config' command.
Removing ct container...
Deleting cluster "chart-testing" ...
Done!
make: *** [Makefile:31: test] Error 1
```

@willholley (Member) commented:
I wonder if this has bit-rotted against the latest kind version. master passes for me using:

```console
$ kind version
kind v0.8.1 go1.14.2 darwin/amd64
```

This does mean that we now send credentials for the /_up endpoint even when require_valid_user is not set, but this should not hurt anything.
@krjackso commented:
I ran into this same issue, is this fix still being considered?

@wohali wohali changed the base branch from master to main November 5, 2020 17:11
Labels: none yet
Projects: none yet
Development: successfully merging this pull request may close "(Optionally) run _cluster_setup after installation"
3 participants