Add .Values.persistentVolume.selfManaged so PVCs can be managed outside of the helm-chart for consistency and to prevent accidental deletions by helm #120

Open · wants to merge 4 commits into main
Conversation

@uberspot (Author)

What this PR does / why we need it:

Currently the chart manages the PVCs for the statefulset, which means any re-installation of the chart will result in data loss unless snapshots/backups have been taken beforehand.
This PR allows opting out of that behavior and managing the PVCs separately from the chart.
Anyone setting selfManaged: true will have to have taken a backup of the data beforehand OR have modified the owner annotations on the PV/PVC resources to prevent their deletion.
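For illustration, a values file opting into the new behavior might look like this (a minimal sketch, assuming the flag behaves as described above; the PVC itself is created and owned outside the chart):

```yaml
# values.yaml (sketch; selfManaged is the value added by this PR)
persistentVolume:
  enabled: true
  # When true, the chart does not create or own the PVCs, so
  # `helm uninstall` / re-installation cannot delete the data.
  selfManaged: true
```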

Place an '[x]' (no spaces) in all applicable fields. Please remove unrelated fields.

  • Chart Version bumped
  • e2e tests pass
  • Variables are documented in the README.md

Pawel Sarbinowski added 3 commits May 30, 2023 11:07
Add .Values.persistentVolume.selfManaged so PVCs can be managed outside of the helm-chart for consistency and to prevent accidental deletions by helm.
@yekibud (Contributor) commented Jun 21, 2023

How is this different from what persistentVolume.existingClaims provides?

@uberspot (Author)
existingClaims requires you to provide a ton of information for the actual PV objects and then tries to template them out. I tried using that and it was unnecessarily complex and unintuitive.
Most k8s environments would instead just have the user create a PVC themselves; PVs get generated by the CSI driver, and the only thing this chart needs as input is the name of the PVC so it knows what to mount as the volume. It shouldn't need to manage PV or PVC objects if you want to use existing claims.
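As a sketch of that workflow (all names below are placeholders, and the exact values key the chart would take the claim name under is not shown here):

```yaml
# PVC created by the user, outside the chart; the CSI driver
# provisions a matching PV automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: couchdb-data-0            # placeholder claim name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: my-csi-class  # placeholder StorageClass
  resources:
    requests:
      storage: 10Gi
# The chart then only needs "couchdb-data-0" to mount the volume.
```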

@yekibud (Contributor) commented Jun 21, 2023

> Most k8s environments would instead just have the user create a PVC themselves; PVs get generated by the CSI driver, and the only thing this chart needs as input is the name of the PVC so it knows what to mount as the volume. It shouldn't need to manage PV or PVC objects if you want to use existing claims.

You mean from a parent chart? You would still have to template out the PV and PVC and specify an existing volumeID, etc. if you wanted to use an existing claim, e.g. from a backup/snapshot, wouldn't you? Or are you talking about manually creating PVCs with kubectl, or something? The current implementation allows you to do everything by simply providing values, which seems preferable to me, but I can't say if that represents "most k8s environments", so I'll defer to others and the maintainers to weigh in. Your changes are pretty minor and straightforward, so it's probably fine to support both approaches, I suppose.
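For reference, the values shape being described might look roughly like this (hypothetical field names; the real schema has to be read out of the chart's templates, which is the complaint above):

```yaml
persistentVolume:
  existingClaims:
    # Hypothetical entry: templates a PV/PVC pair around a
    # pre-existing volume, e.g. one restored from a snapshot.
    - claimName: couchdb-data-0          # placeholder
      volumeID: vol-0abc123de456f7890    # pre-existing volume, placeholder
      storageClassName: gp2              # placeholder
      size: 10Gi
```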

@uberspot (Author)

Yeah, I'm talking about manually created PVCs that then generate PVs themselves. That's how the Trident CSI driver works, for example: https://github.com/NetApp/trident
All you need in that case is a self-declared PVC object, and this chart then only needs a PVC name so it can use it as a volume. That's the minimum needed input in that case. 👍

@uberspot (Author)

Anyone who could review this? :)

@willholley (Member)

@uberspot the linter picked up a couple of trailing spaces. If you can address the errors and rebase (you'll need to bump the chart version), I'll get this merged.
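The follow-up steps would be roughly (a sketch; it assumes the chart sources live in a couchdb/ directory and that a plain rebase onto main is sufficient):

```sh
# Strip the trailing whitespace the linter flagged, then verify locally
helm lint couchdb/

# Bump `version:` in couchdb/Chart.yaml, then rebase and re-push
git fetch origin
git rebase origin/main
git push --force-with-lease
```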

@clayvan (Contributor) commented Sep 20, 2023

For what it's worth, @yekibud, I do think existingClaims supports most use cases, but I would agree it's unintuitive to use, as you have to dig through the templates to understand how it works. At the very least it would be nice to see what an example existingClaims value would look like.

edit: I'm not even sure either PR works with couchdb in cluster mode. Isn't this block on the statefulset telling it to mount n volumes on EACH couchdb replica? In my case I have a 3-node couchdb cluster, so I have 3 volumes across AZs; wouldn't they all try to be mounted onto a single pod? I'd love to be wrong, though, because I'd like to import PVCs into this chart in some fashion.
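To make the concern concrete: a plain volumes list in the pod template is copied into every replica, while volumeClaimTemplates stamps out one claim per ordinal. A simplified sketch (not the chart's actual template):

```yaml
apiVersion: apps/v1
kind: StatefulSet
spec:
  replicas: 3
  template:
    spec:
      # Copied verbatim into every replica's pod: all 3 pods would
      # reference the same imported PVC(s), which is the concern above.
      volumes:
        - name: imported-data
          persistentVolumeClaim:
            claimName: my-imported-pvc   # placeholder
  # One PVC per ordinal instead: database-storage-<sts>-0, -1, -2.
  volumeClaimTemplates:
    - metadata:
        name: database-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```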
