
Proposal: Manage CRDs #5871

Closed
mattfarina opened this issue Jun 11, 2019 · 47 comments
Labels
proposal, Stale, v3.x (Issues and Pull Requests related to the major version v3)

Comments

@mattfarina
Collaborator

I have to give credit to @prydonius for this idea. I'm just writing it up here. I can do a proposal to the community if this needs to move forward. Not entirely sure what is feasible yet.

The idea:

Have a crd directory alongside the templates directory. Helm would then check for missing CRDs and attempt to install them. If it cannot install them and they are missing it would stop, explain why it can't install them (e.g., permissions), and provide some details (e.g., someone with permissions needs to install the CRDs).
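As a sketch, a chart using this layout might look like the following (directory and file names here are hypothetical):

mychart/
  Chart.yaml
  crd/                    # plain CRD manifests, installed before templates render
    crontabs-crd.yaml
  templates/
    controller.yaml       # controllers and CRs can assume the CRD exists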

This is because charts currently need roughly the same wrapper logic around every CRD: first check whether the CRD is already installed, and if it is not, install it via the crd-install hook.

One unknown is handling changes in the CRD content at the same apiVersion. This happens in practice and has many potential problems. Can we detect a change and throw an error to require human intervention?

Once this is in place the Helm dependency system can be used to handle CRDs that are dependent upon one another. Each can be in their own chart with their own controllers.

@mattfarina added the proposal and v3.x labels on Jun 11, 2019

@bacongobbler
Member

bacongobbler commented Jun 11, 2019

I agree that there is common wrapper logic being used for all CRDs in Helm 2. Taking a closer look at how CRDs are installed and managed, and how we can improve that process, would be appreciated.

I would love to see a practical example! It seems like an idea worth investigating. I'm having a hard time conceptualizing this, so having a practical example to play with and understand might be a good way forward to see whether this is feasible (as well as to provide feedback on the design).

@technosophos
Member

As I see it, we have four major problems to sort out with CRDs:

  • Install order: CRDs have to be installed in a separate operation from any usage of (instances of) that CRD.
  • Compatibility detection: If a CRD exists, and a request to install the CRD comes in, but the api versions are different, what do we do?
  • Update process: CRDs are possibly global, while most other kinds that appear in Helm charts are not. So under what conditions can a chart that contains a CRD upgrade the CRD?
  • Deletion of a CRD: When a CRD is deleted, all of its instances are deleted. And CRDs are global. Under what conditions can Helm delete a CRD (clearly, not by default)?

As far as I can tell, the above makes it easier to solve the first point. But is it in any way helping or hindering with the others? I don't want to take a shot at solving just one prong, only to discover that that solution prevented us from solving the others.

The proposal does not explain what the mechanics of the crd/ directory are. Are CRD YAML files treated as templates, too? Do they get the same context as other templates? Are they always evaluated BEFORE the others? And if so, can anything be injected from the CRD template eval into the context used for regular eval? What happens if a non-CRD is placed in the crd directory? For example, if I have a deployment that is a CRD controller, it is often necessary for that controller to be up and running before instances can be reliably processed. So do I put the controller objects in the CRD folder?

@mattfarina
Collaborator Author

Just spitballing...

Install order: CRDs have to be installed in a separate operation from any usage of (instances of) that CRD.

The CRD directory would make it easy to figure out where the CRDs are so they can be installed first. We would not need to parse all the templates first. This could extend to the whole dependency tree.

Compatibility detection: If a CRD exists, and a request to install the CRD comes in, but the api versions are different, what do we do?

This is the hard one. There is a whole page in the docs on versioning CRDs. While I'm open to suggestions, I would argue we notify the user and let them decide what to do.

Update process: CRDs are possibly global, while most other kinds that appear in Helm charts are not. So under what conditions can a chart that contains a CRD upgrade the CRD?

When there is a change such as a new apiVersion, a change in storage/serving, etc., I would propose this is an operation that notifies the user of the coming change and asks them to accept it. A human in the loop, because this could be a destructive change.

Deletion of a CRD: When a CRD is deleted, all of its instances are deleted. And CRDs are global. Under what conditions can Helm delete a CRD (clearly, it's not by default)

Always manual? Directions about deleting them, with big flashers like we used to have on GeoCities for the under-construction pages.

@technosophos you raised a lot of good questions. Ones worth airing and discussing publicly.

@prydonius
Member

The CRD directory would make it easy to figure out where the CRDs are so they can be installed first. We would not need to parse all the templates first. This could extend to the whole dependency tree.

Placing CRDs in a specific directory in the chart package gives us a more reliable way to handle them separately. For example, we could render all the CRDs in the chart and sub-charts first and then install them before installing the rest (just an example; the idea is we can use it to make the right decisions). I guess there's no reason we can't do this with the current crd-install hook, but I'm not sure what the future of hooks is in Helm 3, and I also feel like annotations aren't the best way of defining these if we have the opportunity to extend the chart format.

Completely agree with @mattfarina's comment. Helm should not try to do anything clever in most of these scenarios; I think the goal here would be to just provide more info to the operator so that they can decide how to proceed.

@paulczar
Contributor

Have a crd directory alongside the templates directory - is this necessary? We know that a CRD has the following top-level items in its declaration:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition

Would it be within Helm's scope to look at the apiVersion/kind of resources and act on them based on the resource type? If we know it's a CRD by the apiVersion/kind, then we can treat it as a CRD, however we decide to treat them.
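A minimal Go sketch of that detection, assuming rendered manifests are decoded into maps (illustrative only, not Helm's actual code):

package crdutil

import "strings"

// isCRD reports whether a decoded manifest declares a CustomResourceDefinition.
// Matching on the apiextensions.k8s.io group rather than a full apiVersion
// keeps it working across v1beta1 and any later versions.
func isCRD(manifest map[string]interface{}) bool {
	kind, _ := manifest["kind"].(string)
	apiVersion, _ := manifest["apiVersion"].(string)
	return kind == "CustomResourceDefinition" &&
		strings.HasPrefix(apiVersion, "apiextensions.k8s.io/")
}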

@mattfarina
Collaborator Author

@paulczar If Helm is to read the apiVersion and kind of each object and know how to act on them, that's a new level of introspection and ordering.

So, let's say you have a CRD that needs to be installed, then a controller to work on those CRDs, and then some CRs: you have a definite ordering problem. You have to know a fair amount about intent. Do we need to tell people to break these into two charts?

Then there is the apiVersion. What we have now is a beta version, and that will change. At the least it will go to GA. How do we handle this changing in the future?

@prydonius
Member

Well FWIW Helm already defines the operation order per kind (https://github.com/helm/helm/blob/bed4054c412f95d140c8c98b6387f40df7f3139e/pkg/tiller/kind_sorter.go), so perhaps extracting the CRDs is not a terrible approach.

Not sure that the apiVersion matters that much; at least for core resources we can pretty much assume the kind name won't change or clash with anything (at least from beta onwards).

@paulczar
Contributor

So putting CRDs at the top of the sort order would be a good start. Maybe Helm can then check if the CRD already exists with that name and apiVersion: if so, skip it; if not, install it. We could potentially do an equivalent of kubectl auth can-i create crds to validate that the user has permission to install CRDs, and have a friendly error.

In the larger picture we could also check each resource kind/apiVersion in the chart against kubectl api-versions, kubectl api-resources, and kubectl get crd to validate that it's doable on the cluster (maybe a set of kubectl auth can-i checks to validate that the user is allowed to?), and error out if it isn't.
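For reference, the kubectl auth can-i check mentioned above exists today and is scriptable:

kubectl auth can-i create customresourcedefinitions
# prints "yes" or "no" and exits non-zero on "no", which Helm could surface
# as a friendly error before attempting the install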

@bacongobbler
Member

bacongobbler commented Jun 14, 2019

So putting crd at the top of the sort order would be a good start.

One of the problems with CRDs today stems from the fact that the CRD and its children (the CRs) are defined within the same chart. During resource validation, the CRs fail API validation because the CRD does not exist yet, so there's no API endpoint that can accept those CRs. #2994 and @mattfarina's earlier comment summarize the issue fairly well.

Throwing CRDs earlier in the sort order will not help in this situation. They need to be installed prior to validating the CRs within the chart itself... either that, or we tell people that CRDs and CRs need to be broken into two charts.

However, that does bring up a good point. What if we validated and installed each resource in sort order by its GroupVersion?

We might be able to change the current logic in pkg/kube from

  1. render all templates as resources
  2. validate all resources together <-- where CRDs/CRs currently fail unless they were installed using the crd-install hook
  3. install resources based on sort order

To (a rough Go sketch follows this list):

  1. render all templates as resources
  2. based on sort order, for each GroupVersion
    a. validate the resources
    b. install all resources with the same GroupVersion
    c. GOTO next GroupVersion
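A rough Go sketch of that loop, with stand-in types rather than Helm's actual pkg/kube API:

package installer

// Resource is a stand-in for a rendered chart resource.
type Resource struct {
	GroupVersion string
	Manifest     []byte
}

// InstallByGroupVersion walks batches already sorted so that CRD
// GroupVersions come first. Each batch is validated and created before the
// next batch is validated, so CRs can validate against CRDs that were
// created by an earlier batch.
func InstallByGroupVersion(batches [][]Resource,
	validate, create func([]Resource) error) error {
	for _, batch := range batches {
		if err := validate(batch); err != nil {
			return err // fail before touching the cluster
		}
		if err := create(batch); err != nil {
			return err
		}
	}
	return nil
}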

It may be more computationally expensive, but it could solve the install order issue. There may be issues with validating resources individually, though... @technosophos, do you know of any limitations this approach may bring?

If that works, that would solve the "install order" issue and remove the need for a crd directory. That doesn't solve compatibility detection, update process or the CRD deletion use cases though, and I do agree that we should discuss those issues as well. I'd hate for us to come up with a solution for one use case that doesn't help solve the other three.

Then there is the apiVersion. What we have now is a beta version and that will change. At the least is to go to GA. How do we handle this changing in the future?

Did we have an apps/v1beta2 to apps/v1 migration story for the workloads API? As far as I understand, handling apiVersion migrations is out of scope for Kubernetes, as the behaviour between different apiVersions may differ. That being said, there have been iterations in the past where very few changes occurred (like v1beta1 to v1beta2). I'm not sure how (or if) we can handle that.

@mattfarina
Collaborator Author

Detecting if a CRD is installed and installing if not present is the easy problem to solve or bike shed on. Can we look at the hard problems instead? The things that don't have easy solutions.

For example, how do we handle updating CRDs? Say a CRD is installed and there's an operator using it in one namespace. Now someone is installing a second operator for the same thing in a different namespace, without knowledge of the first install. The CRD for the second chart has new version information (maybe even a new storage version). Does Helm install the update? What will happen to the first operator? I think this has the potential to cause problems, if I read https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning/#writing-reading-and-updating-versioned-customresourcedefinition-objects correctly.

Or, how do we handle charts that have CRDs, controllers, and CRs based on those CRDs? In this case the install order needs to be the CRD, then the controller responding to the CRD, and then any other assets. If the controller is run as a Deployment, then it needs to be sort-ordered after the CRD and before any other assets. How do we determine that?

The Create step in the CRD CRUD is the easiest to solve. It's the RUD parts that are more complicated, and those are the parts we need proposed solutions for.

@bacongobbler
Member

bacongobbler commented Jun 14, 2019

Can we look at the hard problems instead?

Absolutely. Thank you for calling us out on that.

For example, A CRD is installed and there's an operator using it in one namespace. Now, someone is installing a second operator for the same thing in a different namespace without knowledge of the first install. The CRD for the second chart has new version information (maybe even a new storage version). Does Helm install the update?

Ooh, good one. Yeah, this is a semi-common use case today where two applications rely on the same CRD. It's a good case of sharing common resources between multiple apps: how do the two reconcile a shared resource without stepping on each other's toes?

If I'm not mistaken, this particular problem applies to all global resources including CRDs. For example, it's quite possible that two apps could create the same PersistentVolume or a similar StorageClass.

I guess the same problem might also be considered for non-globally scoped resources as well. I've heard users ask about how two applications installed in the same namespace can rely on a shared postgres instance in the same namespace. There might be similar issues there too.

Crazy thought here: what if Helm didn't manage globally-scoped resources, and left that to the cluster administrator? That way the reconciliation of globally scoped resources stays with the cluster administrator. It would put Helm squarely in place as a userland tool, instead of the current system where Helm can be used as a cluster management tool, managing the entire cluster's state for an application.

Other than that, I don't have any immediate solutions that come to mind. How do other package managers reconcile shared resources between two apps? Do they just tell package maintainers that it's out of scope?

@paulczar
Contributor

It's a good case of sharing common resources between multiple apps: how do the two reconcile a shared resource without stepping on each other's toes?

If we take the stance that a Helm chart is a standalone package, then we probably shouldn't try to handle two apps sharing a resource; instead we would make it three apps.

In this case, where the shared resource is a CRD, the CRD would be the third app and would have its own method of install (whether a Helm chart that's installed first, or some other method).

That would make our stance something like "you can use Helm to manage a CRD, but it should be its own discrete package, and not be part of an operator or other chart". The user could then reach for a higher-order tool like helmfile to orchestrate composing infrastructure from multiple charts.

Crazy thought here: what if Helm didn't manage globally-scoped resources, and left that to the cluster administrator? That way we leave the reconciliation of globally scoped resources to the cluster administrator.

I'm starting to lean in this direction. A CRD is effectively an addition to the Kubernetes control plane; an app deployed by a "user" definitely shouldn't be trying to change the behaviour of the control plane, and the blast radius of a CRD CRUD operation is big enough that Helm probably shouldn't try to manage their lifecycle. Hell, it's a hard enough problem that the operator-sdk folks have built a tool specifically to deal with operator lifecycle management - https://github.com/operator-framework/operator-lifecycle-manager

@bacongobbler
Member

From Slack, courtesy of @jzelinskie:

This is a rabbithole that OLM has tried to manage for years. I believe there's a working group where we're trying to refactor/redesign something that replaces the kube addon manager that handles lifecycles of CRDs

I don't think any decisions in Helm should be made without consulting that wg

@mattfarina
Collaborator Author

@bacongobbler Thanks for chasing that down. Two thoughts...

While I'm glad people are thinking through the addon elements of CRDs, Brendan documented multiple use cases for these in a PR, of which addons was only one.

How does this all work for the operator pattern? Take, for example, the etcd operator. What if it is installed into more than one namespace by more than one team? How do we handle that case? It's not really scoped the same as a cluster addon.

@jzelinskie
Contributor

jzelinskie commented Jul 3, 2019

How does this all work for the operator pattern? Take, for example, the etcd operator. What if it is installed into more than one namespace by more than one team? How do we handle that case? It's not really scoped the same as a cluster addon.

This is something that has been flip-flopped on a bunch during the past development of OLM. At first, we tried to follow the target of least privilege. This meant that namespaces would each get their own instances of operators, but OLM would manage conflicts with installing CRDs at the cluster scope. Eventually, we were finding that many people cared about cluster-scoped operators, which have their own problems (they don't always scale to huge clusters, RBAC can be tricky, etc...). To satisfy both, OLM introduced yet another resource type, OperatorGroups. This basically lets you define the set of namespaces for a single "tenant" of operators. We spent time in the multi-tenancy working group, but honestly, Kubernetes was not designed for real multi-tenancy and it's going to be REALLY hard if not impossible to add retroactively. OpenShift Online is probably the only real multi-tenant k8s that exists, and it's a special snowflake that probably no one else should emulate.

@hickeyma
Contributor

From @joejulian in #6065:

I had a user complain that he lost all his records that were associated with a CRD when a release was deleted. We changed his chart and set the crd-install hook. Later, this same user complained that he couldn't delete and reinstall the same chart because it failed to create the CRD (it was already there). Deleting the CRD via kubectl was not an option as it would lose all the records associated with the CRD and the user was uncomfortable with using kubectl as he was only familiar with using helm. Further, he wanted multiple releases of his software in other namespaces which, again, caused an error because the CRD was already installed.

In trying to preserve that user experience and, at the same time, that user's data, we handled it by adding a value that could be changed to install the CRD (or not). This caused confusion because multiple users may install this chart into their namespaces, but exactly one has to install the chart with the CRD installation flag set. Which one? How do they know in advance?

Again, this was a less than happy user experience.

  • the user would like to install a chart
  • if there is no CRD, they would like it created
  • if there is a CRD, they would like the rest of the chart to be installed
  • they don't want to have to know in advance to set a flag to disable the creation of the CRD
  • they don't want to lose their data when a release is deleted. This data may be associated with other releases.

Suggestion - since CRDs can only be installed once and don't hold state, I propose the following (a rough kubectl sketch of the comparison follows the list):

  • if a chart has a CRD defined, and that CRD is identical to the one installed, the installation of that object is skipped and the rest of the objects are installed as normal
  • if the CRD differs from the one installed, and no flags are set on install, the installation would fail, stating that the CRD is installed and differs from the one in the chart.
  • if running helm upgrade with or without the install flag, if the CRD differs from the one installed tiller attempts to upgrade the CRD
  • if running helm upgrade with or without the install flag, if the CRD is identical to the one installed the installation of the CRD is skipped
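A rough kubectl approximation of the "identical, so skip" comparison (the CRD file name is hypothetical; render it from the chart first):

kubectl diff -f crontabs-crd.yaml
# exit code 0: identical to what is installed, safe to skip
# exit code 1: differs; fail the install and ask for human intervention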

@mattfarina
Collaborator Author

Currently, Helm v3 does not handle CRDs. We are working to fix this before the beta, which will carry a new way of handling CRDs in Helm v3. Here is the proposal we are working on...

  1. CRDs will go in a new directory in a chart. Helm will attempt to install CRDs that are not present in the cluster before rendering templates or generating the details for Capabilities within templates. In Helm v2, capabilities are generated only once, meaning that once CRDs are added they do not appear in the Capabilities object for other resources.

The two-step process is because the installation of CRDs can't really be fully declarative. You have CRDs, controllers that act on them, and CRs leveraging those, which all need to be somewhat ordered. Here we will provide the ability to ship CRDs in a manner that is obviously ordered and separate (a template-level sketch of the Capabilities effect follows the list).

  2. The crd-install hook will not work in Helm v3. Chart apiVersion v2 will not support it. For those working with chart apiVersion v1 (the Helm v2 charts), there will be a plugin for Helm v3 that provides support. The details of how the plugin will work are still in development.
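On point 1, one consequence for chart authors, as a sketch: with CRDs installed before templates render, a template can gate CRs on the CRD's API group using Helm's .Capabilities.APIVersions.Has (the group/version shown is hypothetical):

{{- if .Capabilities.APIVersions.Has "stable.example.com/v1" }}
# CRs can be rendered here; the CRD was installed in the first step
{{- end }}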

CRD workflows are not fully baked within the community. Attempting to encode some experimental ideas of workflows in Helm is something we'd like to avoid. Hopefully we can encode some processes once they are baked in the community. To be baked I'd hope they are well documented, practiced, and generally agreed to.

I have some ideas that are not fully part of the plan. Maybe something for experimentation with a plugin to help move the ball forward in the future.

If there are questions or comments on this, please feel free to speak up. There is quite a bit of nuance to the whole process and some subtle issues when you go down different paths. Supporting the crd-install hook in Helm v3 would be a significant amount of work at this point and would make the code a lot more complex. Codifying processes that aren't baked in the community could be bad if those processes don't take off.

/cc @adamreese @thomastaylor312 @technosophos

technosophos added a commit to technosophos/k8s-helm that referenced this issue Aug 16, 2019
@Morriz

Morriz commented May 7, 2020

Let's reiterate. Version management for k8s CRDs is coming in 1.16. Can we work on that and keep whatever helm does now for versions below?

https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning/

@moduspwnens
Contributor

@stuart-c

Yes other cluster wide resources can be problematic too, but how many charts would depend on something like a ClusterRole as opposed to a CRD?

ClusterRoles are pretty commonly used in charts.
https://github.com/helm/charts/search?q=ClusterRole&unscoped_q=ClusterRole

But I may be misunderstanding your point. I think ClusterRoles are less problematic because they are easier for the chart maintainer to upgrade in a backward-compatible way than CRDs are, and there's no risk of unexpected data loss if one is deleted and replaced. Regardless, it's still a cluster-scoped resource that could be modified in a way that's not backward-compatible and breaks other installs.

Being able to not deploy cluster-wide resources would be important for the use case where the user only has permissions in their own namespace, linked with the opposite case where someone wants to install only the CRDs (an admin asked to do so to allow restricted users to work).

I'd expect the admin to want to install all cluster-scoped resources in that case, right? Not just the CRDs? If s/he's OK with the user installing ClusterRoles but not CRDs, then s/he's just rolling the dice on whether or not a chart update to ClusterRoles breaks others' installs.

The trouble with giving a user a warning is twofold. Firstly, how do we even know about the other usages? I might have permissions to my namespace and globally to CRDs, but not to look within other namespaces.

Helm doesn't have to know about the other usages. The warning tells the user that there may be other usages, and forces them to acknowledge it.

I mentioned potentially showing a number of CR instances that would be deleted, and that'd require permissions the user might not have, but that's just for a clearer warning. You can still warn them without that number if you don't have permission.

Secondly, if the user is informed of other charts which might break, what are the options? Helm couldn't upgrade those other charts (like APT/RPM would), so presumably we are back to either stopping or going ahead regardless.

We are, but now the user knows what they're about to do. S/he isn't deleting a chart without realizing it could impact other installs (or cause CR instances to be deleted, if a CRD is being deleted).

@TBBle
Contributor

TBBle commented May 11, 2020

The difference between ClusterRoles (and other cluster-wide resources) and CRDs is that a ClusterRole can have a different name without affecting its behaviour, because the things that reference it can also use that different name.

CRDs are special because they are designed to be the interaction point between disconnected systems, so they need to exist at their known names, or they are not useful.

Also, deleting CRDs deletes other things. No other cluster resources do that directly; they may do it by action of an operator or controller, of course.

To contrast with rpm and apt, as has been done earlier, a CRD is like /var/lib/mariadb. Sure, it was created by a package, but if you delete it when you remove the package, you delete the user data too.

If two users define a ClusterRole differently through Helm values, which should win? Wider, narrower, union, intersection, first, last?

Unlike other cluster-wide resources, CRDs have an evolution mechanism and multi-versioning system defined since 1.11.

For CRDs, we know when we can resolve this safely (one will be backwards-compatible to the other) and otherwise we need the administrator to make a choice. Ideally everyone's Helm charts will only ship backwards-compatible CRDs, and so rarely will we need the administrator to be explicit.

For me, the right conceptual behaviour is that Helm only ever silently installs or upgrades CRDs, supports explicitly downgrading or force-replacing them, and never deletes them.


Helm's current approach is to not touch CRDs, except that it will install them for you if you don't have them already. Otherwise it's up to the cluster administrators, i.e. the ones who have permission to install CRDs and other Cluster-wide resources, to maintain those CRDs, similarly to how they would maintain them without Helm.

This is a starting point, it's not "mission accomplished" for CRDs. Per #7735 (comment), the lowest-hanging fruit could be an --overwrite-crds flag to complement the --skip-crds flag, which would let you explicitly state that you trust your chart producer not to have broken their CRDs between releases, that you are installing the latest release on the cluster, and that you aren't going to break some other admin's installation of a newer version of the same operator.
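For reference, --skip-crds is a real flag today, while --overwrite-crds is the hypothetical addition:

helm install my-release example/chart --skip-crds        # exists today
helm upgrade my-release example/chart --overwrite-crds   # hypothetical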

My mooted proposal earlier is intended to go further than that: make the known-safe cases silently correct, make the unsafe cases explicit as to why they are unsafe, and allow a resolution.

It's worth including in this discussion that there are operators like https://github.com/jaegertracing/jaeger-operator which are designed to support multi-install, each operator monitoring only their own namespace, but which are going to need to share CRDs. So we can't assume the cert-manager policy of "CRDs are just global resources, and you'll only ever have one cert-manager instance installed on your cluster" applies to all Helm charts.

@lucendio

lucendio commented Jun 1, 2020

Hello everyone,

I tried to go through all the discussions to see if the following thought has been brought up already. I couldn't find any evidence, but I may have overlooked something. If that's the case, please point me in the right direction. Anyway, I'm looking forward to your feedback.

AFAIU, CRDs allow the definition and availability of multiple API versions at the same time, identified by name (spec.versions.*.name). These apiVersions are referred to in every instance of the corresponding custom resource.

UPDATE: the following was already described in a more general way in this issue's description. So I'm wondering: what is wrong with this approach?

So, every time Helm tries to apply a CRD, it could check whether each API version (spec.versions.*.name) already existed in the previous release, and whether the schema of each existing API version is still identical to the schema in the previous release. If it differs, Helm throws an error. This way, CRD authors can only introduce (breaking) changes by adding a new API version entry to spec.versions.
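A minimal Go sketch of that additive-only rule, keyed on version name with an opaque schema fingerprint (hypothetical helper, not Helm code):

package crdcheck

import "fmt"

// AdditiveOnly returns an error unless every API version present in the
// previously released CRD still exists with an unchanged schema; adding
// brand-new version names is the only permitted change.
func AdditiveOnly(prev, next map[string]string) error { // name -> schema hash
	for name, oldSchema := range prev {
		newSchema, ok := next[name]
		if !ok {
			return fmt.Errorf("API version %q was removed", name)
		}
		if newSchema != oldSchema {
			return fmt.Errorf("schema of API version %q changed", name)
		}
	}
	return nil
}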

@kyounger

kyounger commented Jul 3, 2020

Would considering an option like --only-crds make sense, where Helm would skip everything except the CRDs? One of the two recommended approaches in the docs is to install CRDs as a separate chart. This would allow reuse of the chart package for two release installs: 1) a release for the CRDs at a specific version, and 2) a release of the chart templates at that same version.

I think this would also work well with k8s API versioning. You would just install multiple releases of the chart (using the proposed --only-crds) at different versions, and you'd have the ability to take advantage of multiple CRD versions.

I readily admit that it doesn't trigger my "elegance" spidey-sense. But typically these CRDs are developed in the same git repo and versioned together anyway, so I think we all agree it makes sense to keep them in the same chart. This would just provide a way to keep that pattern and easily implement the recommended docs approach #2.
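The two-release pattern would then look something like this (--only-crds is the hypothetical flag; --skip-crds exists today):

helm install my-app-crds example/chart --only-crds   # hypothetical
helm install my-app      example/chart --skip-crds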

@TBBle
Contributor

TBBle commented Jul 5, 2020

That would probably make more sense as a command, install-crds, I think. Since CRDs are not templated, for example, a lot of the options (like --values) wouldn't make much sense to have available. It'd be specifically useful in the situation I mentioned above, where you want to use a chart that depends on CRDs but do not yourself have permission to install CRDs on a cluster, so you can ask the administrator to install just the CRDs for you.

I still intend to try to implement my ideas above: --force-crds, --overwrite-crds, or similar as the inversion of --skip-crds, and a (plugin, at first) CRD-management command (I called it update-crds earlier; it probably overlaps with the install-crds idea) which can attempt to correctly update CRDs by default and, when it cannot, gives the user the right levers to do it, with an eye to building that into Helm's upgrade/install flows.

@monotek

monotek commented Jul 6, 2020

How about that workflow:

If "helm upgrade" detects that a CRD would be overwritten, it should fail with something like: "CRD has same version as the one installed or no version at all. Use --force (onyl if you know what you're doing) to overwrite it anyway or --skip-crd (recommended) to leave it untouched."

If "helm upgrade" finds a CRD and the version to be installed has a new / another version it should be installed additionally, keeping the old CRD version in place too.

Maybe an additional hint could also be added to "helm upgrade" that charts should use versioned CRDs only. Even "helm install" could already warn about usage of unversioned CRDs.

@bacongobbler
Member

See helm/community#138 for the explanation why we are not considering that approach.

@github-actions

This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

@KlavsKlavsen

If I understand this correctly, the Helm project has abandoned trying to support the CRD lifecycle, which means "something else" will have to be written to update CRDs in Helm charts?

@monotek

monotek commented Jun 8, 2021

FluxCD can ;-)

https://fluxcd.io/docs/components/helm/helmreleases/#crds
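For reference, the linked Flux feature is a per-HelmRelease policy; an abridged sketch (chart and interval omitted; check the linked docs for the current apiVersion and allowed values):

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-release
spec:
  install:
    crds: CreateReplace   # or Create / Skip
  upgrade:
    crds: CreateReplace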

@kvaps

kvaps commented Aug 21, 2023

Heya, I just added a workaround plugin to solve this problem:
https://github.com/kvaps/helm-ensurecrds
