
Support composition in kustomize #1251

Closed
apyrgio opened this issue Jun 25, 2019 · 57 comments · Fixed by #2168

@apyrgio
Contributor

apyrgio commented Jun 25, 2019

We (@arrikto) have been using kustomize lately in order to deploy our software on Kubernetes, using a single base and multiple overlays for each deployment site. This is the typical inheritance scenario that kustomize supports.

As the number of deployments grows, however, so do the things that are common between them. For example, consider the following deployments:

  • dev: Consists of configuration logic for Minikube, local storage, image pull secrets, app secrets.
  • test: Consists of configuration logic for GKE, local storage, app secrets.
  • staging: Consists of configuration logic for GKE, S3, app secrets.
  • prod: Consists of configuration logic for GKE, S3, app secrets.

This is a composition scenario, where common configuration logic is put into components for reusability. If one wants to achieve it with kustomize, they would probably create intermediate overlays for each component, and multi-base overlays for each deployment, e.g.:

                    +--------------+
                    |              |
       +----------->+  Component1  +----------+
       |            |              |          |
+------+-------+    +--------------+      +---v----+
|              |                          |        |
|  Deployment  |                          |  Base  |
|              |                          |        |
+------+-------+    +--------------+      +---^----+
       |            |              |          |
       +----------->+  Component2  +----------+
                    |              |
                    +--------------+
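
Concretely, the topology above might be expressed with kustomization files like the following sketch (the file paths and patch names are hypothetical, not taken from the example repo):

```yaml
# deployment/kustomization.yaml: the multi-base overlay
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../component1
- ../component2
---
# component1/kustomization.yaml: an intermediate overlay (component2 is analogous)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base
patchesStrategicMerge:
- component1-patch.yaml
```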

Unfortunately, this does not work, since kustomize ends up duplicating the YAMLs of the base. We have an example topology here that showcases this error:

(NOTE: You need kustomize v2.1 to run the examples)

$ kustomize build github.com/ioandr/kustomize-example//diamond/overlays/aggregate
Error: accumulating resources: recursed merging from path '../overlay2': may not add resource with an already registered id: extensions_v1beta1_Deployment|~X|my-deployment

If we change our components to not point to the base:


                           +--------------+
                           |              |
              +----------->+  Component1  |
              |            |              |
              |            +--------------+
              |
       +------+-------+       +--------+
       |              |       |        |
       |  Deployment  +------>+  Base  |
       |              |       |        |
       +------+-------+       +--------+
              |
              |            +--------------+
              |            |              |
              +----------->+  Component2  |
                           |              |
                           +--------------+

This works for creating new resources, but fails for patches, since patches must refer to resources that are defined in one of their bases. We have an example topology here that showcases this error:

$ kustomize build github.com/ioandr/kustomize-example//mixin/overlays/aggregate
Error: accumulating resources: recursed accumulation of path '../overlay1': failed to find target for patch extensions_v1beta1_Deployment|my-deployment

It seems that currently there's no better way than duplicating code across overlays, which is not maintainable in the long run. Still, we strongly believe that some form of composition support would make kustomize much more usable. This is echoed in other issues too (#759, #942), as well as high-profile projects like Kubeflow, which needed to create a tool on top of kustomize to support composition.

Are there any plans to support composition in kustomize and, if not, does the core team have a better suggestion to tackle this configuration reusability issue?

/cc @monopole @Liujingfang1

@jbrette
Contributor

jbrette commented Jun 26, 2019

@apyrgio Please have a look at #1253.

There is still a big issue to address (variables pointing at names), but the behavior seems to be coming along.
When reproducing your setup here, this is the kustomize build output:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: p1-my-deployment
spec:
  template:
    spec:
      containers:
      - image: my-image
        name: my-deployment
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: p2-my-deployment
spec:
  template:
    spec:
      containers:
      - image: my-image
        name: my-deployment

/cc @monopole @Liujingfang1

@alexferl

alexferl commented Jun 26, 2019

@jbrette I saw your PR #1253 yesterday and tried it locally, but it didn't fix my issue with the setup that you can see here.

The interesting thing is that if you comment out line 6 (- frontend) in base/kustomization.yaml and development/kustomization.yaml, you actually get the expected output for the backend deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    env: dev
  name: my-backend
spec:
  replicas: 2
  selector:
    matchLabels:
      env: dev
  template:
    metadata:
      labels:
        env: dev
    spec:
      containers:
      - image: my-backend-image
        name: my-backend

It's really when you add another resource that things go wrong.

@apyrgio
Contributor Author

apyrgio commented Jun 26, 2019

Hi @jbrette, thanks for taking the time to look into this issue.

I've checked out your PR and built kustomize from it. With your changes, our diamond example builds correctly. However, when we add patches to the mix, I'm afraid it fails like before. You can see this in our updated diamond example, where the two overlays patch the same base resource:

$ kustomize build github.com/ioandr/kustomize-example//diamond-with-patches/overlays/aggregate
Error: accumulating resources: recursed merging from path '../overlay2': may not add resource with an already registered id: extensions_v1beta1_Deployment|~X|my-deployment

The interesting thing here is that if the same two patches were defined in a single overlay, kustomize would operate as expected. This makes me believe that the error we see here is a side-effect of how kustomize builds YAMLs when the load paths diverge, and not the intended behavior. Is this the case, or am I missing something?

Also, I think that the diamond example in your repo refers to a different use case (see here). We explicitly don't use namePrefix in our overlays, because the point of the composition scenario is to deploy a single app with N patches, not N apps with 1 patch each. So, if you're ok with this distinction, let's experiment on examples with no namePrefix, to make this issue simpler.

@Liujingfang1
Contributor

Kustomize expects different overlays of one common base to be for different apps. Then, in the aggregated overlay, you have multiple apps. Adding a namePrefix, nameSuffix, or namespace in overlays is what makes them different apps. @apyrgio, if you want to keep a single app, you can apply patches in the same kustomization, rather than using multiple overlays.

@apyrgio
Contributor Author

apyrgio commented Jun 26, 2019

Thanks for the clarifications @Liujingfang1. So, if I understand correctly, multi-base overlays are used to duplicate the resources of a base, either via name prefixes/suffixes or namespaces, to support this use case:

                +------+
                |      |
       +--------> App1 +-------+
       |        |      |       |
+-----------+   +------+   +---v----+
|           |              |        |
|  Project  |              |  Base  |
|           |              |        |
+-----------+   +------+   +---^----+
       |        |      |       |
       +--------> App2 +-------+
                |      |
                +------+

I can see the merits of this use case, but I find it surprising that it can't co-exist with the use cases presented in this issue.

If it helps, I think that the surprising part is that people use kustomize with the expectation that, given a K8s resource ID, they can apply one or more patches to it. They can do so within an overlay, and they'd expect to be able to do so from different overlays as well. In the examples that @AdmiralObvious and I have supplied, this can work, e.g., by patching the same resource sequentially, in the order that the patches were defined.

And yes, one can define patches in a single place and refer to them in various kustomizations using --load_restrictions none. This flag is a bit permissive, though, and it doesn't address the issue of generators, which can only be defined in kustomizations.

If there are no serious semantic issues, I'd urge you to reconsider this limitation. I believe it will be helpful for a lot of kustomize users, without breaking existing use cases. If there are though, can you please explain for posterity what they are?

@jbrette
Contributor

jbrette commented Jun 26, 2019

@AdmiralObvious @apyrgio Thanks for the additional examples you provided today. I realigned the examples with what you provided:

  1. We removed the prefixes from the diamond example and went back to the original example here, and it seems to work (with the PR). But we agree that as long as it can't handle something like your diamond-with-patches or mixin examples, it is not really useful. Will attempt to address that later.

  2. I took @AdmiralObvious's examples and changed the kustomization.yaml a little in both development and production. The output seems to be the expected one (also with the PR): here.

Please double check. The second set of examples shows that composition can work to some extent, but that the structure of the folders and kustomization.yaml is crucial.

@paveq

paveq commented Jun 27, 2019

It would be super-useful to support mixins / composition.

As example, we currently have structure set up like this:

                                                                 +--------------+
                                                              +--+App Deployment|
                                                              |  +--------------+
                                                              |
                                                              |  +------------+
                                                              +--+MySQL       |
                                                              |  +------------+
                         +----------+                         |
                     +---+app|prod  +<---+                    |  +--------------+
                     |   +----------+    |      +--------+    +--+Redis|cache   |
+--------------+     |                   +------+App base+<---+  +--------------+
|client|A|prod +<----+                   |      +--------+    |
+--------------+         +----------+    |                    |  +--------------+
                     +---+app|dev   +<---+                    +--+Redis|session |
+--------------+     |   +----------+                         |  +--------------+
|client|B|dev  +<----+                                        |
+--------------+     |                                        |  +--------------+
                     |                                        +--+Varnish       |
+--------------+     |                                        |  +--------------+
|client|A|dev  +<----+                                        |
+--------------+                                              |  +--------------+
                                                              |  |Persistence   |
                                                              +--+Layer         |
                                                                 +--------------+

  • Each client has their own final overlay with production and development instances. These overlays contain minimal configuration, basically only a kustomization.yaml.
  • Resource requests and limits are set at the next level, in "app-prod" and "app-dev".
  • "App-base" just collects sub-components and ties them together.
  • In the bottom layer we have components for the application itself and its dependencies, such as MySQL, Varnish, and the Redis instances.
  • The persistence layer has PVCs and patches for MySQL and Redis-session to persist data. In theory, without this everything can be deployed in an ephemeral fashion (emptyDir).

Considering the architecture, I would prefer to move some of the components to a higher level, and mix-and-match different components (e.g. use a different kind of MySQL setup, or skip the persistence layer). But the deployment requires a variable reference to the MySQL hostname, and references don't work unless MySQL is directly under "app-base". With an inheritance-only approach, despite it looking modular, it's still very difficult to make any change that should not be applied to every instance, unless the entire tree is split.

The persistence layer also contains patches that need to be applied on "app-base", since they apply to sibling components.
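
For reference, the variable reference mentioned above is what kustomize's vars field expresses; a minimal sketch of how a MySQL component might export its service name (file paths and names are illustrative):

```yaml
# components/mysql/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- mysql-service.yaml
vars:
- name: MYSQL_SERVICE_NAME
  objref:
    kind: Service
    name: mysql
    apiVersion: v1
```

A deployment can then use $(MYSQL_SERVICE_NAME) in, e.g., a container env value, but the substitution only happens if the var's target is reachable from the kustomization being built, which is the limitation described above.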

@alexferl

alexferl commented Jun 27, 2019

@jbrette yes, your modifications on my example make it work, even on kustomize 2.1.0. Thank you! Indeed, getting the right folder/kustomization.yaml structure is crucial, and tricky in my opinion. I tried several permutations myself before coming here for help.

One thing I can see that might be less desirable with that method is how long the resources list could get in base/kustomization.yaml. We have tens of apps, each with 2 to 10+ YAML files, so the list will get long pretty quickly.

@jbrette
Contributor

jbrette commented Jun 27, 2019

@paveq Even though I'm still working on getting the diamond "import" of files, you don't strictly need it to achieve composition and mixins: have a look at the mixin example.

Each component comes configured with default ports. Those values are updated according to prod and dev.

/cc @AdmiralObvious @apyrgio

@monopole
Contributor

monopole commented Jun 28, 2019

@paveq, @apyrgio Love the ascii art above!

@jbrette @ioandr Also, appreciate the examples

It would be immensely helpful if someone wrote a test-only PR showing the inputs, the current output, and some commentary about what the desired output should be.

We can then merge it, and when the bug / feature request is fixed, one can edit the test to prove it. See #1279 which updates a test entered in #978

As a start, someone could copy/paste this: pkg/target/diamonds_test.go

The inputs are in YAML, as is the output. The case covered is

//         dev      prod
//             \   /
//            tenants
//          /    |    \
//      kirk   spock  bones
//          \    |    /
//             base

with different examples of patch addressing as you go up the stack.

So - kustomize supports mixins (compositions) in the sense that any kustomization can combine other kustomizations. At the end of the day, all resources need unique names. If you send a stream of YAML to k8s with a namespace-GVKN defined twice, the last one will overwrite the first (maybe you want this, maybe you don't).

One could add another layer on top of the above example combining dev and prod, if one wanted to deploy them in one kustomize build . | kubectl apply -f - command (i.e. send dev and prod to the same cluster).

But for that to work, the kustomizations at dev or prod should add either a namespace, namePrefix or nameSuffix directive to keep dev and prod objects distinct.
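
A sketch of such a combining layer, under the assumption that each overlay already sets its own prefix (paths are illustrative):

```yaml
# combined/kustomization.yaml: sends dev and prod to the same cluster
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../dev    # its kustomization.yaml sets namePrefix: dev-
- ../prod   # its kustomization.yaml sets namePrefix: prod-
```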

Anyway, please let's express more of these desires as tests.

@paveq

paveq commented Jun 28, 2019

@monopole One thing that would help composition would be for an overlay Kustomization to be able to define a prefix or suffix for the base it is using, and to be able to pull in the same base multiple times. For example, in my case I'm using Redis two times, but currently I have to have them in two separate bases, as I need different names for them.

As example: syntax could be:
kind: Kustomization

resources:
- components/redis
  params:
    nameSuffix: "-cache"
- components/redis
  params:
    nameSuffix: "-session"

With this syntax, it might be possible to pass some other parameters (e.g. pass variables to the base?) to the base layer in the future as well.

I'll attempt to build an example of the input I would like to see working, and the result I would like to see.

@apyrgio
Contributor Author

apyrgio commented Jun 28, 2019

@jbrette: Nice job adapting @paveq's use case into a kustomization example.

I checked out the files and have some minor questions:

  1. What is the kind: Values CRD in the example? Is it a CRD that is solely used to hold arbitrary values?
  2. The dev/prod overlays define some Values resources that are not used by anyone. Are they there on purpose?

Also, one final observation, though @paveq can weigh in on this as well. I think his last issue was that moving logic higher up in the hierarchy (in this case, making the persistence layer opt-in) cannot easily be combined with other overlays when both patch the same resources. Your example side-steps this issue because it does things linearly, but slightly more complex examples would hit this limitation. So, if I understand correctly, your example partially solves @paveq's use case, right? For it to be fully solved, we need some form of composition support higher than the base.

@jbrette
Contributor

jbrette commented Jun 28, 2019

@monopole Added a go version of the mixin PR Commit

@apyrgio @paveq @AdmiralObvious @ian-howell

  1. The CRD concept is really important.
    a. You can have any CRD you want. They act as a catalog to share data across constructs.
    b. The variable acts as a reference/pointer to another object. Hence it is very similar to what you can do when you compose complex structures in C++, Java, or Go. Kustomize supports inlining of entire trees. See PR and Example
  2. The values in the final clients are not used. Kind of confusing. Forgot to remove them.
  3. If you combine the autovar, the inlining of trees, and the fact that Values are CRDs... you can apply standard kustomize patches at the app/dev and app/prod level on those Values CRDs, and you will have your opt-in/opt-out feature. Will update the example later.

@apyrgio
Contributor Author

apyrgio commented Jun 28, 2019

As per @monopole's suggestion, we created a PR that shows the current kustomize limitation, with an example that mimics the diamond test that @monopole quoted.

We tried to keep it as simple as possible, but we hope that it adequately describes the functionality that the people interested in this issue want to see in kustomize.

@monopole
Contributor

@apyrgio that's great. Let's get that in as an example to point to.

Can you comment on why you don't want to do this as stacked overlays?

@monopole
Contributor

@paveq,

For example, in my case I'm using Redis two times, but currently I have to have them in two separate bases, as I need different names for them.

I think you can use one base and two overlays that change the name, e.g.
https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/target/diamonds_test.go
but with a name suffix instead of a name prefix.

@apyrgio
Contributor Author

apyrgio commented Jun 29, 2019

@monopole You mean why we can't convert the example from this:

     composite
   /     |     \
probe   dns  restart
   \     |     /
       base

to this?

composite
    |
  probe
    |
   dns
    |
 restart
    |
  base

There are two reasons. First, you can think of the example as a simplified version of:

    prod    dev
   /   \  /    \
probe   dns  restart
   \     |     /
       base

The base layer contains the vanilla app, the middle layer contains some modifications you can apply to it, and the upper layer contains the deployments of the app, each with a different choice of modifications. This example cannot be built with kustomize if the probe, dns and restart overlays patch the same base. Also, it can't be converted to two stacks without copying things around (here the dns overlay; in larger projects, a lot of overlays).

The second reason is semantics. If you've isolated some common knobs of an app into components/overlays, how can you enforce a dependency order on them? Why should the probe overlay depend on the dns overlay, given that they are unrelated? What if a second deployment wants a different order?

@jbrette
Contributor

jbrette commented Jul 1, 2019

@paveq @monopole @apyrgio @AdmiralObvious @ioandr

Please have a look at the new version of the PR. I think we got it. Your new test is now returning the proper values: check here

@ioandr
Contributor

ioandr commented Jul 1, 2019

Hi @jbrette,

thanks for your focus on this, we really appreciate your efforts. We believe this is getting really close to the desired result; however, we would like to point out a few things.

We recently contributed another PR (#1290) that makes the diamond composition test case slightly harder to solve. With this change, unfortunately, your PR fails to produce the expected output. Here, the reason is that converting a patched resource into a patch itself can re-introduce a changed field.

We’d go one step further though, and argue that any type of merging that happens after the patches have been applied is inherently ambiguous, unless you recreate the patches somehow. A simpler approach may be to traverse the overlays depth-first and make patches that target the same GVKN apply to the same resource, thereby side-stepping any conflicts.

@monopole We would love to hear your opinion on this, and specifically whether this approach is in the right direction.

@monopole
Contributor

monopole commented Jul 1, 2019

OK, now that we've got a load of tests, we're making progress.

Would it be reasonable to retitle this issue as:

Support patch reuse in kustomize.

Two ways to do so:

Run kustomize build with --load_restrictor none.

This lets paths in the patchesStrategicMerge field reach outside the kustomization root, meaning one can put patch files in some common on-disk location and share them between kustomizations.

Downsides:

  • Your kustomization is no longer relocatable, addressable via a git URL, and git-clonable. It's no longer self-contained.

  • All load restrictions are removed, opening the user to whatever threat the load restrictor was meant to address.

See TestIssue1251_Patches_ProdVsDev

Use the new transformers field (use plugins).

Like the patchesStrategicMerge field, this lets one reference locally defined transformer plugin configs, but the transformers field also lets one reference remote kustomizations (just like the resources field).

So define three transformer configs representing the three patches, give each one its own kustomization file (making them relocatable), then reference these kustomizations in the transformers field.

See TestIssue1251_Plugins_ProdVsDev

This test uses the builtin PatchJson6902Transformer plugin, with three different instances (configured to add the probe, add the dns policy, and add the restart policy).

Plugins were introduced both as an extension point and as a means to do this kind of modular reuse.

Plugins are alpha, so the exact mechanics and notation for using them will evolve as we all figure out the best way to use them. But the concept is built into the core of kustomize now - all "native" operations use a builtin plugin.
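
A sketch of the plugin approach described above (the file names and the target deployment are illustrative, and the alpha notation may change):

```yaml
# add-probe/kustomization.yaml: a relocatable, reusable patch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- transformer.yaml
---
# add-probe/transformer.yaml: a builtin PatchJson6902Transformer config
apiVersion: builtin
kind: PatchJson6902Transformer
metadata:
  name: add-probe
target:
  group: apps
  version: v1
  kind: Deployment
  name: my-deployment
path: probe-patch.yaml
---
# dev/kustomization.yaml: composes the base and the reusable patch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
transformers:
- ../add-probe
```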

@monopole
Contributor

monopole commented Jul 1, 2019

Forgot to mention that the tests referred to in the previous comment are in
https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/target/diamondcomposition_test.go

They have ProdVsDev in their names because they do the example you described above:

    prod    dev
   /   \  /    \
probe   dns  restart
   \     |     /
       base

Thanks for these specific examples.

@apyrgio
Contributor Author

apyrgio commented Jul 1, 2019

Thank you very much for the prompt reply and the abundance of tests. We tried to dig through the suggested solution and we have some comments below. We don't have much experience with transformers, so please correct us if we've understood something wrong.

As we've mentioned in a previous exchange with @Liujingfang1, we don't see composition as patch reuse. We need to reuse kustomizations as well, because they allow us to do the following in a single component:

  1. Specify generators for maps and secrets.
  2. Add new resources.
  3. Alter component resources using builtin or custom transformers.
  4. Define vars

From what we gather, specifying individual transformers is an interesting way to side-step the load restrictions limitation but, unfortunately, they do not cover the above points. For instance, a developer can't define a transformer that will create a configMap and use it on a Deployment. And that's a bummer, because all these things are currently expressed with overlays, and we are so close to making them work.

As for the title, we're definitely not fixated on it. It's just that we wanted an umbrella term for the act of putting everything kustomize supports into a box and using it from various places. If patch reuse was our sole concern, we'd probably be ok with --load_restrictions none for a while.

Does our end goal make more sense now? If so, feel free to change the title to whatever seems more fit. Also, we'd be more than happy to update the test scenario with an overlay that uses a configMapGenerator, but it would unfortunately break some of the new test cases.

@monopole
Contributor

monopole commented Jul 1, 2019

TestIssue1251_Plugins_ProdVsDev solves the specific problem (prod, dev, probe, etc). This specific problem was a patch reuse problem, solved with three reusable kustomizations, each containing a single jsonpatch transformer. I'd do it that way, and not use the --load_restrictions none method. I was just curious how that approach would look, so I wrote it up, and it gave the same answer as the transformers approach. Either way gets the answer you wanted (DNS and prober in one deployment, DNS and restart in the other, using a common base).

For instance, a developer can't define a transformer that will create a configMap and use it on a Deployment. And that's a bummer,

This test has many examples of generated configmaps working with deployments. It's a core feature. Though we'd say ConfigMaps are generated by generators, not transformers, and I'm not sure if you made a typo or meant to use the word transformer.
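
As a reference point for the generator flow (names and values are illustrative):

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
configMapGenerator:
- name: app-config
  literals:
  - LOG_LEVEL=debug
```

Any reference to app-config in deployment.yaml (e.g. an envFrom configMapRef) is rewritten at build time to the generated, hash-suffixed name, so the Deployment and the generated ConfigMap stay in sync.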

There are also many examples of kustomization file reuse, as that's the whole point of allowing kustomizations to be specified in the resources: field. If Alice makes a kustomization to deploy minecraft, Bob can use it as a base and adapt it to his needs. But I don't want to have a hand-wavy debate.

we'd be more than happy to update the test scenario with an overlay that uses a configMapGenerator, but it would unfortunately break some of the new test cases.

Yes! Please do; that's ideal.

A passing test that exhibits undesirable behavior is the best way to point out the need for (and get) the support, or alternatively, to have someone point out an alternative approach that doesn't require a new feature.

Composition is a good idea, which is why kustomize supports it. But the devil is in the details, and Bob's view of how composition is achieved might not match Alice's.

You might want to chime in on #1292. It's related, and would be another way to solve this that doesn't work yet. I'd say @jbrette was working in this direction, but it needs a wider scope and some careful thought about backward compatibility.

@apyrgio
Contributor Author

apyrgio commented Jul 2, 2019

TestIssue1251_Plugins_ProdVsDev solves the specific problem (prod, dev, probe, etc).

This test has many examples generated configmaps working with deployments. It's a core feature. Though we'd say ConfigMaps are generated by generators, not transformers, and I'm not sure if you did a typo or meant to use the word transformer.

Ah, I see now the source of the confusion. I thought you proposed custom transformers as an alternative to the components we are describing in this issue, which is why I pointed out that transformers don't do all the neat things that overlays do. But you suggested them as a solution to the patch reuse problem specifically, which flew over my head.

Yes! Please do; that's ideal.

We have opened another PR with a new test to further clarify our needs and preserve existing test cases intact. Feel free to comment and review.

You might want to chime in on #1292. It's related, and would be another way to solve this that doesn't work yet. I'd say @jbrette was working in this direction, but it needs a wider scope and some careful thought about backward compatibility.

The proposed merge strategy is a solution to the resource ID conflict that we describe at the start of this issue, right?

That's excellent news! This functionality would be a great addition to kustomize. We'll further comment on #1292 for any implementation details. Also, I'd propose keeping this issue open for a while, so kustomize users in need for components/mixins can comment on whether the approach described in #1292 covers their case.

@steinybot

steinybot commented Mar 26, 2020

Would it also be possible to solve this by changing the way that bases are loaded?

My naive understanding was that bases worked the same way as an "include" such that:

overlays/dev/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: foo-dev
bases:
  - ../../base
  - ../common/node-port
commonLabels:
  variant: dev

base/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - api-service.yaml

overlays/common/node-port/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - foo-service.yaml
patchesJson6902:
  - target:
      version: v1
      kind: Service
      name: api
    path: node-port.yaml

Was equivalent to:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: foo-dev
resources:
  - api-service.yaml
  - foo-service.yaml
patchesJson6902:
  - target:
      version: v1
      kind: Service
      name: api
    path: node-port.yaml
commonLabels:
  variant: dev

This doesn't work right now and fails with:

Error: accumulating resources: recursed accumulation of path '/Users/jason/source/bug-reports/overlays/common/node-port': no matches for OriginalId ~G_v1_Service|~X|api; no matches for CurrentId ~G_v1_Service|~X|api; failed to find unique target for patch ~G_v1_Service|api

It seems to me that if bases were loaded in order and could reference resources loaded before them, then that would be enough to support composition.

I have a more complete example of what I was trying to do here https://github.com/steinybot/bug-reports/tree/kustomize/patch-service.

@apyrgio
Contributor Author

apyrgio commented Mar 30, 2020

Right, instructing kustomize to load the resources in order is the gist of @pgpx's PR. For example, in your case, you should be able to convert the overlays/common/node-port overlay to kind: KustomizationPatch, and it should work.
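
Based on my reading of the PR, the conversion would be a one-line change in the overlay (the exact kind name follows the PR and may still change):

```yaml
# overlays/common/node-port/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: KustomizationPatch  # was: Kustomization
resources:
- foo-service.yaml
patchesJson6902:
- target:
    version: v1
    kind: Service
    name: api
  path: node-port.yaml
```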

@steinybot

@apyrgio Is there a reason not to make this new load order for KustomizationPatch the default for Kustomization?

It feels a bit off to me that the kind dictates the load order within the thing that is loading it. This will limit the ways in which things can be composed in that the component has to be aware of the fact that it may be composed. The concern of loading is backwards.

Will I be able to change everything to KustomizationPatch? In other words can I use KustomizationPatch as the kind of the top level kustomization.yaml file? If not then this means that it can't be composed into a larger set of resources.

@apyrgio
Contributor Author

apyrgio commented Mar 31, 2020

Ah, I see what you mean. Ok, I'll try to take your points one by one and comment on them with my understanding and the authoritative responses we have for now:


Is there a reason not to make this new load order for KustomizationPatch the default for Kustomization?

So, you ask why the resources in this scenario:

resources:
  - ../base
  - overlay # Patches base but does not point to it directly

are handled in parallel (so overlay cannot patch a resource in base) and not in order (which would work and is what KustomizationPatch ends up doing).

It seems that Kustomize has this supported use case, whereby a single base ends up being copied multiple times, with a slightly different namePrefix, nameSuffix or namespace. In order to achieve this, it must handle the resources in parallel by default.
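
For concreteness, a minimal sketch of that use case (hypothetical layout): an aggregated overlay that stamps out two renamed copies of the same base.

```yaml
# variants/kustomization.yaml (hypothetical paths)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - staging  # staging/kustomization.yaml references ../base and sets namePrefix: staging-
  - prod     # prod/kustomization.yaml references ../base and sets namePrefix: prod-
```

Each entry builds its own copy of the base in isolation, and the renamed results are merged afterwards; an in-order, shared-pool accumulation would change what the second entry sees.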

The intention is to deploy multiple variants of the same application with a single command. Personally, I have reservations on whether this is a good pattern, or something that is widely used. In any case, this is further reinforced by @Liujingfang1's comment:

Kustomize expects different overlays of one common base are for different apps. Then in the aggregated overlay, you have multiple apps. Adding nameprefix, namesuffix or namespace in overlays are to make them for different apps.

So, in a nutshell, Kustomize must keep the default load order to maintain backwards compatibility.


It feels a bit off to me that the kind dictates the load order within the thing that is loading it. This will limit the ways in which things can be composed in that the component has to be aware of the fact that it may be composed. The concern of loading is backwards.

This is a valid concern. In fact, @monopole had proposed roughly what you're saying, that the aggregated overlay should specify how the resources should be merged (see #1292 (comment)). I liked this approach but, unfortunately, it never took off. Still, this is something that we could revisit.

Still, if you think about it, the problematic overlay in the example I mentioned above:

  resources:
    - ../base
->  - overlay # Patches base but does not point to it directly

attempts to patch a resource it has never defined. So, it does know that it's a component of sorts and expects that it will be composed. We could even generalize this by saying that any overlay that transforms something it has not defined itself, is aware that it's a component and must be composed.

Now, there are components that don't transform resources but create/generate them instead. In this case, you're right, they can be used both as components and as base overlays, so specifying a single kind is limiting. I don't have an answer to that, I'm afraid. My gut feeling though is that this is a situation that will rarely, if ever, be encountered in actual cases.


Will I be able to change everything to KustomizationPatch? In other words can I use KustomizationPatch as the kind of the top level kustomization.yaml file? If not then this means that it can't be composed into a larger set of resources.

That's a very interesting observation and I think it's something @pgpx should have in mind before their PR is merged. If KustomizationPatch simply dictates that an overlay should be loaded in order, not in parallel, then yes, you should be able to use it anywhere on the stack, even on bases and top-level overlays. If this is not the case, then I'd consider this as an implementation limitation that should be fixable.


Sorry for the long comment but, as you see, there's a bit of a backstory to the points you've raised.

Looking back, this is an issue that is over 9 months old, has a steady influx of comments and upvotes, yet no engagement from the core team for the past four months. That's a pity because, while it's understandable that there are other top-level priorities, this issue is the second most upvoted and the second most commented ever in this repo.

It would help the Kustomize users that stumble onto these limitations, and then on this issue, if there was either an active discussion on how to improve the situation, or a mention of a roadmap item, or a design document, or something actionable in general. Else, the feeling they get is that "this seems like a non-issue for the Kustomize team, I guess I'll have to copy-paste stuff or use another tool".

/cc @monopole @Liujingfang1 @pwittrock

@pgpx
Contributor

pgpx commented Mar 31, 2020

@apyrgio Good answer - thanks! Regarding using KustomizationPatch at the top-level - it does then just work as a normal Kustomization, and there is even a test case for it (TestApplyingKustomizationPatchDirectlySameAsKustomization).

The basic idea is that a KustomizationPatch is aware/can use any resources defined before it in its parent's list of resources (if any). This would be the same as defining those resources directly in the KustomizationPatch and removing them from its parent, though those resources would then be fixed.

And I agree that any kind of feedback from the Kustomize team would be helpful, even if just to say that they are very busy / have real-world issues.

@pwittrock
Contributor

@apyrgio catching up on this issue, will try to respond within a week.

@apyrgio
Contributor Author

apyrgio commented Apr 10, 2020

@pwittrock that's exciting, thanks for taking a look at this issue :-). BTW, I took the liberty of putting a discussion topic for this feature on sig-cli's agenda, so that we can answer questions more easily. I'm not sure if sig-cli is the proper place for community meetings regarding Kustomize so, if there's another one, please point me to it and I'll move it there.

@pwittrock
Contributor

@apyrgio Thanks. That's the right thing to do.

@pgpx
Contributor

pgpx commented Apr 28, 2020

@apyrgio and @pwittrock thanks for covering this in the sig-cli call - I'm sorry that I missed it. If you are going to create a new documentation-centric PR, would you be able to link to it here? I'd like to help if needed. Also, if the Kustomize internals are currently being refactored, then it might be worth looking at the pull request implementation, because the required changes are minimal (though that would also mean that they could be made later too).

@apyrgio
Contributor Author

apyrgio commented Apr 28, 2020

@pgpx: Great that you are in sync as well. For the others: I haven't gotten around to this yet, but I plan to summarize what was discussed in the call. As for the doc-centric PR, yes, once I have something ready, I'll send here as well.

@apyrgio
Contributor Author

apyrgio commented May 4, 2020

As I mentioned before, there was a sig-cli meeting where this issue was discussed. You can find the recording here (relevant bits after the 29:00 mark).

The tl;dw is this:

  • Kustomize is undergoing some very important structural changes, spearheaded by @monopole, so this issue is on the backlog.
  • For certain cases, this issue can be framed as a package management problem. In these cases, people can check out kpt.
  • To make it easier to provide a Kustomize-only solution to this problem, since this issue has received a lot of comments, it may help to zoom in on this from a user's standpoint, i.e., assuming that Kustomize had the necessary functionality, how would they use it?
  • At least initially, it makes sense to guard this feature behind a feature flag, and see how it matures.
  • The live discussion format is helpful, so we will follow up on the next sig-cli meeting.

So, given the above, the actionable items are these:

  • Draft a documentation PR, where we'll pretend that the functionality exists and it will showcase what the user can do. I'll try to do so before the sig-cli meeting, and I will share it here as well.
  • Follow up in the next Zoom meeting on 2020-05-06 (see the sig-cli agenda for more info).

@apyrgio
Contributor Author

apyrgio commented May 5, 2020

I've sent a PR (#2438) with a user story, where it shows a situation that cannot be solved by overlays in a proper manner (i.e., without copy-pasting stuff), but can be solved with components. Hopefully it's simple enough so that newcomers can understand its value. If you feel there's an aspect of this issue that is not covered by this example, or anything else that can be improved, please chime in on #2438.

@monopole monopole added this to Under consideration in kanban May 13, 2020
@apyrgio
Contributor Author

apyrgio commented May 18, 2020

Update for those interested on this feature and who have not followed PR #2438; the main user story regarding composition has been ACKed by @pwittrock and @monopole. You can read more about what it entails in #2438 (rendered example here), and more formally on a KEP that will be sent shortly, and will be linked here as well. Any comments on the example or the KEP will help a lot.

apyrgio added a commit to arrikto/kubernetes-enhancements that referenced this issue May 20, 2020
This enhancement proposal introduces components, a new kind of
Kustomization that allows users to define reusable kustomizations.

Main discussion: kubernetes-sigs/kustomize#1251
User story that spawned this KEP: kubernetes-sigs/kustomize#2438
@apyrgio
Contributor Author

apyrgio commented May 20, 2020

We (me, @ioandr and @pgpx) have created a KEP for this issue: kubernetes/enhancements#1803 (rendered document here). In this KEP, we try to explain roughly the following:

  • What is the model that Kustomize supports now? (inheritance model)
  • What are some examples that don't fit to this model?
  • How could we make these examples easier to write? (composition model)
  • Why can't we do so right now? (we explain the issue with Kustomize's handling of multiple sibling overlays)
  • What do we propose? (add a Component kustomization kind)

For those interested in this feature, please chime in on the KEP.
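
In case it helps readers of this thread, the syntax proposed in the KEP looks roughly like this, reusing the node-port example from earlier in the thread (a sketch; the details may still change during review):

```yaml
# components/node-port/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patchesJson6902:
  - target:
      version: v1
      kind: Service
      name: api
    path: node-port.yaml
```

```yaml
# overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
components:
  - ../../components/node-port
```

Components are applied after the parent's resources have been accumulated, so the patch can target the Service defined in the base without the base having to know about the component.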

@ringerc

ringerc commented Sep 20, 2021

This remains an issue for non-namespaced objects such as CustomResourceDefinition (CRD)s referenced from multiple deployments.

It means that if a and b both depend on some CRD, this:

kustomize build a | kubectl apply -f -
kustomize build b | kubectl apply -f -

... works fine, but

kustomize build uses_a_and_b_as_bases | kubectl apply -f -

will fail.

To work around this, it's necessary to modify both a and b to omit the CRD definitions, but then they no longer define all the resources they depend on and cannot be deployed independently. Instead of kustomize build a, you now need another layer.

Components do help, but for simple cases like CRDs, this is something Kustomize itself could greatly help the user out with. Especially since the CRDs may be embedded in upstream sources where the user cannot necessarily factor them out into Kustomize components.

For example, kube-prometheus bundles its CRDs in a large flattened blob of Kubernetes YAML configuration. It isn't feasible to extract them to accommodate the Kustomize component model.

The simplest option here would seem to be to ignore duplicate resources where the key matches and the contents are the same.

Sample error:

Error: accumulating resources: accumulation err='accumulating resources from 'base/kube-prometheus': '...../base/kube-prometheus' must resolve to a file': recursed merging from path '...../base/kube-prometheus': may not add resource with an already registered id: apiextensions.k8s.io_v1_CustomResourceDefinition|~X|alertmanagerconfigs.monitoring.coreos.com
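
To make the proposal concrete, here is a minimal Python sketch of the "ignore identical duplicates" rule (a hypothetical helper, not kustomize's actual accumulator): resources are keyed by (apiVersion, kind, namespace, name), and a duplicate key is an error only when the contents differ.

```python
def accumulate(resource_lists):
    """Merge lists of resources (plain dicts parsed from YAML).

    Identical duplicates (e.g. a CRD bundled by two bases) are silently
    dropped; only genuinely conflicting redefinitions raise an error.
    """
    seen = {}    # key -> first resource registered under that key
    merged = []
    for resources in resource_lists:
        for res in resources:
            meta = res.get("metadata", {})
            key = (res.get("apiVersion"), res.get("kind"),
                   meta.get("namespace"), meta.get("name"))
            if key in seen:
                if seen[key] == res:
                    continue  # identical duplicate: ignore it
                raise ValueError(f"conflicting definitions for {key}")
            seen[key] = res
            merged.append(res)
    return merged
```

Under such a rule, two bases that both bundle the same CRD would compose cleanly, while two different objects registered under the same id would still fail.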

@ringerc

ringerc commented Sep 22, 2021

Here's a demo.

TL;DR: You can't write a single Kustomization that can be used standalone or further composed because of this behaviour. Either your standalone base cannot be applied due to missing CRDs, with errors like:

Error from server (NotFound): error when creating "STDIN": the server could not find the requested resource (post democrds.demo.example.com)

or if you include the CRDs in each base your combined deployment cannot be generated due to kustomize errors like

Error: accumulating resources: accumulation err='accumulating resources from '../crd_b': '/home/craig/.nosnapshot/craig_tmp/demo/crd_b' must resolve to a file': recursed merging from path '/home/craig/.nosnapshot/craig_tmp/demo/crd_b': may not add resource with an already registered id: apiextensions.k8s.io_v1_CustomResourceDefinition|~X|democrds.demo.example.com

It shows that it's not possible to define a single kustomization such that it can stand alone for deployment to a cluster and can be composed into a deployment containing another kustomization that depends on the same CRD. You have to instead split out the CRDs, and build another layer that composes "kustomizations without CRDs" and "just the CRDs" in various combinatorial ways depending on what you wish to deploy. This is painful for maintenance, but also breaks otherwise-logical compositions.

Crucially, generating and applying each piece of the Kustomization individually produces a sensible and correct result, even though Kustomize refuses to generate a kustomization that uses only those two as bases. And you can even concatenate the kustomize output of both parts, then kubectl apply that in a single pass, and get a correct result.

Kustomize should really be de-dup'ing identical objects here.

Components can help here, but only if you control all the definitions. Common real world deployments like kube-prometheus bundle their CRDs, behaving exactly like the a_crd and b_crd in the demo below.

The demo defines the following bases:

  • crd - contains the crd definition only
  • a - contains resource "a" using the crd only
  • b - contains resource "b" using the crd only

and composes them in a few ways:

  • crd_a uses crd and a as bases, so it can be deployed standalone to deploy "a"
  • crd_b uses crd and b as bases, so it can be deployed standalone to deploy "b"
  • crd_a_crd_b uses crd_a and crd_b as bases. It should be able to deploy both "a" and "b" at the same time, but due to this issue it cannot be built
  • crd_a_b uses crd, a, and b as bases, so it only includes the CRD definition once, and can be deployed.

What convinces me that Kustomize's behaviour is wrong here is that you can sequentially deploy crd_a and crd_b successfully:

$ kustomize build crd_a | kubectl apply -f -
namespace/a created
customresourcedefinition.apiextensions.k8s.io/democrds.demo.example.com created
democrd.demo.example.com/democrd-a created
$ kustomize build crd_b | kubectl apply -f -
namespace/b created
customresourcedefinition.apiextensions.k8s.io/democrds.demo.example.com unchanged
democrd.demo.example.com/democrd-b created

In fact, you can even concatenate the generated configurations and deploy them (the --validate=false is just because concatenating the output confuses kubectl a little):

$ cat <(kustomize build crd_a) <(kustomize build crd_b) | kubectl apply -f - --validate=false
namespace/a created
customresourcedefinition.apiextensions.k8s.io/democrds.demo.example.com created
namespace/b created
customresourcedefinition.apiextensions.k8s.io/democrds.demo.example.com unchanged
democrd.demo.example.com/democrd-b created

kustomize is fine with you generating and applying the CRD then both bases that use it:

$ kustomize build crd_a_b | kubectl apply -f -
namespace/a unchanged
namespace/b unchanged
customresourcedefinition.apiextensions.k8s.io/democrds.demo.example.com unchanged
democrd.demo.example.com/democrd-a unchanged
democrd.demo.example.com/democrd-b unchanged

but not composing two bases that are complete including their dependencies:

$ kustomize build crd_a_crd_b | kubectl apply -f -
Error: accumulating resources: accumulation err='accumulating resources from '../crd_b': '/home/craig/.nosnapshot/craig_tmp/demo/crd_b' must resolve to a file': recursed merging from path '/home/craig/.nosnapshot/craig_tmp/demo/crd_b': may not add resource with an already registered id: apiextensions.k8s.io_v1_CustomResourceDefinition|~X|democrds.demo.example.com
error: no objects passed to apply

However, the pieces that kustomize will let you compose successfully are invalid when deployed standalone:

$ kustomize build a | kubectl apply -f -
namespace/a created
Error from server (NotFound): error when creating "STDIN": the server could not find the requested resource (post democrds.demo.example.com)

so you can't write a single Kustomization that can be used standalone or further composed.

Generate the demo with the following:

mkdir -p crd
cat > ./crd/kustomization.yaml <<'__END__'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - "crd.yaml"
__END__
mkdir -p crd
cat > ./crd/crd.yaml <<'__END__'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: democrds.demo.example.com
spec:
  group: demo.example.com
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                field:
                  type: string
  scope: Namespaced
  names:
    singular: democrd
    plural: democrds
    kind: DemoCrd

__END__
mkdir -p a
cat > ./a/a_crd.yaml <<'__END__'
apiVersion: "demo.example.com/v1"
kind: DemoCrd
metadata:
  name: democrd-a
spec:
  field: "a"
__END__
mkdir -p a
cat > ./a/kustomization.yaml <<'__END__'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: a
resources:
  - namespace.yaml
  - a_crd.yaml
__END__
mkdir -p a
cat > ./a/namespace.yaml <<'__END__'
apiVersion: v1
kind: Namespace
metadata:
  name: a
__END__
mkdir -p b
cat > ./b/b_crd.yaml <<'__END__'
apiVersion: "demo.example.com/v1"
kind: DemoCrd
metadata:
  name: democrd-b
spec:
  field: "b"
__END__
mkdir -p b
cat > ./b/kustomization.yaml <<'__END__'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: b
resources:
  - namespace.yaml
  - b_crd.yaml
__END__
mkdir -p b
cat > ./b/namespace.yaml <<'__END__'
apiVersion: v1
kind: Namespace
metadata:
  name: b
__END__
mkdir -p crd_a_b
cat > ./crd_a_b/kustomization.yaml <<'__END__'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../crd
  - ../a
  - ../b
__END__
mkdir -p crd_a
cat > ./crd_a/kustomization.yaml <<'__END__'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../crd
  - ../a
__END__
mkdir -p crd_b
cat > ./crd_b/kustomization.yaml <<'__END__'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../crd
  - ../b
__END__
mkdir -p crd_a_crd_b
cat > ./crd_a_crd_b/kustomization.yaml <<'__END__'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../crd_a
  - ../crd_b
__END__

@pkit

pkit commented Oct 14, 2021

TL;DR: You can't write a single Kustomization that can be used standalone or further composed because of this behaviour.

Yup, it's totally unusable for anything other than one deployment.
Even doing prod + staging leads to full replication of everything; otherwise it's either unpatchable or duplicated.
I found one hack though: use GitOps (flux2) to deploy things in "stages" and thus they are not really dependent on each other outside of flux itself. But any new resource may need a full rearrangement or even creating a new separate "stage".

@ringerc

ringerc commented Oct 19, 2021

@pkit Exactly - and kustomize is already a wrapper tool, it's not something you really want to need a wrapper-wrapper for...
