
kustomize not able to reject a resource with name field during replacement #5169

Closed
mdfaizsiddiqui opened this issue May 10, 2023 · 9 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. triage/needs-information Indicates an issue needs more information in order to work on it.

Comments

@mdfaizsiddiqui

mdfaizsiddiqui commented May 10, 2023

What happened?

In my base configuration, I have a Deployment object with HTTP-based health check probes. I have a requirement to kustomize this Deployment for one of my apps (test-app2) and replace the HTTP health check with a TCP health check.

We have a common replacement file shared by all the apps (/apps/common/internal/replacement.yaml), where we rename part of the HTTP health check endpoint to the app name. (Assume that most of our applications have only HTTP health checks, and only one or two apps will have this special requirement of a TCP health check.)

In test-app2 (where we want TCP health checks), we first add a patch to add the TCP health check and another patch to remove the HTTP health check (sketched below, after the error output). But as soon as the HTTP health check is removed by the patch, the field referenced by replacement.yaml no longer exists, and we see this error -

➜  kustomize-bug git:(main) kustomize build --load-restrictor LoadRestrictionsNone envs/dev
Error: accumulating resources: accumulation err='accumulating resources from 'apps': read /Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps': accumulating resources: accumulation err='accumulating resources from 'test-app2': read /Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps/test-app2: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps/test-app2': accumulating resources: accumulation err='accumulating resources from '../../../../apps/test-app2': read /Users/faizsiddiqui/github/kustomize-bug/apps/test-app2: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/apps/test-app2': unable to find field "spec.template.spec.containers.[name=main].readinessProbe.httpGet.path" in replacement target
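For illustration, the two patches applied in test-app2 look roughly like this (file names, the port, and the container index are assumptions, not copied from the repro repo):

# add-tcp-probe.yaml (hypothetical name) - strategic merge patch adding TCP probes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app2
spec:
  template:
    spec:
      containers:
        - name: main
          readinessProbe:
            tcpSocket:
              port: 8080
          livenessProbe:
            tcpSocket:
              port: 8080

# remove-http-probe.yaml (hypothetical name) - JSON 6902 patch removing the HTTP probes
- op: remove
  path: /spec/template/spec/containers/0/readinessProbe/httpGet
- op: remove
  path: /spec/template/spec/containers/0/livenessProbe/httpGet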

To overcome this issue, we saw that a replacement target can use reject to exclude a resource, but somehow it is not working for us and we are still seeing the same error as above.

- select:
    kind: Deployment
  reject:
    - name: test-app2
  fieldPaths:
    - spec.template.spec.containers.[name=main].readinessProbe.httpGet.path
    - spec.template.spec.containers.[name=main].livenessProbe.httpGet.path

As you can see, we want the replacement to apply to every Deployment object except the one named test-app2 (where we want the TCP health check), but we are seeing the error above.
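For context, the full entry in replacement.yaml looks roughly like this (the source block and the options are assumptions based on the description above, not copied verbatim from the repo):

# apps/common/internal/replacement.yaml (sketch)
- source:
    kind: Deployment
    fieldPath: metadata.name
  targets:
    - select:
        kind: Deployment
      reject:
        - name: test-app2
      fieldPaths:
        - spec.template.spec.containers.[name=main].readinessProbe.httpGet.path
        - spec.template.spec.containers.[name=main].livenessProbe.httpGet.path
      options:
        delimiter: "/"
        index: 1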

What did you expect to happen?

We expect the reject section in the replacement target to exclude the resource with the given name, so that the replacement applies to all selected resources except those listed under reject.

How can we reproduce it (as minimally and precisely as possible)?

Git Repo - https://github.com/mdfaizsiddiqui/kustomize-bug/tree/main

Execute -

kustomize build --load-restrictor LoadRestrictionsNone envs/dev

Expected output

The expected output should show test-app2 with only TCP health check probes, like this -

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: test-app2
  name: test-app2
spec:
  replicas: 1
  selector:
    matchLabels:
      service: test-app2
  template:
    metadata:
      labels:
        app: my_service
        service: test-app2
    spec:
      containers:
      - image: 1234567890.dkr.ecr.us-east-1.amazonaws.com/test-app2:cdfeff-123213
        imagePullPolicy: Always
        livenessProbe:
          initialDelaySeconds: 3
          periodSeconds: 4
          tcpSocket:
            port: 8080
        name: main
        readinessProbe:
          initialDelaySeconds: 3
          periodSeconds: 4
          tcpSocket:
            port: 8080

Actual output

We're seeing error -

➜  kustomize-bug git:(main) kustomize build --load-restrictor LoadRestrictionsNone envs/dev
Error: accumulating resources: accumulation err='accumulating resources from 'apps': read /Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps': accumulating resources: accumulation err='accumulating resources from 'test-app2': read /Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps/test-app2: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/envs/dev/apps/test-app2': accumulating resources: accumulation err='accumulating resources from '../../../../apps/test-app2': read /Users/faizsiddiqui/github/kustomize-bug/apps/test-app2: is a directory': recursed accumulation of path '/Users/faizsiddiqui/github/kustomize-bug/apps/test-app2': unable to find field "spec.template.spec.containers.[name=main].readinessProbe.httpGet.path" in replacement target

Kustomize version

v5.0.2

Operating system

MacOS

@mdfaizsiddiqui mdfaizsiddiqui added the kind/bug Categorizes issue or PR as related to a bug. label May 10, 2023
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label May 10, 2023
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@koba1t
Member

koba1t commented May 17, 2023

Hi @mdfaizsiddiqui

Please add the create: true option to your replacements, like the example below:

    options:
      delimiter: "/"
      index: 1
      create: true

Your Deployment is missing the path field referenced by the fieldPath spec.template.spec.containers.[name=main].readinessProbe.httpGet.path; the replacement can only resolve that path as far as httpGet.
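Applied to the target fragment from the issue description, the suggestion would look like this (a sketch that simply combines the two snippets above):

- select:
    kind: Deployment
  reject:
    - name: test-app2
  fieldPaths:
    - spec.template.spec.containers.[name=main].readinessProbe.httpGet.path
    - spec.template.spec.containers.[name=main].livenessProbe.httpGet.path
  options:
    delimiter: "/"
    index: 1
    create: true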

@mdfaizsiddiqui
Author

mdfaizsiddiqui commented May 22, 2023

create: true

This suggestion does the opposite of what we're looking for: we don't want httpGet to be part of the final output for test-app2 (see the output below). The create: true option re-creates the httpGet field that we are deliberately removing with the patch.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: test-app2
  name: test-app2
spec:
  replicas: 1
  selector:
    matchLabels:
      service: test-app2
  template:
    metadata:
      labels:
        app: my_service
        service: test-app2
    spec:
      containers:
      - image: 1234567890.dkr.ecr.us-east-1.amazonaws.com/test-app2:cdfeff-123213
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /test-app2
          initialDelaySeconds: 3
          periodSeconds: 4
          tcpSocket:
            port: 8080
        name: main
        readinessProbe:
          httpGet:
            path: /test-app2
          initialDelaySeconds: 3
          periodSeconds: 4
          tcpSocket:
            port: 8080

@koba1t
Member

koba1t commented May 26, 2023

I understand what you want to do.
I think you would prefer to use a Component to run the replacements.

Example:

# apps/common/internal/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

replacements:
  - path: replacement.yaml

@natasha41575 natasha41575 added triage/needs-information Indicates an issue needs more information in order to work on it. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jun 2, 2023
@kphunter

kphunter commented Nov 16, 2023

I've also run into this issue, and I think it's because the ordering of operations is difficult to ascertain. Do replacements get applied before/after a patch?

For example, if I have a source and want to target a fieldPath in selected resources, IMHO the reject operation should exclude that resource before the fieldPath is considered.

i.e., kustomize build shouldn't generate an error when the fieldPath exists in the selected resources but doesn't exist in the rejected ones.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 16, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 17, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Apr 16, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
