
Consider using local credentials for helm to support private oci-based helm charts #5407

Closed
natasha41575 opened this issue Oct 20, 2023 · 9 comments
Labels
kind/feature (Categorizes issue or PR as related to a new feature.), lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.), triage/under-consideration

Comments

@natasha41575
Contributor

When running kustomize build and kustomize localize, we use the local git binary and the configuration stored on the user's machine for fetching remote resources. This means that the user can access private git repositories, so long as they have the git configuration & authorization to do so.

When using the helm plugin, however, it looks like we set some global variables during execution that prevent helm from using the user's local credentials. This prevents the user from, for example, using private oci-based helm charts.

#4614 is an example of what it would look like to avoid setting those variables for helm, and enable users to use private oci-based helm charts in kustomize (provided that they run helm registry login on their own machine prior to running kustomize build).
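
For illustration, a rough sketch of what that flow could look like, assuming a hypothetical private chart at oci://ghcr.io/example-org/charts (the registry, org, chart name, and version below are made up):

# Log in to the private OCI registry once, using the user's own credentials
helm registry login ghcr.io

# A kustomization that references the private OCI-based chart
cat > kustomization.yaml <<'EOF'
helmCharts:
- name: example-chart
  repo: oci://ghcr.io/example-org/charts
  version: 1.2.3
  releaseName: example
EOF

# With the behavior proposed here, this build would reuse the credentials
# stored by `helm registry login` instead of pointing helm at an empty tmp
# config dir
kustomize build --enable-helm .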

At first glance it seems inconsistent to me to be OK with using the user's local credentials for git but not for helm. That said, I unfortunately have no context on why we support private kustomize repositories but seem to be actively preventing it for private helm charts. I also have no context on what these extra variables do and whether they have any side effects other than preventing private helm charts. If we have more information on the history of these decisions, it will help us understand whether it makes sense to reconsider them.

If we do make this change, another step we could consider from there would be to have kustomize localize support pulling down remote helm charts, to ease a workflow where someone is using a git-syncer such as Argo or Config Sync with kustomize + a private oci-based helm chart. The workflow for this use case would be something like the following (a command-level sketch follows the list):

  1. run helm registry login
  2. run kustomize localize on the kustomization that includes the private oci-based helm chart
  3. push the new localized kustomization directory to a private git repo
  4. configure the git-syncer to pull from the private git repo where the localized kustomization was pushed
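
A command-level sketch of those steps (the chart registry, directory names, and git remote below are hypothetical, and step 2 assumes the kustomize localize support for helm charts discussed above):

# 1. log in to the registry hosting the private oci-based chart
helm registry login ghcr.io

# 2. localize the kustomization that includes the private oci-based chart
kustomize localize ./my-app ./my-app-localized

# 3. push the new localized kustomization directory to a private git repo
cd my-app-localized
git init && git add -A && git commit -m "localized kustomization"
git remote add origin [email protected]:example-org/localized-config.git
git push -u origin main

# 4. point the git-syncer (Argo, Config Sync, ...) at that private repo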
@k8s-ci-robot added the needs-kind label Oct 20, 2023
@natasha41575
Contributor Author

/kind feature
/triage under-consideration

@k8s-ci-robot added the needs-triage, kind/feature, and triage/under-consideration labels and removed the needs-kind and needs-triage labels Oct 20, 2023
MrFreezeex added a commit to MrFreezeex/kustomize that referenced this issue Nov 9, 2023
`HELM_CONFIG_HOME` is supposed to contain two files, `repositories.yaml`
and `repositories.lock`. Kustomize sets `HELM_CONFIG_HOME` by default to an
empty tmp dir not populated with any of the `repositories.*` files, which
prevents Helm from pulling from a private OCI repo, for instance (even if
this repo is not listed in `repositories.yaml`).

This commit removes the default tmpdir value. Kustomize will thus no longer
populate `HELM_CONFIG_HOME`, `HELM_CACHE_HOME`, and `HELM_DATA_HOME` by
default. Users can still override this directory with `helmGlobals`. Setting
`configHome` in `helmGlobals` to the normal helm config location
(`/home/MY_USER_HERE/.config/helm`) could also be used as a workaround
before this commit.

Fixes kubernetes-sigs#5407

Signed-off-by: Arthur Outhenin-Chalandre <[email protected]>
@MrFreezeex
Member

MrFreezeex commented Nov 9, 2023

Hi!

I believe this should be considered more of a bug than a feature.

The culprit here would be HELM_CONFIG_HOME, which Kustomize sets by default to a tmpdir. As a workaround, users may use a helm wrapper that unsets this env var, or configure it to its normal/default value (helm-wise) via configHome in helmGlobals. They would have to do something like this:

helmGlobals:
  configHome: "/home/MY_USER_HERE/.config/helm"
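
For the wrapper alternative mentioned above, a minimal sketch could be a script named helm placed on the PATH ahead of the real binary (the path to the real helm binary below is an assumption, adjust as needed):

#!/bin/sh
# Wrapper named `helm`: drop the tmpdir config home that kustomize injects,
# then hand everything over to the real helm binary
unset HELM_CONFIG_HOME
exec /usr/local/bin/helm "$@"  # assumed location of the real helm binary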

There are supposed to be two files in the Helm config folder: repositories.yaml and repositories.lock. Even if the private OCI repository is not listed in those files, it seems to work as long as they exist, so I suspect that if those files don't exist, helm enters a code path where it can't figure out the actual docker credentials file for a private OCI repository.

Setting this environment variable is not motivated in the code, the original commit, or the PR (#3784), so I have no idea why Kustomize sets it in the first place. Kustomize also doesn't really populate those dirs, so I don't think they serve any purpose. My best guess is that the charts were originally supposed to be pulled into this tmp dir, but as you may know that is not what happens: kustomize (and helm, actually) pulls the charts into a charts subfolder of the local folder.

So, to fix the issue at hand, I have multiple propositions:

  1. Do not set configHome to a tmp dir by default and let the user override it if they really want to. I already have a PR that does this: helm: remove HELM_CONFIG_HOME default tmp value #5434
  2. Try to populate the config dir in the tmp dir with sensible defaults so that Helm may go down a happier code path that allows pulling charts from private OCI repos (see the sketch after this list)
  3. Make kustomize pull the charts into the tmpdir by default (which might have been the original intent?) and possibly have a different code path for kustomize localize to pull the charts into the local folder and reuse them on build if they already exist. This may need to be part of a larger change, though.
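
To make proposition 2 concrete, the "sensible defaults" might be as small as creating the two files Helm expects in the tmp config home before invoking helm; this is only a guess based on the observation above that the files merely need to exist:

# hypothetical: after kustomize creates the tmp HELM_CONFIG_HOME, pre-create
# the files Helm looks for so it goes down the code path that resolves
# registry credentials
touch "${HELM_CONFIG_HOME}/repositories.yaml" "${HELM_CONFIG_HOME}/repositories.lock"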

@felixlut

felixlut commented Dec 28, 2023

Any updates on this? It is currently a blocker for my org; for the time being the hacky workaround is the script below (it essentially runs the helm pull command that kustomize would run, which, thanks to caching, allows a proper kustomize build to work as intended). I'd like to drop this for native kustomize support ASAP 😅

# For every directory containing a kustomization.yaml...
for i in $(find . -name 'kustomization.yaml' -printf "%h\n"); do
    errormessage="first-run"
    retries=0
    # ...retry until the build stops reporting a helm error (max 5 attempts)
    while [ "${errormessage}" != "" ] && [ ${retries} -lt 5 ]
    do
        # Hacky extraction of the helm pull error message (https://unix.stackexchange.com/a/24151)
        errormessage=$(kustomize build --enable-helm "${i}" 2>&1 | sed -n -e 's/^.*unable to run: //p' | cut -d"'" -f 2)
        echo "${errormessage}"
        # The extracted message is the helm command kustomize tried to run;
        # execute it directly so the next kustomize build finds the chart already pulled
        ${errormessage}
        ((retries++))
    done
done

@MrFreezeex
Member

MrFreezeex commented Dec 28, 2023

Any updates on this? It is currently a blocker for my org, with a hacky workaround being the script below for the time being (essentially running the helm pull command that kustomize would do).

It's also a blocker for us :(. As you can see above, I have a PR fixing this in the most minimal way possible, and I also proposed alternative fixes that I would be happy to look into if that PR is not considered... But yeah, we're waiting for a kustomize reviewer/approver to take a look and validate the linked PR or one of the alternative approaches.

@felixlut

felixlut commented Dec 28, 2023

Any updates on this? It is currently a blocker for my org, with a hacky workaround being the script below for the time being (essentially running the helm pull command that kustomize would do).

It's also a blocker for us :(. As you can see above, I have a PR fixing this in the most minimal way possible, and I also proposed alternative fixes that I would be happy to look into if that PR is not considered... But yeah, we're waiting for a kustomize reviewer/approver to take a look and validate the linked PR or one of the alternative approaches.

Like @MrFreezeex mentions above, I see this more as a bug than a feature. IMO, using helm's local credentials is how it should work by default (or at least it should be possible to override it). I say this both because git uses local credentials directly in kustomize, and because I think this is how most tools work overall. To give a few examples off the top of my head:

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Mar 27, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Apr 26, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) May 26, 2024