Application using Kustomize with Helm cannot access private remote helm repo #10745

Open
sandoichi opened this issue Sep 29, 2022 · 8 comments
Labels: enhancement New feature or request

Comments

@sandoichi

Checklist:

  • [x] I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
  • [x] I've included steps to reproduce the bug.
  • [x] I've pasted the output of argocd version.

Describe the bug

When using a Kustomize app that references a remote helm chart in a private repo, credentials are not passed to it.

To Reproduce
Configure 2 private repos in ArgoCD. One repo will be the application source and hold the kustomization.yaml, and the other will be the private helm repo that holds the helm chart to use with Kustomize. For the purposes of this example, the repo with the kustomization.yaml will be called privateKustomizeRepo and the helm chart repo will be called privateHelmRepo.

Verify in the ArgoCD UI that both repos have been connected to successfully.

I tried 2 different methods to get past this, and both failed in exactly the same way:

✔️ Method 1: Letting ArgoCD use kustomize normally

Configure the argocd-cm to have

    kustomize.buildOptions: "--enable-helm --load-restrictor LoadRestrictionsNone"
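
For reference, a minimal sketch of the full argocd-cm carrying that option (standard ConfigMap name, namespace, and label assumed):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  kustomize.buildOptions: "--enable-helm --load-restrictor LoadRestrictionsNone"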

Create an appset that references a private github repo, with a path to the kustomization.yaml:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: kustomize-test-app
spec:
  generators:
  - clusters:
      selector:
        matchLabels:
          is-argocd-managed: "true"
  template:
    metadata:
      name: kustomize-test
      annotations:
        argocd.argoproj.io/compare-options: IgnoreExtraneous
    spec:
      project: default
      source:
        path: path/to/kustomization.yaml
        repoURL: privateKustomizeRepo
        targetRevision: HEAD
      destination:
        server: https://kubernetes.default.svc
        namespace: argocd

Create a kustomization.yaml in this private repo, at the specified path:

helmCharts:
- name: my-private-chart
  repo: https://raw.githubusercontent.com/myorg/privateHelmRepo/main
  releaseName: private-chart
  version: 0.1.0
  namespace: my-app

✔️ Method 2: Use a custom plugin

Create a plugin:

      - name: kustomized-helm
        init:
          command: ["/bin/sh", "-c"]
          args: ["helm dependency build || true"]
        generate:
          command: ["/bin/sh", "-c"]
          args: ["helm template $ARGOCD_ENV_HELM_CHART_NAME --version $ARGOCD_ENV_HELM_CHART_VERSION --name-template $ARGOCD_ENV_APP_NAME --namespace $ARGOCD_ENV_APP_NAMESPACE --include-crds --repo $ARGOCD_ENV_HELM_REPO_URL  > all.yaml && kustomize build"]
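
For context, in this ArgoCD version plugins like this are registered in argocd-cm under the configManagementPlugins key, and each env entry from the Application's plugin block is exposed to these commands with an ARGOCD_ENV_ prefix (which is why the args reference $ARGOCD_ENV_HELM_REPO_URL and friends). A minimal sketch of where the snippet above sits, standard ConfigMap name assumed:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  configManagementPlugins: |
    - name: kustomized-helm
      init:
        command: ["/bin/sh", "-c"]
        args: ["helm dependency build || true"]
      generate:
        command: ["/bin/sh", "-c"]
        args: ["helm template ... && kustomize build"]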

Create an appset that references a private github repo, with a path to the kustomization.yaml, that uses our plugin and passes in all of the necessary env vars so that we can template the remote helm chart from our private repo:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: kustomize-test-app
spec:
  generators:
  - clusters:
      selector:
        matchLabels:
          is-argocd-managed: "true"
  template:
    metadata:
      name: kustomize-test
      annotations:
        argocd.argoproj.io/compare-options: IgnoreExtraneous
    spec:
      project: default
      source:
        path: path/to/kustomization.yaml
        repoURL: privateKustomizeRepo
        targetRevision: HEAD
        plugin:
          name: kustomized-helm
          env:
            - name: HELM_REPO_URL
              value: https://raw.githubusercontent.com/myorg/privateHelmRepo/main
            - name: HELM_CHART_NAME
              value: private-chart
            - name: HELM_CHART_VERSION
              value: 0.1.0
      destination:
        server: https://kubernetes.default.svc
        namespace: argocd

Create a kustomization.yaml in this private repo, at the specified path:

resources:
- all.yaml

Expected behavior

I expected that ArgoCD would pass in the private helm repo credentials when using kustomize to pull the private helm chart. I can see in the UI that ArgoCD has made a successful connection to the private helm repo in the Repositories settings.

Version

$ argocd version
argocd: v2.4.11+3d9e9f2
  BuildDate: 2022-08-22T09:35:38Z
  GitCommit: 3d9e9f2f95b7801b90377ecfc4073e5f0f07205b
  GitTreeState: clean
  GoVersion: go1.18.5
  Compiler: gc
  Platform: linux/amd64

Logs

rpc error: code = Unknown desc = Manifest generation error (cached):
  `kustomize build <path to my kustomization.yaml> --enable-helm --load-restrictor LoadRestrictionsNone` failed exit status 1:
    Error: Error: looks like "https://raw.githubusercontent.com/myorg/privateHelmRepo/main" is not a valid chart repository or cannot be reached:
      failed to fetch https://raw.githubusercontent.com/myorg/privateHelmRepo/main/index.yaml : 404 Not Found :
        unable to run: 'helm pull --untar --untardir <path to my kustomization.yaml>/charts --repo https://raw.githubusercontent.com/myorg/privateHelmRepo/main/index.yaml private-chart --version 0.1.0'
            with env=[HELM_CONFIG_HOME=/tmp/kustomize-helm-654891215/helm HELM_CACHE_HOME=/tmp/kustomize-helm-654891215/helm/.cache HELM_DATA_HOME=/tmp/kustomize-helm-654891215/helm/.data] (is 'helm' installed?)

If I run that same command locally, but pass in --username and --password with the github token that I used to configure the private helm chart repo in ArgoCD already, it works. So it would appear that despite having the repo already properly configured in ArgoCD, these credentials are not being propagated into the kustomize execution.
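
For reference, the working local invocation looks roughly like this (username and token are placeholders; they match the credentials already configured for this repo in ArgoCD):

helm pull private-chart --untar --untardir ./charts \
  --repo https://raw.githubusercontent.com/myorg/privateHelmRepo/main \
  --version 0.1.0 \
  --username <github-username> --password <github-token>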

I did see this section in the ArgoCD docs about remote kustomize bases:

If you have remote bases that are either (a) HTTPS and need username/password (b) SSH and need SSH private key, then they'll inherit that from the app's repo.

This will work if the remote bases uses the same credentials/private key. It will not work if they use different ones. For security reasons your app only ever knows about its own repo (not other team's or users repos), and so you won't be able to access other private repos, even if Argo CD knows about them.

I assumed this might also apply to a remote helm repo, but it does not. My remote helm repo uses the same username/password token credential as the application repo that holds the kustomization file, but it still doesn't work.

If there is some way to access the repo username/password from within my custom plugin, I could pass --username and --password on the command line and maybe get around this, but I haven't found a way to do that yet.

@sandoichi added the bug (Something isn't working) label Sep 29, 2022
@sandoichi
Author

After digging into it more, I found that this seems to be a limitation of Kustomize itself, and there is no short-term support planned for private helm repos:

kubernetes-sigs/kustomize#4401 (comment)

It sounds like in order to support this, ArgoCD would need a different application source type with fields for both a kustomize source and a helm source, so it could pull the helm source first and then run kustomize over it.
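
Purely as an illustration of that idea, a hypothetical combined source (not an existing ArgoCD field today) might look something like:

source:
  # hypothetical fields, sketched only to illustrate the idea above
  kustomizeSource:
    repoURL: privateKustomizeRepo
    path: path/to/kustomization.yaml
  helmSource:
    repoURL: https://raw.githubusercontent.com/myorg/privateHelmRepo/main
    chart: private-chart
    version: 0.1.0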

@sandoichi
Author

Followup:

I got a workaround working by doing the following:

  1. Put a secret into my argocd cluster with the helm credentials in it, and mount it as environment values for the argocd repo server.
  2. Build a config management plugin that starts off by doing helm repo add using the secret data that is now available in the env (and of course redirect the output of the helm setup commands to /dev/null, because anything on stdout needs to be yaml/json).
  3. After adding the helm repo, helm template the chart into all.yaml, then run kustomize at that point to get the end result that I wanted (a sketch follows this list).
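
A minimal sketch of such a plugin (configManagementPlugins style; HELM_REPO_USERNAME and HELM_REPO_PASSWORD are hypothetical names for the env vars mounted from the secret in step 1):

- name: kustomized-helm-private
  init:
    command: ["/bin/sh", "-c"]
    args: ["helm repo add private $ARGOCD_ENV_HELM_REPO_URL --username $HELM_REPO_USERNAME --password $HELM_REPO_PASSWORD > /dev/null"]
  generate:
    command: ["/bin/sh", "-c"]
    args: ["helm template private/$ARGOCD_ENV_HELM_CHART_NAME --version $ARGOCD_ENV_HELM_CHART_VERSION --name-template $ARGOCD_ENV_APP_NAME --namespace $ARGOCD_ENV_APP_NAMESPACE --include-crds > all.yaml && kustomize build"]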

@reginapizza
Contributor

@sandoichi is this workaround sufficient for you or are you still requesting this as a feature? Otherwise we can probably close this issue

@sandoichi
Author

sandoichi commented Oct 4, 2022

@reginapizza I think that this should be relabeled as a feature request. The workaround is sufficient at the moment but it seems like a common enough thing to support passing in multiple source repos for a single application.

Or would it be better if we closed this and I opened a new feature request issue?

@reginapizza
Contributor

That's ok, I can label this as an enhancement and keep this issue open. Anyone looking to do the same can use the workaround in the meantime

@aguckenber-chwy

+1 to this. It would even be nice if the ConfigManagementPlugin could at least use helm credentials stored inside ArgoCD repositories, like git does (#1628), instead of needing to create a kubernetes secret on the side.

@gnagel

gnagel commented Feb 14, 2024

+1 here too; having private helm charts supported would be a game changer.

@aguckenber-chwy

I wanted to provide a detailed workaround for future wanderers like me until this is officially supported in a different way.

Disclaimer: once the feature requested in this issue lands, there will probably be a more straightforward way to do this.

The flow below explains how I got ArgoCD to work with private helm charts from JFrog Artifactory by rendering Kustomization helmCharts.

It uses @sandoichi's approach as a reference, as well as this blog on how to add a custom config management plugin to ArgoCD that works with Kustomize and Helm.

  1. Create a secret with credentials to Artifactory. Feel free to use external-secrets-manager to pull this from some other source.
apiVersion: v1
kind: Secret
metadata:
  name: ks-build-with-jfrog-helm-creds
  namespace: argocd
stringData:
  username: super-secret # The artifactory username
  password: top-secret # The artifactory password
  2. Create the following ConfigMap. Note that the plugin invokes kustomize build with --enable-helm, which lets it inflate helm charts, and with --helm-command, which overrides the command Kustomize uses to invoke helm (see the Kustomize docs for this flag). By default Kustomize just invokes whatever helm is on the PATH; instead, I have it invoke the command.sh script, which adds the --username and --password flags to any helm pull commands that come through. The conditional in the script is needed because, under the hood, Kustomize also runs helm version, which cannot take the username or password flags. (A note on the resulting command line follows the ConfigMap.)
apiVersion: v1
kind: ConfigMap
metadata:
  name: ks-build-with-jfrog-helm
data:
  plugin.yaml: |
    apiVersion: argoproj.io/v1alpha1
    kind: ConfigManagementPlugin
    metadata:
      name: ks-build-with-jfrog-helm
    spec:
      generate:
        command: [ "sh", "-c" ]
        args: [ "kustomize build --enable-helm --helm-command '/home/argocd/cmp-server/config/command.sh'" ]
  command.sh: |
    #! /bin/bash
    set -e
    args=("$@")
    if [ "${args[0]}" == "pull" ]; then
        extras="--username $ARTIFACTORY_USERNAME --password $ARTIFACTORY_PASSWORD"
    else
        extras=""
    fi
    # pass the original arguments through, quoted, with credentials prepended when pulling
    helm $extras "$@"
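
With this in place, when kustomize build inflates a chart it invokes command.sh instead of helm directly, so the pull roughly becomes (repo URL and chart taken from the example further below):

command.sh pull --untar --untardir charts --repo https://chewyinc.jfrog.io/artifactory/api/helm/helm-virtual external-dns --version 6.20.1
# which the wrapper turns into:
helm --username $ARTIFACTORY_USERNAME --password $ARTIFACTORY_PASSWORD pull --untar --untardir charts --repo https://chewyinc.jfrog.io/artifactory/api/helm/helm-virtual external-dns --version 6.20.1
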
  3. Once the above is done, update the argocd-repo-server deployment to add the following information for the plugin sidecar. Note how the credentials are pulled from the above secret and injected as environment variables.
containers:
- name: ks-build-with-jfrog-helm
  command: [/var/run/argocd/argocd-cmp-server] # Entrypoint should be Argo CD lightweight CMP server i.e. argocd-cmp-server
  image: alpine/k8s:1.25.16
  securityContext:
    runAsNonRoot: true
    runAsUser: 999
  env:
    - name: ARTIFACTORY_USERNAME
      valueFrom:
        secretKeyRef:
          name: ks-build-with-jfrog-helm-creds
          key: username
    - name: ARTIFACTORY_PASSWORD
      valueFrom:
        secretKeyRef:
          name: ks-build-with-jfrog-helm-creds
          key: password
    - name: HELM_CACHE_HOME
      value: /cmp-helm-working-dir
    - name: HELM_CONFIG_HOME
      value: /cmp-helm-working-dir
    - name: HELM_DATA_HOME
      value: /cmp-helm-working-dir
  volumeMounts:
    - mountPath: /var/run/argocd
      name: var-files
    - mountPath: /home/argocd/cmp-server/plugins
      name: plugins
    - mountPath: /home/argocd/cmp-server/config/plugin.yaml
      subPath: plugin.yaml
      name: ks-build-with-jfrog-helm
    - mountPath: /home/argocd/cmp-server/config/command.sh
      subPath: command.sh
      name: ks-build-with-jfrog-helm
    - mountPath: /tmp
      name: cmp-tmp
    - mountPath: /cmp-helm-working-dir
      name: cmp-helm-working-dir
volumes:
- configMap:
    name: ks-build-with-jfrog-helm
    defaultMode: 0777
  name: ks-build-with-jfrog-helm
- emptyDir: {}
  name: cmp-tmp
- emptyDir: {}
  name: cmp-helm-working-dir
  4. The plugin does not do discovery by default and thus must be targeted explicitly in your application:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  name: external-dns
  namespace: argocd
spec:
  destination:
    name: in-cluster
    namespace: kube-system
  project: default
  source:
    path: path/to/folder/with/kustomization
    plugin:
      name: ks-build-with-jfrog-helm # this is how we tell the app to use the new plugin
    repoURL: https://github.com/SomeRepo
    targetRevision: main

And there you go, any valid kustomization.yaml will be rendered and you can deploy apps and/or other resources.
This was my example folder structure: a chart/ folder holding the kustomization.yaml (screenshot not reproduced here).

And the kustomization.yaml inside the chart/ looked like this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: external-dns
    namespace: kube-system
    releaseName: external-dns
    repo: https://chewyinc.jfrog.io/artifactory/api/helm/helm-virtual
    valuesInline:
      foo: bar
    version: 6.20.1
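
To sanity-check the kustomization locally before wiring it into ArgoCD, something along these lines should work (credentials exported by hand; command.sh is the wrapper script from the ConfigMap above, saved locally and made executable):

export ARTIFACTORY_USERNAME=<user> ARTIFACTORY_PASSWORD=<password>
kustomize build chart/ --enable-helm --helm-command ./command.sh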

I could deploy the chart as well as other resources.
