
Improve private repository authentication handling strategy for remote URLs #4295

Closed
abstractalchemist opened this issue Nov 17, 2021 · 20 comments
Labels
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • triage/under-consideration

Comments

@abstractalchemist

Describe the bug

I am receiving an error when using kustomize with a remote kustomization target hosted in AWS CodeCommit.

Files that can reproduce the issue

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- git::https://git-codecommit.us-west-2.amazonaws.com/v1/repos/test-flux-deploymentsources

Expected output

The remote repository contains a standard Kubernetes deployment file

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}

and a kustomization file

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yml
namespace: default

Actual output

ssm-user@k8s-control:~/test-project$ kustomize build
Username for 'https://git-codecommit.us-west-2.amazonaws.com/v1/repos': Administrator-at-308692676076
Password for 'https://Administrator-at-308692676076@git-codecommit.us-west-2.amazonaws.com/v1/repos':
Error: accumulating resources: accumulation err='accumulating resources from 'https://git-codecommit.us-west-2.amazonaws.com/v1/repos/test-flux-deployment': missing Resource metadata': git cmd = '/usr/bin/git fetch --depth=1 origin HEAD': exit status 128

Kustomize version

4.4.1

Platform

Linux

Additional context

I've also tried

kustomize build git::https://git-codecommit.us-west-2.amazonaws.com/v1/repos/test-flux-deployment --stack-trace

failing with the error

Username for 'https://git-codecommit.us-west-2.amazonaws.com/v1/repos': Administrator-at-308692676076
Password for 'https://Administrator-at-308692676076@git-codecommit.us-west-2.amazonaws.com/v1/repos':
Error: git cmd = '/usr/bin/git fetch --depth=1 origin HEAD': exit status 128

I have already independently verified that the username and password provided can access the repository via the HTTPS URL when using the git binary installed on the system.

ssm-user@k8s-control:~/test-kustomize$ git version
git version 2.17.1
ssm-user@k8s-control:~/test-kustomize$
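
A possible non-interactive workaround for the CodeCommit case, as a sketch only: configure git's credential helper so the git fetch that kustomize runs under the hood never has to prompt. This assumes the AWS CLI is installed and the caller has IAM access to the repository; the credential-helper commands are AWS's documented setup rather than anything kustomize provides, and they may not address the underlying prompting behaviour.

# Sketch: let the AWS CLI supply CodeCommit credentials so git does not prompt
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true

# With credentials resolved by the helper, the remote target can be built
# without an interactive username/password prompt:
kustomize build git::https://git-codecommit.us-west-2.amazonaws.com/v1/repos/test-flux-deployment
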
@abstractalchemist added the kind/bug label on Nov 17, 2021
@k8s-ci-robot
Contributor

@abstractalchemist: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the needs-triage label on Nov 17, 2021
@RappC

RappC commented Nov 18, 2021

I'm having a similar error, which I think is related to this issue:
In my overlay kustomization.yaml we reference another private repository.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - "[email protected]:my-org/charts//kustomize/base/app?ref=kustomize"

The overlay and the referenced resource live in separate private repositories within the same GitHub Org.

When running kustomize build <path-to-kustomization-yaml> locally, the manifests are rendered as expected.
When running from within a GitHub Actions workflow, it does not work and gives the following error:

Error: accumulating resources: accumulation err='accumulating resources from 'git@github.com:my-org/charts//kustomize/base/app?ref=kustomize': evalsymlink failure on '/home/runner/work/gitops-shared-dev-app/gitops-shared-dev-app/kustomizations/overlays/shared-dev/git@github.com:my-org/charts/kustomize/base/app?ref=kustomize' : lstat /home/runner/work/gitops-shared-dev-app/gitops-shared-dev-app/kustomizations/overlays/shared-dev/git@github.com:my-org: no such file or directory': git cmd = '/usr/bin/git fetch --depth=1 origin kustomize': exit status 128

@runderwoodcr14

I'm having the exact same issue. This is definitely related to the reference in the overlay to the base, which is located in a different private repository within the same GitHub Org; the problem only occurs with GitHub workflows. Did you ever find a solution for this?
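
One workaround commonly used in CI, sketched here under assumptions: before running kustomize build in the workflow, rewrite scp-style SSH remotes to token-authenticated HTTPS so the git fetch that kustomize performs can authenticate non-interactively. GH_PAT is a hypothetical secret holding a personal access token with read access to the referenced repository, and the overlay path is taken from the error path above.

# Sketch for a CI step (e.g. a GitHub Actions `run:` step); GH_PAT is a
# hypothetical secret containing a PAT that can read my-org/charts.
# The insteadOf rule makes git fetch git@github.com:... remotes over HTTPS
# with the token instead of SSH:
git config --global url."https://${GH_PAT}@github.com/".insteadOf "git@github.com:"
kustomize build kustomizations/overlays/shared-dev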

@bradenwright

I'm running into this issue too; if someone has gotten past it, that would be great. What makes it weird is that the config was working, and we only added files to the repo we are referencing.

@RotemBirman

Same here when running kustomize build . --enable_alpha_plugins from CLI

@neoakris

neoakris commented Mar 13, 2022

Sigh, I'm running into this too (or a really similar error). The weird thing is that this was 100% working for me 2 hours ago (from my terminal, iTerm2 on Mac). Then I opened a new terminal, got prompted for an ohmyzsh update, and now either kustomize or my shell is broken; I uninstalled ohmyzsh and it's still broken. Not sure what's going on, but posting in case the extra info helps, plus I want to subscribe to the thread.

What's weird is that the examples in kustomize -h and kubectl kustomize -h are also broken, and they're broken for both zsh and bash with slightly different error messages depending on which shell I use (there is not much difference between the kustomize that's baked into kubectl and standalone kustomize). The examples below come directly from the --help output.

zsh_prompt# kubectl kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6
zsh_prompt# kustomize build https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6

zsh: no matches found: https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6

bash_prompt# kubectl kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6
bash_prompt# kustomize build https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6

error: hit 27s timeout running '/usr/local/bin/git fetch --depth=1 origin v1.0.6'

@neoakris

Update: I was able to fix my issue (posting here in case other googlers find this).

To fix zsh, I commented out kubectl autocompletion in my ~/.zshrc, then added this to ~/.zshrc:
setopt no_nomatch

kustomize's example from kustomize build -h started working normally in zsh (and bash) after that.
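
Worth noting: the "zsh: no matches found" error above comes from zsh treating the unquoted ? in ?ref=v1.0.6 as a glob pattern, so quoting the URL is an alternative to setopt no_nomatch:

# Quoting the URL keeps zsh from expanding the `?` as a glob:
kustomize build 'https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6'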

@rahul-mourya-labs

This happens frequently for me when kustomize has resources pointing to private git repos.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jun 14, 2022
@tpvasconcelos

I am experiencing the same issue. Did anyone find a good solution in the meantime?

@peterghaddad

I am also experiencing the same issue.

@natasha41575
Contributor

It seems that there are multiple requests for better private repository authentication in kustomize. We consider this low priority because the remote URL feature was never intended to be used in production. That being said, if someone has a fleshed out proposal for a better way to authenticate private repositories, please feel free to submit that for review.

Instructions for creating a mini-proposal are here.

@natasha41575
Contributor

/retitle Improve private repository authentication handling strategy for remote URLs

@k8s-ci-robot changed the title from "kustomize build error on remote target" to "Improve private repository authentication handling strategy for remote URLs" on Jul 6, 2022
@jeacott1

jeacott1 commented Jul 7, 2022

@natasha41575

It seems that there are multiple requests for better private repository authentication in kustomize. We consider this low priority because the remote URL feature was never intended to be used in production.

I consider this 'feature' absolutely key to kustomize being useful. In the same way that large chunks of remote configuration can be linked in Terraform, and for exactly the same use cases, in my mind this should not be a low priority.
I did make a suggestion on how it could work in #4690, but really kustomize needs a broader plan on this aspect generally.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Aug 6, 2022
@KnVerey added the kind/feature and triage/under-consideration labels and removed the kind/bug and needs-triage labels on Aug 23, 2022
@jhoelzel

jhoelzel commented Sep 6, 2022

+1 I seem to be running into the same issue with GitLab private repos.

Some variations I tried for GitLab (see the note after this list):

  • ssh://git@<gitlab_url>:<gitlab_user>/<gitlab_group>/<gitlab_repo>//<repo_path>
  • ssh://git@<gitlab_url>:<gitlab_user>/<gitlab_group>/<gitlab_repo>.git//<repo_path>
  • ssh://git@<gitlab_url>:<gitlab_user>/<gitlab_group>/<gitlab_repo>.git/<repo_path>
  • ssh://<gitlab_user>@<gitlab_url>/<gitlab_group>/<gitlab_repo>.git/<repo_path>
  • https://<gitlab_url>/<gitlab_group>/<gitlab_repo>.git//<repo_path>
  • git::https://<gitlab_url>/<gitlab_group>/<gitlab_repo>.git//<repo_path>?ref=main
  • git::https://<gitlab_url>/<gitlab_group>/<gitlab_repo>.git//<repo_path>?ref=main&timeout=120
  • https://<gitlab_url>/<gitlab_group>/<gitlab_repo>//<repo_path>
  • https://<gitlab_url>/<gitlab_group>/<gitlab_repo>.git/<repo_path>
  • ssh://git@<gitlab_url>:<gitlab_group>/<gitlab_repo>.git//<repo_path>/?ref=main
  • ssh://git@<gitlab_url>:<gitlab_group>/<gitlab_repo>//<repo_path>/?ref=main
  • ssh://git@<gitlab_url>:<gitlab_group>/<gitlab_repo>/<repo_path>/?ref=main
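
Most of the failing variations above combine the ssh:// scheme with the scp-style colon separator, which git generally rejects (in an ssh:// URL the colon is read as a port number). As a sketch only, with the same placeholders, the two SSH shapes git itself accepts look like this; whether a given kustomize version parses them, and whether the environment has the right SSH keys, are separate questions:

# ssh:// scheme: slash between the host and the path; the in-repo path is
# separated by // and the ref is passed as ?ref=
kustomize build 'ssh://git@<gitlab_url>/<gitlab_group>/<gitlab_repo>.git//<repo_path>?ref=main'

# scp-style: colon after the host, no scheme prefix
kustomize build 'git@<gitlab_url>:<gitlab_group>/<gitlab_repo>.git//<repo_path>?ref=main'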

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot closed this as not planned on Oct 6, 2022
@mfahrul

mfahrul commented Oct 14, 2022

Instead of using this URL:
https://<gitlab_url>/<gitlab_group>/<gitlab_repo>/<repo_path>

I succeeded with this URL:
https://<gitlab_url>/<gitlab_group>/<gitlab_repo>.git/<repo_path>
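
If that holds generally, the .git suffix is what lets kustomize tell where the repository ends and the in-repo path begins. A minimal usage sketch with the same placeholders:

# Sketch: the .git suffix marks the repository root; everything after it is
# treated as a path inside the repo (placeholders are illustrative)
kustomize build 'https://<gitlab_url>/<gitlab_group>/<gitlab_repo>.git/<repo_path>'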

@xakraz

xakraz commented Jan 27, 2023

Hi there 👋🏻

I am facing the same issue starting today... more context is below 😁

Context

  • For various reasons, we were still using kustomize 3.6.1. Everything was "still" running fine.
  • Starting today, with the same Kustomize version, I get the issue where HTTP credentials are requested multiple times during the build command as stated here.
$ kustomize build --enable_alpha_plugins > full.yaml

Username for 'https://github.com': xakraz
Password for 'https://xakraz@github.com':
Username for 'https://github.com': xakraz
Password for 'https://xakraz@github.com':
Username for 'https://github.com': xakraz
Password for 'https://xakraz@github.com':
Username for 'https://github.com': xakraz
Password for 'https://xakraz@github.com':

Tests

  • ✔️ I have tried to pull the repo manually through git as explained; it works without a prompt
  • ✔️ I have tried to clone the repo manually through git with the same mentioned URL as explained; it works without a prompt
  • ❌ I have tried to update to kustomize 4.5.7 and have updated the remote URL for that particular repo (moving away from the go-getter URL format) and it does NOT work
    • same git credentials prompt issue
    • + new error as mentioned previously by others:
Error: accumulating resources: accumulation err='accumulating resources from 'https://github.com/ORG_NAME/REPO_NAME.git/deployment?ref=COMMIT_SHORTSHA': 
URL is a git repository': git cmd = '/usr/bin/git fetch --depth=1 origin COMMIT_SHORTSHA': exit status 128

I have tried to run the failing git command manually from the repo directory:

$ git fetch --depth=1 origin COMMIT_SHORTSHA
fatal: couldn't find remote ref COMMIT_SHORTSHA

Whereas the commit exists and is reachable through GitHub WebUI 🤔

📝 When I said "today" earlier, that is because the only change that I have in mind is my upgrade from git 2.39.0 to 2.39.1 this morning ...

Update

Regarding the git error with the short commit id, here is the explanation: #3761

So this "works as expected".
