
Create a service account generator #3978

Closed
natasha41575 opened this issue Jun 9, 2021 · 6 comments
Assignees
Labels
kind/feature Categorizes issue or PR as related to a new feature.
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.
triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@natasha41575
Contributor

natasha41575 commented Jun 9, 2021

#3914 introduces a basic service account generator.

This is necessary because the generated ServiceAccount carries an annotation value of the form gsa-name@project-id.iam.gserviceaccount.com, where project-id must be a configurable value. GKE users currently use vars for this, and replacements are unable to take over because project-id is not delimited within the value.
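
For illustration, the vars pattern referred to above looks roughly like this (a sketch; the ConfigMap, field names, and service account names are illustrative assumptions, not taken from this issue):

kustomization.yaml (sketch):

resources:
- project-configmap.yaml
- serviceaccount.yaml
vars:
- name: PROJECT_ID
  objref:
    apiVersion: v1
    kind: ConfigMap
    name: project-config
  fieldref:
    fieldpath: data.projectId

serviceaccount.yaml (sketch):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-ksa
  annotations:
    # project-id sits in the middle of the value with no delimiter that a
    # replacement could target, which is why vars have been the workaround;
    # substitution here assumes the varReference config covers metadata/annotations
    iam.gke.io/gcp-service-account: my-gsa@$(PROJECT_ID).iam.gserviceaccount.com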

We need to

  • update the above service account generator so that it can also generate the necessary IAMPolicy resources for GKE users (one possible shape is sketched right after this list)
  • look into use cases outside of GKE and potentially provide generators for other cloud providers if we can
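
To make the first bullet concrete: for GKE Workload Identity, the piece that has to accompany the ServiceAccount is an IAM binding granting roles/iam.workloadIdentityUser on the Google service account to the Kubernetes service account. Expressed as a Config Connector resource it could look roughly like this (a sketch only; whether the generator should emit Config Connector resources, raw IAM policy, or something else is part of what this issue is asking, and all names are illustrative):

apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata:
  name: ksa-workload-identity   # illustrative name
spec:
  # allow the KSA in k8s-namespace to impersonate the GSA
  member: serviceAccount:project-id.svc.id.goog[k8s-namespace/k8s-sa-name]
  role: roles/iam.workloadIdentityUser
  resourceRef:
    apiVersion: iam.cnrm.cloud.google.com/v1beta1
    kind: IAMServiceAccount
    name: gsa-name

Note that project-id shows up here as well, both inside the member string and via the referenced IAMServiceAccount, so the same configurability problem applies.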

If any interested users can provide some information about their use cases here, that would be very helpful.

Related issues:

Adding this to the Kustomization v1 project because it is tangentially related to vars deprecation.

@natasha41575 natasha41575 self-assigned this Jun 9, 2021
@natasha41575 natasha41575 added this to To do in Kustomization v1 via automation Jun 9, 2021
@k8s-ci-robot
Contributor

@natasha41575: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-kind and needs-triage labels Jun 9, 2021
@natasha41575 natasha41575 added the kind/feature label Jun 9, 2021
@k8s-ci-robot k8s-ci-robot removed the needs-kind label Jun 9, 2021
@natasha41575 natasha41575 added the triage/under-consideration and triage/accepted labels and removed the needs-triage and triage/under-consideration labels Jun 9, 2021
@natasha41575 natasha41575 added this to To do in Kustomize CLI major version changes via automation Jul 8, 2021
@natasha41575 natasha41575 added this to To do in Release kustomize api 1.0.0 via automation Jul 8, 2021
@marshall007

@natasha41575 we are looking to migrate all of our GCP resource provisioning from Terraform to Config Connector. Related to your comment in #3914 (comment), I was wondering if you've thought about how to manage transformer configs for such cases.

For projects that expose such a large surface area of CRDs, it seems completely unmanageable for us to maintain transformer configs (like nameReference) ourselves.

I created #4095 a few weeks ago, which I think would ultimately be a big part of the solution here too. It surprises me that this doesn't seem to be a bigger problem for people. Am I missing something or does everyone just get by with brittle transformer configs like we do?
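
For context, a hand-maintained nameReference entry for a Config Connector kind looks roughly like this (a sketch; the IAMServiceAccount/IAMPolicyMember kinds and the field path are illustrative assumptions):

kustomization.yaml:

configurations:
- namereference.yaml

namereference.yaml (sketch):

nameReference:
- kind: IAMServiceAccount
  group: iam.cnrm.cloud.google.com
  fieldSpecs:
  # tell kustomize that IAMPolicyMember.spec.resourceRef.name refers to an
  # IAMServiceAccount, so name prefix/suffix transformations stay consistent
  - kind: IAMPolicyMember
    group: iam.cnrm.cloud.google.com
    path: spec/resourceRef/name

Multiplied across every CRD a provider ships, keeping files like this up to date by hand is the maintenance burden described above.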

@KnVerey
Contributor

KnVerey commented Sep 3, 2021

A bit of a sidenote, but I think this could be another use case for Catalog: instead of building transformers that implement functionality specific to a cloud provider as built-ins, these could be distributed in cloud-provider-specific Catalogs. cc @jeremyrickard

@natasha41575 natasha41575 removed this from To do in Kustomization v1 Oct 14, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 2, 2021
Release kustomize api 1.0.0 automation moved this from To do to Done Dec 21, 2021
Kustomize CLI major version changes automation moved this from To do to Done Dec 21, 2021
@renaudguerin

Hi @natasha41575,
I am assuming IAMPolicyGenerator is where this GKE SA generator is implemented, correct?
I couldn't find any documentation or examples for it. Could you please let me know if I missed it somewhere, before I dive into the source code to try and figure it out?
BTW, is this two-year-old feature still the canonical/easiest way to patch GCP project IDs into the iam.gke.io/gcp-service-account annotation of a ServiceAccount, or should I use something else, like replacements?

Thanks

@renaudguerin

renaudguerin commented Oct 13, 2023

> Hi @natasha41575, I am assuming IAMPolicyGenerator is where this GKE SA generator is implemented, correct? I couldn't find any documentation or examples for it. Could you please let me know if I missed it somewhere, before I dive into the source code to try and figure it out?

Partly answering my own question: I've read the source code and figured out the syntax, in case it's useful to someone else.

kustomization.yaml:

kind: Kustomization
generators:
  - gkesa.yaml

gkesa.yaml:

kind: IAMPolicyGenerator
metadata:
  name: gkesa
cloud: gke
kubernetesService: 
  namespace: k8s-namespace
  name: k8s-sa-name
serviceAccount:
  name: gsa-name
  projectId: project-id

which results in this output:

apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    iam.gke.io/gcp-service-account: gsa-name@project-id.iam.gserviceaccount.com
  name: k8s-sa-name
  namespace: k8s-namespace

Good. However, unless I'm missing something, you now need a gkesa.yaml with a different hard-coded projectId in each overlay, so it appears to have simply moved the problem elsewhere. You can't apply a patch or replacement to this projectId field in the generator config, can you?

If that's correct, then I'm not convinced that this generator is a massive simplification over a per-overlay replacement targeting the entire iam.gke.io/gcp-service-account annotation of a ServiceAccount defined in the base.
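
For comparison, that per-overlay replacement alternative would look roughly like this (a sketch; the ConfigMap and field names are illustrative assumptions):

kustomization.yaml in an overlay (sketch):

resources:
- ../../base
- project-config.yaml   # ConfigMap carrying the full GSA email for this overlay
replacements:
- source:
    kind: ConfigMap
    name: project-config
    fieldPath: data.gcpServiceAccount
  targets:
  - select:
      kind: ServiceAccount
      name: k8s-sa-name
    fieldPaths:
    # the whole annotation value is replaced, because project-id on its own
    # is not delimited within it
    - metadata.annotations.[iam.gke.io/gcp-service-account]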

In fact, for such a simple resource, a generator feels like a lot of cognitive overhead for a small DRY benefit: instead of writing a slightly different 7-line ServiceAccount manifest in each overlay, you end up writing an 11-line IAMPolicyGenerator config in each overlay (OK, I suppose some fields are optional, but still) with only one field differing each time.

Not to mention that, in this (laudable) effort to address valid real-world use cases while still eschewing templating and unstructured edits, you end up having to create highly specific generators like this one for each and every common situation that replacements can't handle well (because there are no clear delimiters available). And we users have to learn about their existence and syntax one by one, like I just did.

Forgive me for beating a dead horse, but I think it goes to show how hard it is to work around templating, when the obvious universal solution to this entire class of problems is probably templating :)

I have the utmost respect for the difficult job of upholding a software architecture vision while fielding hundreds of end user requests for exceptions, and I agree that YAML in / YAML out is a worthy goal.

But I hope you can take the feedback that you're paying (and making users pay) a very high price for that goal, in the form of these absurdly complex constructs that don't even achieve as much as the simpler, less pure alternative.

The eschewed features page argues that "The source yaml gets polluted with $VARs and can no longer be applied as is to the cluster (it must be processed)." I'm sorry, but while true, this is purely academic: I can't think of any real-world Kustomize installation where the source YAML might be applied as is to the cluster. Processing is always a given.

You do list other (perhaps stronger) arguments, but the fact that you're comfortable including this one at all says a lot about your priorities, I think. Like most engineers, I intellectually admire clean software architectures and long term vision, but please consider that Kustomize is a tool that helps people to do a job, not a piece of art.

If you find yourself constantly advising users to use plugins or a pre/post-processing step for use cases that 95% of them have, or (like with replacements and this generator) having to build verbose and unwieldy constructs in an effort to address those needs, perhaps it's a sign that compromises and more pragmatism might be a good idea?

cc @KnVerey @monopole
