
Unable to (re)generate roles.yaml using RBAC markers #6716

Closed
DazWilkin opened this issue Apr 3, 2024 · 8 comments
Labels
language/go: Issue is related to a Go operator project
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
triage/needs-information: Indicates an issue needs more information in order to work on it.
triage/support: Indicates an issue that is a support question.
Milestone: Backlog

Comments


DazWilkin commented Apr 3, 2024

Type of question

General operator-related help

Question

What did you do?

I'm attempting to migrate an existing (working) v3 Operator to v4 using the Migration from go/v3 to go/v4 guide.

Aside: I've been using the Operator SDK for a couple of years (on and off) and find the bundle of technologies very difficult to understand. I started with the Operator SDK but think I could have just used kubebuilder. I was unaware that I was using go/v3 and had to stumble around to discover (and still don't fully understand) what go/v4 is, but I'm confident I'm using it correctly, as the PROJECT file contains layout with the single value go.kubebuilder.io/v4.

What did you expect to see?

The Operator worked outside a cluster. After deploying it to a cluster, I received a bunch of role errors and noticed that the role.yaml file was still the basic (unchanged) scaffold:

role.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: clusterrole
    app.kubernetes.io/instance: manager-role
    app.kubernetes.io/component: rbac
    app.kubernetes.io/created-by: my-operator-v4
    app.kubernetes.io/part-of: my-operator-v4
    app.kubernetes.io/managed-by: kustomize
  name: manager-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]

I have 4 controllers, and each has kubebuilder:rbac markers, e.g.:

check_controller.go

// +kubebuilder:rbac:groups=ack.al,resources=checks,verbs=get;list;watch;create;update;patch;delete
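
For reference, the full marker block that kubebuilder scaffolds for a kind also covers the status and finalizers subresources. The sketch below assumes a CheckReconciler in a go/v4-style internal/controller package; only the first marker line is taken verbatim from the project, the rest is illustrative:

// internal/controller/check_controller.go (sketch)
package controller

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
)

// CheckReconciler reconciles Check objects (client and scheme fields elided).
type CheckReconciler struct{}

// controller-gen's rbac generator collects these markers from any package
// matched by paths="./..."; the /status and /finalizers lines mirror the
// extra rules the v3 role.yaml contained.
// +kubebuilder:rbac:groups=ack.al,resources=checks,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=ack.al,resources=checks/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=ack.al,resources=checks/finalizers,verbs=update

// Reconcile is a placeholder; only the markers above matter to the generator.
func (r *CheckReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	return ctrl.Result{}, nil
}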

Curiously, when I run either of the following commands, role.yaml is not updated:

make manifests
/path/to/my-operator-v4/bin/controller-gen-v0.14.0 \
rbac:roleName=manager-role \
crd \
webhook \
paths="./..." \
output:crd:artifacts:config=config/crd/bases

make generate
/path/to/my-operator-v4/bin/controller-gen-v0.14.0 \
object:headerFile="hack/boilerplate.go.txt" \
paths="./..."

Moreover (!) if I rename role.yaml and run either of the above commands, no role.yaml is (re)created.

And bin contains:

controller-gen-v0.14.0
kustomize-v5.3.0
manager
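
For comparison, pointing controller-gen at the RBAC output location explicitly would look something like the sketch below (rbac:roleName and output:rbac:artifacts:config are standard controller-gen options; config/rbac is assumed to be where this layout keeps role.yaml):

/path/to/my-operator-v4/bin/controller-gen-v0.14.0 \
rbac:roleName=manager-role \
paths="./..." \
output:rbac:artifacts:config=config/rbac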

What did you see instead? Under which circumstances?

I expected role.yaml to be (re)generated, but it never is.

I checked the earlier (v3) Operator: its role.yaml contains rules entries for each CRD covering {resource}, {resource}/finalizers, and {resource}/status.

I've manually edited role.yaml with the apiGroups, resources, and verbs for the CRDs, and the Operator now works as intended.
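
For illustration, the manually added rules look roughly like this (a sketch extrapolated from the checks marker above; the other CRDs follow the same pattern):

- apiGroups: ["ack.al"]
  resources: ["checks"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["ack.al"]
  resources: ["checks/status"]
  verbs: ["get", "update", "patch"]
- apiGroups: ["ack.al"]
  resources: ["checks/finalizers"]
  verbs: ["update"]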

Environment

Operator type:

/language go

Kubernetes cluster type:

GKE

$ operator-sdk version

operator-sdk version: "v1.34.1", commit: "edaed1e5057db0349568e0b02df3743051b54e68", kubernetes version: "1.28.0", go version: "go1.21.7", GOOS: "linux", GOARCH: "amd64"

$ go version (if language is Go)

go version go1.22.0 linux/amd64

$ kubectl version

Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.3-gke.1093000

Additional context

openshift-ci bot added the language/go label on Apr 3, 2024
everettraven (Contributor) commented

This seems like it could be an issue with Kubebuilder's kustomize/v2 plugin, the controller-gen command, or controller-gen itself. I haven't had time, and likely won't have much time soon, to dig into this further.

I'm also not sure of the motivation to upgrade your operator's scaffolding from the go/v3 to the go/v4 format. This isn't needed, and it is not a trivial task for operator projects that have been heavily customized. Typically, the recommendation would be to scaffold a new go/v4 operator and transplant the business logic from the go/v3 operator into the appropriate places in the go/v4 project.

My personal recommendation would be to stick with the go/v3 format and continue to update your dependencies as needed.


DazWilkin commented May 6, 2024

Thanks for taking the time to provide your feedback.

I periodically revisit the Operator SDK documentation's Upgrade SDK Version page.

I'm paranoid that I'll accumulate technical debt by not keeping the Operator current.

I suspect that, during the most recent upgrade, I noticed the go/v3 vs. go/v4 docs.

Honestly, I've built this working Operator, but the documentation's terms still confuse me. I don't understand what the 'go/vX plugin' specifically refers to, but, reading the documentation, it felt like something I should also keep current (so I did).

I did scaffold a new go/v4 operator and migrated the controllers etc. into it.

acornett21 added the triage/support label on May 7, 2024
acornett21 added this to the Backlog milestone on May 7, 2024
acornett21 (Contributor) commented

@DazWilkin Am I reading the below correctly?

I did scaffold a new go/v4 operator and migrated the controllers etc. into it.

That you resolved this issue and it can be closed?

acornett21 added the triage/needs-information label on May 7, 2024
DazWilkin (Author) commented

Unfortunately, that's incorrect.

I scaffolded a new go/v4 operator and migrated the controllers etc., but that's where I'm observing this behavior.

openshift-bot commented

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label on Aug 6, 2024
openshift-bot commented

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci bot added the lifecycle/rotten label and removed the lifecycle/stale label on Sep 5, 2024
openshift-bot commented

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-ci bot closed this as completed on Oct 6, 2024

openshift-ci bot commented Oct 6, 2024

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
