Unable to start Karpenter using eksctl with EKS API authentication mode. #7750
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

This issue was closed because it has been stalled for 5 days with no activity.
Hi All,

I am trying to provision an EKS cluster with eksctl and add Karpenter; the cluster config file and NodePool file are below. The VPC CNI pods are running, and I have verified that prefix delegation mode works as expected, but when I checked Karpenter I found an issue.
Cluster config:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: dev-eks-cluster-1
  region: us-west-2
  version: '1.28'
  tags:
    environment: dev
    product: msd
    karpenter.sh/discovery: dev-eks-cluster-1

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]
    logRetentionInDays: 7

iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: aws-load-balancer-controller-sa
        namespace: kube-system
      wellKnownPolicies:
        awsLoadBalancerController: true
      tags:
        environment: dev
        product: msd
    - metadata:
        name: vpc-cni-sa # addon
        namespace: kube-system
      attachPolicyARNs:
      tags:
        environment: dev
        product: msd
    - metadata:
        name: ebs-csi-controller-sa # addon
        namespace: kube-system
      attachPolicyARNs:
      tags:
        environment: dev
        product: msd
    - metadata:
        name: efs-csi-controller-sa # addon
        namespace: kube-system
      attachPolicyARNs:
      tags:
        environment: dev
        product: msd
    - metadata:
        name: external-dns-sa
        namespace: kube-system
      wellKnownPolicies:
        externalDNS: true
      tags:
        environment: dev
        product: msd
    - metadata:
        name: cert-manager-sa
        namespace: cert-manager
      wellKnownPolicies:
        certManager: true
      tags:
        environment: dev
        product: msd
    - metadata:
        name: cwagent-prometheus-sa
        namespace: amazon-cloudwatch
      # roleName: dev-eks-cluster-cw-pr-role
      attachPolicyARNs:
      tags:
        environment: dev
        product: msd

vpc:
  # cidr: 10.0.0.0/16
  nat:
    gateway: Single # other options: Disable, Single (default), HighlyAvailable
  clusterEndpoints:
    publicAccess: true
    privateAccess: true

managedNodeGroups:
  - privateNetworking: true
    minSize: 1
    desiredCapacity: 2
    maxSize: 4
    instanceType: c5.large
    volumeSize: 20
    volumeType: gp3
    maxPodsPerNode: 110
    # enablePrefixDelegation: true
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
    # volumeEncrypted: true
    # volumeKmsKeyID:

karpenter:
  version: '0.36.1'
  createServiceAccount: true

fargateProfiles: # fargate will have a pod execution role created.
  # - name: fp-default
  #   selectors:
  #     - namespace: default
  #     - namespace: kube-system
  #   tags:
  #     environment: dev
  #     product: msd
  - selectors:
      - labels:
          runon: fargate
    tags:
      environment: dev
      product: msd
  - selectors:
      - labels:
          runon: fargate
    tags:
      environment: dev
      product: msd
  - selectors:
      - labels:
          runon: fargate
    tags:
      environment: dev
      product: generic
  - selectors:
      - labels:
          runon: fargate
    tags:
      environment: dev
      product: msd
  # - name: karpenter
  #   selectors:
  #     - namespace: karpenter
  #       labels:
  #         runon: fargate
  #   tags:
  #     environment: dev
  #     product: msd

addons:
  - version: latest
    configurationValues: '{"env":{"ENABLE_PREFIX_DELEGATION":"true", "ENABLE_POD_ENI":"true", "POD_SECURITY_GROUP_ENFORCING_MODE":"standard"},"enableNetworkPolicy": "true"}'
    resolveConflicts: overwrite
  - version: latest
    resolveConflicts: overwrite
  - version: latest # if your control plane is running Kubernetes 1.29, then the kube-proxy minor version can't be earlier than 1.27
    resolveConflicts: overwrite
  - version: latest
    resolveConflicts: overwrite
  - version: latest
    resolveConflicts: overwrite
  - version: latest
    resolveConflicts: overwrite

accessConfig:
  bootstrapClusterCreatorAdminPermissions: false # default is true
  authenticationMode: API
  accessEntries:
    - principalARN: arn:aws:iam::xxxxxxx:role/gitops-role
      kubernetesGroups: # optional Kubernetes groups
        - cloud9eksadmin # groups can be used to give permissions via RBAC
      accessPolicies: # optional access policies
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
          accessScope:
            type: cluster # or namespace
    - principalARN: arn:aws:iam::xxxxx:role/role-eks-cluster-admin
    - principalARN: arn:aws:iam::xxxxx:role/aws-reserved/sso.amazonaws.com/us-east-2/AWSReservedSSO_adminaccess_xxxxx
      kubernetesGroups:
        - eksclusadmins
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
          accessScope:
            type: cluster
    - principalARN: arn:aws:iam::xxxx:role/role-eks-admin
      kubernetesGroups:
        - eksadmins
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy
          accessScope:
            type: cluster
    - principalARN: arn:aws:iam::xxxxx:role/role-eks-view
      kubernetesGroups:
        - eksview
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy
          accessScope:
            type: cluster
    - principalARN: arn:aws:iam::xxxxxx:role/role-eks-edit
      kubernetesGroups:
        - eksedit
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy
          accessScope:
            type: cluster
```
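Because the cluster runs with `authenticationMode: API`, nodes that Karpenter launches also need an access entry for their IAM role to join the cluster; none of the entries above covers it. A minimal sketch of such an entry, assuming the node role name is the one referenced in the EC2NodeClass below (`eksctl-KarpenterNodeRole-dev-eks-cluster-1`) and may differ per account:

```yaml
# Hypothetical addition under accessConfig.accessEntries.
# EC2 node access entries take a type instead of accessPolicies.
- principalARN: arn:aws:iam::xxxxxx:role/eksctl-KarpenterNodeRole-dev-eks-cluster-1
  type: EC2_LINUX
```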
NodePool and EC2NodeClass:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    metadata:
      labels:
        type: karpenter
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
        - key: "node.kubernetes.io/instance-type"
          operator: In
          values: ["c5.large", "m5.large", "r5.large", "m5.xlarge"]
      nodeClassRef:
        name: default
  limits:
    cpu: "1000"
    memory: 1000Gi
  disruption:
    consolidationPolicy: WhenUnderutilized
    expireAfter: 720h # 30 * 24h = 720h
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2 # Amazon Linux 2
  role: arn:aws:iam::xxxxxx:role/eksctl-KarpenterNodeRole-dev-eks-cluster-1
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: dev-eks-cluster
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: dev-eks-cluster
```
Issue:

```
{"level":"ERROR","time":"2024-05-07T20:42:35.707Z","logger":"controller","message":"Reconciler error","commit":"fb4d75f","controller":"nodeclass","controllerGroup":"karpenter.k8s.aws","controllerKind":"EC2NodeClass","EC2NodeClass":{"name":"default"},"namespace":"","name":"default","reconcileID":"41b5303b-7a40-40ff-b5a3-9c097f7f2787","error":"no subnets exist given constraints [{map[karpenter.sh/discovery:dev-eks-cluster] }]; no security groups exist given constraints; creating instance profile, getting instance profile \"dev-eks-cluster-1_4067990795380418201\", AccessDenied: User: arn:aws:sts::xxxxxxx:assumed-role/eksctl-dev-eks-cluster-1-iamservice-role/1715113464336160994 is not authorized to perform: iam:GetInstanceProfile on resource: instance profile dev-eks-cluster-1_4067990795380418201 because no identity-based policy allows the iam:GetInstanceProfile action\n\tstatus code: 403, request id: 8ae45ca9-eb29-4c46-821a-1faa9bd54edb","errorCauses":[{"error":"no subnets exist given constraints [{map[karpenter.sh/discovery:dev-eks-cluster] }]"},{"error":"no security groups exist given constraints"},{"error":"creating instance profile, getting instance profile \"dev-eks-cluster-1_4067990795380418201\", AccessDenied: User: arn:aws:sts::xxxxxxx:assumed-role/eksctl-dev-eks-cluster-1-iamservice-role/1715113464336160994 is not authorized to perform: iam:GetInstanceProfile on resource: instance profile dev-eks-cluster-1_4067990795380418201 because no identity-based policy allows the iam:GetInstanceProfile action\n\tstatus code: 403, request id: 8ae45ca9-eb29-4c46-821a-1faa9bd54edb"}]}
```
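Note that the first two error causes show the selectors matching on `karpenter.sh/discovery: dev-eks-cluster`, while the cluster config tags resources with `dev-eks-cluster-1`, so no subnets or security groups match. A selector term matching the actual cluster tag would look like this sketch:

```yaml
# Illustrative EC2NodeClass fragment; the tag value must match the
# karpenter.sh/discovery tag actually applied to the subnets.
subnetSelectorTerms:
  - tags:
      karpenter.sh/discovery: dev-eks-cluster-1
securityGroupSelectorTerms:
  - tags:
      karpenter.sh/discovery: dev-eks-cluster-1
```

The `iam:GetInstanceProfile` AccessDenied in the third cause looks like a separate permissions gap on the `eksctl-dev-eks-cluster-1-iamservice-role` that the Karpenter controller assumes.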
I am using EKS in API authentication mode, so Karpenter needs to use access entries. Please suggest what else needs to change to use EKS API mode and overcome this issue.