Timed out waiting for external-attacher of ch.ctrox.csi.s3-driver CSI driver to attach volume #80

Open
johnroyer opened this issue Sep 14, 2022 · 11 comments

Comments

@johnroyer

johnroyer commented Sep 14, 2022

  • Configuration files are the same as the examples
  • The AWS S3 bucket was created after the configuration files were applied

Only the test pod's mountPath was changed to /var/www/html.

pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: csi-s3-test-nginx
  namespace: default
spec:
  containers:
   - name: csi-s3-test-nginx
     image: nginx
     volumeMounts:
       - mountPath: /var/www/html
         name: webroot
  volumes:
   - name: webroot
     persistentVolumeClaim:
       claimName: csi-s3-pvc
       readOnly: false
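
For reference, the PVC this points at is the one from the project's examples; roughly like the following (a sketch, assuming the example storage class name csi-s3):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-s3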

I get the error `timed out waiting for external-attacher of ch.ctrox.csi.s3-driver CSI driver to attach volume` while creating the pod:

$ kubectl get events | tail
6m12s       Normal    Pulled                  pod/csi-s3-95nz6                   Successfully pulled image "ctrox/csi-s3:v1.2.0-rc.2" in 59.670915168s
6m12s       Normal    Created                 pod/csi-s3-95nz6                   Created container csi-s3
6m12s       Normal    Started                 pod/csi-s3-95nz6                   Started container csi-s3
6m9s        Normal    ExternalProvisioning    persistentvolumeclaim/csi-s3-pvc   waiting for a volume to be created, either by external provisioner "ch.ctrox.csi.s3-driver" or manually created by system administrator
6m7s        Normal    Provisioning            persistentvolumeclaim/csi-s3-pvc   External provisioner is provisioning volume for claim "prod/csi-s3-pvc"
6m5s        Normal    ProvisioningSucceeded   persistentvolumeclaim/csi-s3-pvc   Successfully provisioned volume pvc-53f12ea9-9398-49dd-b16c-0454b145b746
2m35s       Normal    Scheduled               pod/csi-s3-test-nginx              Successfully assigned prod/csi-s3-test-nginx to minikube
35s         Warning   FailedAttachVolume      pod/csi-s3-test-nginx              AttachVolume.Attach failed for volume "pvc-53f12ea9-9398-49dd-b16c-0454b145b746" : timed out waiting for external-attacher of ch.ctrox.csi.s3-driver CSI driver to attach volume pvc-53f12ea9-9398-49dd-b16c-0454b145b746
32s         Warning   FailedMount             pod/csi-s3-test-nginx              Unable to attach or mount volumes: unmounted volumes=[webroot], unattached volumes=[webroot kube-api-access-m66ll]: timed out waiting for the condition
7m22s       Normal    SuccessfulCreate        daemonset/csi-s3                   Created pod: csi-s3-95nz6

Is this a network issue, or some kind of misconfiguration? Thanks.


environment:

Docker version 20.10.18, build b40c2f6

minikube v1.26.1 on Ubuntu 20.04

Client Version: v1.25.1
Kustomize Version: v4.5.7
Server Version: v1.24.3
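
A useful first check (assuming the default names from the deploy manifests, so adjust if yours differ) is the attacher's own logs, which usually show the underlying error:

kubectl -n kube-system logs csi-attacher-s3-0 -c csi-attacher
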
@fallmo

fallmo commented Sep 16, 2022

I had the same problem. I looked at the logs of the csi-attacher-s3 pod and first saw `Failed to list *v1beta1.VolumeAttachment: the server could not find the requested resource`. I figured it was a k8s version issue, so I updated the container image of the csi-attacher StatefulSet from v2.2.1 to canary (the latest):

kubectl -n kube-system set image statefulset/csi-attacher-s3 csi-attacher=quay.io/k8scsi/csi-attacher:canary

Next I got a permission error: `v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:csi-attacher-sa" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope`.

I tried to modify the role bindings, but I couldn't find the right combination, so I ended up giving the csi-attacher-sa service account cluster-admin privileges, as shown below:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-attacher-all
subjects:
  - kind: ServiceAccount
    name: csi-attacher-sa
    namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
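
Note: cluster-admin is far broader than the attacher actually needs, so treat this as a quick workaround; the narrower rules in the next comment are preferable.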

@denis-ev

@fallmo I had the same problem, but just adding the following rules worked for me:

  - apiGroups: ["storage.k8s.io"]
    resources: ["*"]
    verbs: ["get", "list", "watch", "update", "patch"]

or

  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments","storageclass"]
    verbs: ["get", "list", "watch", "update", "patch"]

@CallMeLaNN

Related to #72 (comment)

Mine was fixed by applying this: https://github.com/ctrox/csi-s3/pull/70/files

@mdutkin

mdutkin commented Mar 8, 2023

Related to #72 (comment)

Mine was fixed by applying this: https://github.com/ctrox/csi-s3/pull/70/files

I did this and it seemed to work (I didn't check it right after), but then I decided to pull the latest from the repo and ran:

cd deploy/kubernetes
kubectl apply -f provisioner.yaml
kubectl apply -f attacher.yaml
kubectl apply -f csi-s3.yaml

which made it work 👍

@RobinJ1995

So, for anyone else who came here after realising that, since a Kubernetes upgrade, they couldn't create/mount new S3 volumes anymore, I'll save you some time:

  1. Apply the latest provisioner, attacher and csi-s3 files from https://github.com/ctrox/csi-s3/tree/master/deploy/kubernetes
  2. Change the external-attacher-runner ClusterRole to go from resources: ["volumeattachments"] to resources: ["volumeattachments", "volumeattachments/status", "storageclass"]
  3. Find quay.io/k8scsi/csi-attacher:v2.2.0 on the csi-attacher-s3 StatefulSet and bump it up a major version to quay.io/k8scsi/csi-attacher:v3.1.0
  4. Apply

That should have it working again until the next (seemingly inevitable) breaking change :)
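
A rough sketch of steps 2 and 3 as commands, assuming the default object and container names from the project's manifests:

kubectl edit clusterrole external-attacher-runner
# add the extra resources from step 2 to the storage.k8s.io rule, then:
kubectl -n kube-system set image statefulset/csi-attacher-s3 csi-attacher=quay.io/k8scsi/csi-attacher:v3.1.0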

@Venkatesh7591

So, for anyone else who came here after realising that, since a Kubernetes upgrade, they couldn't create/mount new S3 volumes anymore, I'll save you some time:

  1. Apply the latest provisioner, attacher and csi-s3 files from https://github.com/ctrox/csi-s3/tree/master/deploy/kubernetes
  2. Change the external-attacher-runner ClusterRole to go from resources: ["volumeattachments"] to resources: ["volumeattachments", "volumeattachments/status", "storageclass"]
  3. Find quay.io/k8scsi/csi-attacher:v2.2.0 on the csi-attacher-s3 StatefulSet and bump it up a major version to quay.io/k8scsi/csi-attacher:v3.1.0
  4. Apply

That should have it working again until the next (seemingly inevitable) breaking change :)

After making the changes above, the pod went into the Running state, but we are getting this issue:

root@csi-s3-test-nginx:/var/lib/www/html# ls
ls: reading directory '.': Input/output error

Any help with this?

@johnnytolengo

@Venkatesh7591 you saved my day, it worked. Thank you!

@panghaohao

@Venkatesh7591 Hello, I have encountered the same problem. Have you solved it yet?

@johnnytolengo

Follow @RobinJ1995's instructions.

@ZXiangQAQ

Related to #72 (comment)

Mine was fixed by applying this: https://github.com/ctrox/csi-s3/pull/70/files

Many thanks!

@msaustral

See issue #94.

This worked for us:

  - apiGroups: ["storage.k8s.io"]
    resources: ["*"]
    verbs: ["get", "list", "watch", "update", "patch"]
