
Attacher Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock #71

Open
joedborg opened this issue Apr 25, 2022 · 4 comments

@joedborg

The provisioner is working fine and is creating buckets in S3. However, the csi-s3 daemon set pod sits in ContainerCreating and the attacher is erroring.

$ kubectl logs -l app=csi-provisioner-s3 -c csi-s3 -n kube-system
I0425 15:19:35.175361       1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
I0425 15:19:35.175367       1 driver.go:93] Enabling volume access mode: SINGLE_NODE_WRITER
I0425 15:19:35.175571       1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"//var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock", Net:"unix"}
I0425 15:19:35.571124       1 utils.go:97] GRPC call: /csi.v1.Identity/Probe
I0425 15:19:35.572567       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginInfo
I0425 15:19:35.573690       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginCapabilities
I0425 15:19:35.574246       1 utils.go:97] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I0425 15:20:48.500878       1 utils.go:97] GRPC call: /csi.v1.Controller/CreateVolume
I0425 15:20:48.500900       1 controllerserver.go:87] Got a request to create volume pvc-0050921d-b7f2-4158-aab9-118231645848
I0425 15:20:48.847037       1 controllerserver.go:133] create volume pvc-0050921d-b7f2-4158-aab9-118231645848
$ kubectl logs pod/csi-attacher-s3-0 -n kube-system
I0425 15:19:28.175346       1 main.go:91] Version: v2.2.0-0-g97411fa7
I0425 15:19:28.177151       1 connection.go:153] Connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:19:38.177314       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:19:48.177288       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:19:58.177282       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:20:08.177357       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:20:18.177324       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:20:28.178425       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:20:38.177322       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
W0425 15:20:48.177307       1 connection.go:172] Still connecting to unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
$ kubectl get all -A
NAMESPACE     NAME                                           READY   STATUS              RESTARTS   AGE
kube-system   pod/calico-node-crbmq                          1/1     Running             0          141m
kube-system   pod/coredns-64c6478b6c-w99ts                   1/1     Running             0          141m
kube-system   pod/calico-kube-controllers-75b46474ff-lnlhw   1/1     Running             0          141m
kube-system   pod/csi-attacher-s3-0                          1/1     Running             0          137m
kube-system   pod/csi-provisioner-s3-0                       2/2     Running             0          137m
default       pod/csi-s3-test-nginx                          0/1     ContainerCreating   0          134m
kube-system   pod/hostpath-provisioner-7764447d7c-5xn8q      1/1     Running             0          133m
kube-system   pod/csi-s3-2wshf                               0/2     ContainerCreating   0          133m

NAMESPACE     NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes           ClusterIP   10.152.183.1     <none>        443/TCP                  141m
kube-system   service/kube-dns             ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP   141m
kube-system   service/csi-provisioner-s3   ClusterIP   10.152.183.22    <none>        65535/TCP                137m
kube-system   service/csi-attacher-s3      ClusterIP   10.152.183.217   <none>        65535/TCP                137m

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux   141m
kube-system   daemonset.apps/csi-s3        1         1         0       1            0           <none>                   137m

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns                   1/1     1            1           141m
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           141m
kube-system   deployment.apps/hostpath-provisioner      1/1     1            1           133m

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-64c6478b6c                   1         1         1       141m
kube-system   replicaset.apps/calico-kube-controllers-75b46474ff   1         1         1       141m
kube-system   replicaset.apps/hostpath-provisioner-7764447d7c      1         1         1       133m

NAMESPACE     NAME                                  READY   AGE
kube-system   statefulset.apps/csi-attacher-s3      1/1     137m
kube-system   statefulset.apps/csi-provisioner-s3   1/1     137m
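
The "Still connecting" loop means the attacher never finds the driver's socket file. Two quick checks (a sketch only; the pod name is taken from the output above, and the socket path assumes the default kubelet root, so adjust both for your setup):

$ kubectl describe pod csi-s3-2wshf -n kube-system
$ ls -l /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/

The describe events should say why the pod is stuck in ContainerCreating, and the ls shows whether csi.sock was ever created on the node.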
@joedborg (Author)

Seemed to fix this with:

sudo mkdir /var/lib/kubelet/pods
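
If this works for you too, it presumably means the kubelet's directory tree was incomplete on the host, so the socket directory the sidecars watch never came up. One way to confirm after creating it (a sketch assuming the default kubelet root; adjust the path if your distribution relocates /var/lib/kubelet):

$ ls -l /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock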

@maxkrukov

Hi, where did you create this directory?

@qyk1995

qyk1995 commented Oct 9, 2022

@maxkrukov Is it solved now? How did you solve it?

@fyySky

fyySky commented Sep 18, 2023

If you run into the same problem, update the provisioner YAML as shown below. I have patched the YAML and submitted it, but I don't know whether the author has accepted it, so I'm pasting it here:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-provisioner-sa
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: csi-provisioner-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
  name: csi-provisioner-s3
  namespace: kube-system
  labels:
    app: csi-provisioner-s3
spec:
  selector:
    app: csi-provisioner-s3
  ports:
    - name: csi-s3-dummy
      port: 65535
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-provisioner-s3
  namespace: kube-system
spec:
  serviceName: "csi-provisioner-s3"
  replicas: 1
  selector:
    matchLabels:
      app: csi-provisioner-s3
  template:
    metadata:
      labels:
        app: csi-provisioner-s3
    spec:
      serviceAccount: csi-provisioner-sa
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: "Exists"
      containers:
        - name: csi-provisioner
          image: quay.io/k8scsi/csi-provisioner:v2.1.0
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=4"
          env:
            - name: ADDRESS
              value: /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver
        - name: csi-s3
          image: ctrox/csi-s3:v1.2.0-rc.2
          args:
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--nodeid=$(NODE_ID)"
            - "--v=4"
          env:
            - name: CSI_ENDPOINT
              value: unix:///var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          imagePullPolicy: "Always"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver
        - name: csi-attacher
          image: quay.io/k8scsi/csi-attacher:v2.2.0
          args:
            - "--v=4"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver
      volumes:
        - name: socket-dir
          emptyDir: {}
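
To try this, save the manifest and apply it over the existing objects, then restart the StatefulSet so all three containers come up together (the filename here is just an example):

$ kubectl apply -f provisioner.yaml
$ kubectl rollout restart statefulset/csi-provisioner-s3 -n kube-system

The practical effect is that csi-attacher now runs as a sidecar next to the csi-s3 driver and reaches it over the shared socket-dir emptyDir, so it no longer waits for a socket to appear on the host.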
