CrashLoopBackoff when PersistentVolume=true #123

Open
lolszowy opened this issue Jun 15, 2023 · 2 comments

lolszowy commented Jun 15, 2023

Describe the bug
I am getting a CrashLoopBackOff on a CouchDB pod when I deploy it with persistentVolume enabled.
Values for the Helm chart:

clusterSize: '1'
persistentVolume:
  enabled: 'true'
  storageClass: 'efs-sc-couchdb'
couchdbConfig:
  couchdb:
    uuid: <some-uuid>

StorageClass manifest:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc-couchdb
  namespace: <namespace-name>
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-<fsidhere>
  directoryPerms: "760"

Version of Helm and Kubernetes:
Chart version: couchdb-4.4.1
App version: 3.3.2
EKS K8s Cluster version: 1.25

What happened:
✗ kubectl get pod couchdb-beta-couchdb-0
NAME                     READY   STATUS             RESTARTS        AGE
couchdb-beta-couchdb-0   0/1     CrashLoopBackOff   8 (3m48s ago)   19m

What you expected to happen:
✗ kubectl get pod couchdb-beta-couchdb-0
NAME                     READY   STATUS    RESTARTS   AGE
couchdb-beta-couchdb-0   1/1     Running

How to reproduce it (as minimally and precisely as possible):
helm upgrade --install couchdb-beta couchdb/couchdb --set clusterSize=1 --set persistentVolume.enabled=true --set persistentVolume.storageClass=efs-sc-couchdb --set persistentVolume.accessModes={ReadWriteOnce} --set couchdbConfig.couchdb.uuid=$(curl https://www.uuidgenerator.net/api/version4 2>/dev/null | tr -d -)

✗ kubectl logs couchdb-beta-couchdb-0 -c couchdb
(no output)
✗ kubectl logs couchdb-beta-couchdb-0 -c init-copy
total 8
-rw-r--r-- 1 root root 101 Jun 15 09:36 seedlist.ini
-rw-r--r-- 1 root root 106 Jun 15 09:36 chart.ini
✗ kubectl get pvc database-storage-couchdb-beta-couchdb-0
NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
database-storage-couchdb-beta-couchdb-0    Bound    pvc-5fb45da0-1df7-4a5b-ae9z-6f797b210aex   10Gi       RWO            efs-sc-couchdb   19h

Anything else we need to know:
All infrastructure is based on AWS services such as EKS and EFS.
When I deploy it with persistentVolume.enabled=false, everything seems to be OK.
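For comparison, the working deployment is essentially the same command as the repro above with persistence disabled (a sketch; <some-uuid> stands for whatever uuid value is used for the release):

helm upgrade --install couchdb-beta couchdb/couchdb --set clusterSize=1 --set persistentVolume.enabled=false --set couchdbConfig.couchdb.uuid=<some-uuid>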

willholley (Member) commented

This seems like an environment-specific or general Kubernetes problem rather than an issue with the Helm chart.

If you can report the output of kubectl describe pod couchdb-beta-couchdb-0 and kubectl logs couchdb-beta-couchdb-0 --previous, those may provide a clue as to why the pod is crash-looping.
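For reference, a sketch of those diagnostic commands as they would typically be run (the -c couchdb container name matches the chart's StatefulSet, and -n <namespace> may be needed if the release is not installed in the default namespace):

kubectl describe pod couchdb-beta-couchdb-0
kubectl logs couchdb-beta-couchdb-0 -c couchdb --previous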

lolszowy (Author) commented

Sorry for the late reply; I missed the notification. Here is the kubectl describe output for the pod:

Name:             couchdb-backup-couchdb-0
Namespace:        tpos-sync
Priority:         0
Service Account:  couchdb-backup-couchdb
Node:             ip-10-3-3-185.eu-west-1.compute.internal/10.3.3.185
Start Time:       Mon, 10 Jul 2023 11:21:41 +0200
Labels:           app=couchdb
                  controller-revision-hash=couchdb-backup-couchdb-845c6bb8f
                  release=couchdb-backup
                  statefulset.kubernetes.io/pod-name=couchdb-backup-couchdb-0
Annotations:      <none>
Status:           Running
IP:               10.3.3.211
IPs:
  IP:           10.3.3.211
Controlled By:  StatefulSet/couchdb-backup-couchdb
Init Containers:
  init-copy:
    Container ID:  containerd://8cd79c2a5667d7ea7156458417e02ddbae1835b43692ece5ae8ad8ee67a14429
    Image:         busybox:latest
    Image ID:      docker.io/library/busybox@sha256:2376a0c12759aa1214ba83e771ff252c7b1663216b192fbe5e0fb364e952f85c
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      cp /tmp/chart.ini /default.d; cp /tmp/seedlist.ini /default.d; ls -lrt /default.d;
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 10 Jul 2023 11:21:43 +0200
      Finished:     Mon, 10 Jul 2023 11:21:43 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /default.d from config-storage (rw)
      /tmp/ from config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nvxv8 (ro)
Containers:
  couchdb:
    Container ID:   containerd://64c36aa72f51b433e131b3b0f64d42a975e166edb3f0660bce9c11481b3e22ed
    Image:          couchdb:2.3.1
    Image ID:       docker.io/library/couchdb@sha256:74652e868a3138638ed68eba103a92ec866aa5f1bf40103c654895f7fb802ca8
    Ports:          5984/TCP, 4369/TCP, 9100/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 10 Jul 2023 11:58:09 +0200
      Finished:     Mon, 10 Jul 2023 11:58:09 +0200
    Ready:          False
    Restart Count:  12
    Liveness:       http-get http://:5984/_up delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:5984/_up delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      COUCHDB_USER:      <set to the key 'adminUsername' in secret 'couchdb-backup-couchdb'>     Optional: false
      COUCHDB_PASSWORD:  <set to the key 'adminPassword' in secret 'couchdb-backup-couchdb'>     Optional: false
      COUCHDB_SECRET:    <set to the key 'cookieAuthSecret' in secret 'couchdb-backup-couchdb'>  Optional: false
      ERL_FLAGS:          -name couchdb  -setcookie monster 
    Mounts:
      /opt/couchdb/data from database-storage (rw)
      /opt/couchdb/etc/default.d from config-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nvxv8 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  database-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  database-storage-couchdb-backup-couchdb-0
    ReadOnly:   false
  config-storage:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      couchdb-backup-couchdb
    Optional:  false
  kube-api-access-nvxv8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  38m                    default-scheduler  Successfully assigned tpos-sync/couchdb-backup-couchdb-0 to ip-10-3-3-185.eu-west-1.compute.internal
  Normal   Pulling    38m                    kubelet            Pulling image "busybox:latest"
  Normal   Pulled     38m                    kubelet            Successfully pulled image "busybox:latest" in 558.731924ms (558.747224ms including waiting)
  Normal   Created    38m                    kubelet            Created container init-copy
  Normal   Started    38m                    kubelet            Started container init-copy
  Warning  Unhealthy  38m                    kubelet            Readiness probe failed: Get "http://10.3.3.211:5984/_up": dial tcp 10.3.3.211:5984: connect: connection refused
  Normal   Pulled     37m (x4 over 38m)      kubelet            Container image "couchdb:2.3.1" already present on machine
  Normal   Created    37m (x4 over 38m)      kubelet            Created container couchdb
  Normal   Started    37m (x4 over 38m)      kubelet            Started container couchdb
  Warning  BackOff    3m38s (x171 over 38m)  kubelet            Back-off restarting failed container couchdb in pod couchdb-backup-couchdb-0_tpos-sync(d2f7ac43-2ac1-4d78-827a-aa2e27b18886)

Logs from the previous container are empty as well.
