Expected Behavior
Pod status becomes Running.
Current Behavior
The node status was always NotReady, and the following error was seen:
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Steps to Reproduce (for bugs)
Here are all the commands I ran.
$ uname -a
Linux raspi4-claster-4 6.1.58-v8+ #1 SMP PREEMPT Thu Oct 26 17:33:30 JST 2023 aarch64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye
$ cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 0 173 1
cpu 0 173 1
cpuacct 0 173 1
blkio 0 173 1
memory 0 173 1
devices 0 173 1
freezer 0 173 1
net_cls 0 173 1
perf_event 0 173 1
net_prio 0 173 1
pids 0 173 1
$ sudo swapoff -a
$ kubelet --version
Kubernetes v1.28.6
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.6", GitCommit:"be3af46a4654bdf05b4838fe94e95ec8c165660c", GitTreeState:"clean", BuildDate:"2024-01-17T13:47:00Z", GoVersion:"go1.20.13", Compiler:"gc", Platform:"linux/arm64"}
$ kubectl version
Client Version: v1.30.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
The connection to the server localhost:8080 was refused - did you specify the right host or port?
$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
overlay
br_netfilter
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
* Applying /usr/lib/sysctl.d/50-bubblewrap.conf ...
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/98-rpi.conf ...
kernel.printk = 3 4 1 3
vm.min_free_kbytes = 16384
net.ipv4.ping_group_range = 0 2147483647
* Applying /etc/sysctl.d/99-sysctl.conf ...
net.ipv4.ip_forward = 1
kernel.keys.root_maxbytes = 25000000
kernel.keys.root_maxkeys = 1000000
kernel.panic = 10
kernel.panic_on_oops = 1
vm.overcommit_memory = 1
vm.panic_on_oom = 0
net.ipv4.ip_local_reserved_ports = 30000-32767
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
* Applying /usr/lib/sysctl.d/protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.conf ...
net.ipv4.ip_forward = 1
kernel.keys.root_maxbytes = 25000000
kernel.keys.root_maxkeys = 1000000
kernel.panic = 10
kernel.panic_on_oops = 1
vm.overcommit_memory = 1
vm.panic_on_oom = 0
net.ipv4.ip_local_reserved_ports = 30000-32767
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
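A quick sanity check that the modules are loaded and the settings took effect (not strictly required; shown for completeness):
$ lsmod | grep -E 'overlay|br_netfilter'
$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward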
$ wget https://github.com/containerd/containerd/releases/download/v1.7.17/containerd-1.7.17-linux-arm64.tar.gz
$ sudo tar Cxzvf /usr/local containerd-1.7.17-linux-arm64.tar.gz
$ sudo mkdir -p /usr/local/lib/systemd/system
$ sudo wget -O /usr/local/lib/systemd/system/containerd.service https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now containerd
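One step not shown above is generating a containerd config file. The containerd getting-started guide suggests writing out the default config and, on a systemd-based distro like this one, enabling the systemd cgroup driver; a minimal sketch of that step, assuming the default paths:
$ sudo mkdir -p /etc/containerd
$ containerd config default | sudo tee /etc/containerd/config.toml
$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
$ sudo systemctl restart containerd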
$ wget https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.arm64
$ sudo install -m 755 runc.arm64 /usr/local/sbin/runc
$ wget https://github.com/containernetworking/plugins/releases/download/v1.5.0/cni-plugins-linux-arm64-v1.5.0.tgz
$ mkdir -p /opt/cni/bin
$ sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-arm64-v1.5.0.tgz
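Since the error being reported is "cni plugin not initialized", the default CNI locations are worth checking at this point. containerd looks for plugin binaries in /opt/cni/bin and for a network config in /etc/cni/net.d; the latter is only populated by the network add-on (flannel, below), so it stays empty until flannel runs:
$ ls /opt/cni/bin
$ ls /etc/cni/net.d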
$ sudo systemctl status containerd
containerd.service - containerd container runtime
Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2024-05-19 01:45:20 JST; 1 weeks 6 days ago
Docs: https://containerd.io
Main PID: 908 (containerd)
Tasks: 13
Memory: 1.0G
CPU: 6h 19min 15.903s
CGroup: /system.slice/containerd.service
/usr/local/bin/containerd
Jun 01 20:44:26 raspi4-claster-4 containerd[908]: time="2024-06-01T20:44:26.980525509+09:00" level=info msg="Container to stop
Jun 01 20:44:26 raspi4-claster-4 containerd[908]: time="2024-06-01T20:44:26.980865709+09:00" level=info msg="Container to stop
Jun 01 20:44:27 raspi4-claster-4 containerd[908]: time="2024-06-01T20:44:27.078520679+09:00" level=error msg="StopPodSandbox for
Jun 01 20:44:27 raspi4-claster-4 containerd[908]: time="2024-06-01T20:44:27.138124367+09:00" level=info msg="StopPodSandbox for0
Jun 01 20:44:27 raspi4-claster-4 containerd[908]: time="2024-06-01T20:44:27.138306606+09:00" level=info msg="Container to stop
Jun 01 20:44:27 raspi4-claster-4 containerd[908]: time="2024-06-01T20:44:27.138353235+09:00" level=info msg="Container to stop
Jun 01 20:44:27 raspi4-claster-4 containerd[908]: time="2024-06-01T20:44:27.238559532+09:00" level=error msg="StopPodSandbox for
Jun 01 20:44:31 raspi4-claster-4 containerd[908]: time="2024-06-01T20:44:31.356146813+09:00" level=error msg="failed to reload cni configuration after receiving fs change event
Jun 01 20:44:31 raspi4-claster-4 containerd[908]: time="2024-06-01T20:44:31.356473644+09:00" level=error msg="failed to reload cni configuration after receiving fs change event
Jun 01 20:44:31 raspi4-claster-4 containerd[908]: time="2024-06-01T20:44:31.356599402+09:00" level=error msg="failed to reload cni configuration after receiving fs change event
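The "failed to reload cni configuration" errors mean containerd saw a change under its CNI config directory but could not load a valid config from it. Two ways to inspect this (default paths assumed):
$ sudo ls -l /etc/cni/net.d
$ sudo journalctl -u containerd | grep -i cni | tail -n 20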
$ export VIP=192.168.0.40
export INTERFACE=eth0
KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")
sudo ctr images pull ghcr.io/kube-vip/kube-vip:$KVVERSION
sudo -E ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip manifest pod \
--interface $INTERFACE \
--vip $VIP \
--controlplane \
--arp \
--leaderElection | sudo -E tee /etc/kubernetes/manifests/kube-vip.yaml
WARN [0000] DEPRECATION: The `mirrors` property of `[plugins."io.containerd.grpc.v1.cri".registry]` is deprecated since containerd v1.5 and will be removed in containerd v2.0. Use `config_path` instead.
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "6443"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: vip_address
value: 192.168.0.40
- name: prometheus_server
value: :2112
image: ghcr.io/kube-vip/kube-vip:v0.8.0
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: /etc/kubernetes/admin.conf
name: kubeconfig
status: {}
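To confirm the manifest landed where the kubelet picks up static pods:
$ ls -l /etc/kubernetes/manifests/kube-vip.yaml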
$ sudo kubeadm config images pull
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint 192.168.0.40:6443 --upload-certs
I0601 20:47:31.333420 3278643 version.go:256] remote version is much newer: v1.30.1; falling back to: stable-1.28
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.28.10
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.28.10
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.28.10
[config/images] Pulled registry.k8s.io/kube-proxy:v1.28.10
[config/images] Pulled registry.k8s.io/pause:3.9
[config/images] Pulled registry.k8s.io/etcd:3.5.10-0
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1
I0601 20:48:12.975079 3278997 version.go:256] remote version is much newer: v1.30.1; falling back to: stable-1.28
[init] Using Kubernetes version: v1.28.10
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local raspi4-claster-4] and IPs [10.96.0.1 192.168.0.14 192.168.0.40]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost raspi4-claster-4] and IPs [192.168.0.14 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost raspi4-claster-4] and IPs [192.168.0.14 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.529635 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
84242288bbfe0ead04dfa08e97921d132290fde50401fa38945ed96f881a24d3
[mark-control-plane] Marking the node raspi4-claster-4 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node raspi4-claster-4 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: rn0ehs.86rnj9zkkec79utw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 192.168.0.40:6443 --token rn0ehs.86rnj9zkkec79utw \
--discovery-token-ca-cert-hash sha256:3f383039b0669f507c8ca5f531a395a148ba9142c1fc9ca3f651487f9746afdb \
--control-plane --certificate-key 84242288bbfe0ead04dfa08e97921d132290fde50401fa38945ed96f881a24d3
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.40:6443 --token rn0ehs.86rnj9zkkec79utw \
--discovery-token-ca-cert-hash sha256:3f383039b0669f507c8ca5f531a395a148ba9142c1fc9ca3f651487f9746afdb
$ nc -v 192.168.0.40 6443
Connection to 192.168.0.40 6443 port [tcp/*] succeeded!
^C
$ mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
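Flannel writes its CNI config (/etc/cni/net.d/10-flannel.conflist) only once its DaemonSet pod is running on the node, so the pod status and logs are the first things to check, for example:
$ kubectl -n kube-flannel get pods -o wide
$ kubectl -n kube-flannel logs -l app=flannel --tail=20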
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
raspi4-claster-4 NotReady control-plane 44s v1.30.1
$ kubectl describe node raspi4-claster-4
Name: raspi4-claster-4
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=raspi4-claster-4
kubernetes.io/os=linux
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"46:77:ca:ac:35:e8"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.168.0.14
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 01 Jun 2024 20:48:44 +0900
Taints: node-role.kubernetes.io/control-plane:NoSchedule
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: raspi4-claster-4
AcquireTime: <unset>
RenewTime: Sat, 01 Jun 2024 20:49:58 +0900
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Sat, 01 Jun 2024 20:49:53 +0900 Sat, 01 Jun 2024 20:49:53 +0900 FlannelIsUp Flannel is running on this node
MemoryPressure False Sat, 01 Jun 2024 20:49:48 +0900 Sat, 01 Jun 2024 20:48:44 +0900 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 01 Jun 2024 20:49:48 +0900 Sat, 01 Jun 2024 20:48:44 +0900 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 01 Jun 2024 20:49:48 +0900 Sat, 01 Jun 2024 20:48:44 +0900 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Sat, 01 Jun 2024 20:49:48 +0900 Sat, 01 Jun 2024 20:48:44 +0900 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
InternalIP: 192.168.0.14
Hostname: raspi4-claster-4
Capacity:
cpu: 4
ephemeral-storage: 238992196Ki
memory: 7997564Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 220255207469
memory: 7895164Ki
pods: 110
System Info:
Machine ID: 5189c1693d94426b9a0daf450d588970
System UUID: 5189c1693d94426b9a0daf450d588970
Boot ID: c9f14be8-ba10-45ad-99c6-7c3eb37f4b01
Kernel Version: 6.1.58-v8+
OS Image: Debian GNU/Linux 11 (bullseye)
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.13
Kubelet Version: v1.30.1
Kube-Proxy Version: v1.30.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-flannel kube-flannel-ds-6g5sc 100m (2%) 0 (0%) 50Mi (0%) 0 (0%) 40s
kube-system etcd-raspi4-claster-4 100m (2%) 0 (0%) 100Mi (1%) 0 (0%) 75s
kube-system kube-apiserver-raspi4-claster-4 250m (6%) 0 (0%) 0 (0%) 0 (0%) 77s
kube-system kube-controller-manager-raspi4-claster-4 200m (5%) 0 (0%) 0 (0%) 0 (0%) 75s
kube-system kube-proxy-cccq2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 64s
kube-system kube-scheduler-raspi4-claster-4 100m (2%) 0 (0%) 0 (0%) 0 (0%) 75s
kube-system kube-vip-raspi4-claster-4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 75s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (18%) 0 (0%)
memory 150Mi (1%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 62s kube-proxy
Normal Starting 75s kubelet Starting kubelet.
Warning InvalidDiskCapacity 75s kubelet invalid capacity 0 on image filesystem
Normal NodeHasSufficientMemory 75s kubelet Node raspi4-claster-4 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 75s kubelet Node raspi4-claster-4 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 75s kubelet Node raspi4-claster-4 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 75s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 65s node-controller Node raspi4-claster-4 event: Registered Node raspi4-claster-4 in Controller
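Given the KubeletNotReady condition above, these diagnostics should show whether the flannel CNI config was ever written (standard paths; nothing cluster-specific assumed):
$ ls /etc/cni/net.d
$ sudo journalctl -u kubelet | grep -i cni | tail -n 20
$ kubectl -n kube-flannel describe pod -l app=flannel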
Context
I want to create a K8s HA cluster for my project.
Your Environment
See the command output above for details (uname, lsb_release, kubelet, kubeadm, kubectl, and containerd versions).