
NAT Node not ready, cannot ping wireguard #326

Closed · Pseudow opened this issue Jul 26, 2022 · 1 comment

Pseudow commented Jul 26, 2022

Situation

My node behind NAT has problems in a kubeadm + Kilo setup. First, the WireGuard IP that Kilo assigned to the NAT'd node is not pingable from the master.

To be clear, I made sure that ports 51820 and 10250 are open.
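Not part of the original report, but a minimal connectivity check along these lines could confirm whether the tunnel itself is up; the interface name kilo0 is Kilo's default, and the addresses are the ones from the node description below:

# UDP port probe of the NAT'd node's public WireGuard endpoint
# (only indicative for UDP, since there is no handshake to confirm).
$ nc -zvu 141.94.168.41 51820

# Show Kilo's WireGuard interface: a recent handshake and non-zero
# transfer counters mean the tunnel to the peer is actually up.
$ sudo wg show kilo0

# Ping the WireGuard IP that Kilo assigned to the NAT'd node.
$ ping -c 3 10.4.0.1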

Logs

Node description

Here is the description of the node. I set the force-endpoint annotation, but it doesn't seem to change anything (a sketch of how the annotation is applied follows the description):

Name:               altarise-third
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=altarise-third
                    kubernetes.io/os=linux
Annotations:        kilo.squat.ai/discovered-endpoints: {}
                    kilo.squat.ai/endpoint: 141.94.168.41:51820
                    kilo.squat.ai/force-endpoint: 141.94.168.41:51820
                    kilo.squat.ai/granularity: location
                    kilo.squat.ai/internal-ip: 192.168.9.33/24
                    kilo.squat.ai/key: utun1pAiMTAf66vUv+iuttfy+dU3Zni1ZWd1mFlQJw0=
                    kilo.squat.ai/last-seen: 1658872323
                    kilo.squat.ai/wireguard-ip: 10.4.0.1/16
                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 26 Jul 2022 21:50:24 +0000
Taints:             node.kubernetes.io/not-ready:NoExecute
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  altarise-third
  AcquireTime:     <unset>
  RenewTime:       Tue, 26 Jul 2022 21:52:28 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 26 Jul 2022 21:50:31 +0000   Tue, 26 Jul 2022 21:50:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 26 Jul 2022 21:50:31 +0000   Tue, 26 Jul 2022 21:50:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 26 Jul 2022 21:50:31 +0000   Tue, 26 Jul 2022 21:50:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Tue, 26 Jul 2022 21:50:31 +0000   Tue, 26 Jul 2022 21:50:24 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
  InternalIP:  192.168.9.33
  Hostname:    altarise-third
Capacity:
  cpu:                8
  ephemeral-storage:  39987708Ki
  hugepages-2Mi:      0
  memory:             16392064Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  36852671632
  hugepages-2Mi:      0
  memory:             16289664Ki
  pods:               110
System Info:
  Machine ID:                 71cb71bdd3de420da1eb7f74c87e6caa
  System UUID:                0a078d88-4aee-4012-867d-f25d0a80d11c
  Boot ID:                    e95ddd98-3fc0-4d38-8cad-8488bf447226
  Kernel Version:             5.10.0-16-amd64
  OS Image:                   Debian GNU/Linux 11 (bullseye)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.4.13
  Kubelet Version:            v1.24.3
  Kube-Proxy Version:         v1.24.3
PodCIDR:                      172.16.2.0/24
PodCIDRs:                     172.16.2.0/24
Non-terminated Pods:          (2 in total)
  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                ------------  ----------  ---------------  -------------  ---
  kube-system                 kilo-89wwn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
  kube-system                 kube-proxy-m8c6r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                0 (0%)    0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
  hugepages-2Mi      0 (0%)    0 (0%)
Events:
  Type    Reason                   Age                    From             Message
  ----    ------                   ----                   ----             -------
  Normal  Starting                 10m                    kube-proxy       
  Normal  Starting                 9m27s                  kube-proxy       
  Normal  Starting                 116s                   kube-proxy       
  Normal  NodeHasSufficientMemory  9m35s (x8 over 9m48s)  kubelet          Node altarise-third status is now: NodeHasSufficientMemory
  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m18s)   kubelet          Node altarise-third status is now: NodeHasSufficientMemory
  Normal  RegisteredNode           2m2s                   node-controller  Node altarise-third event: Registered Node altarise-third in Controller
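For reference (not from the issue itself), applying the force-endpoint annotation shown above would typically look like the sketch below; the node name and endpoint are the ones from this description, and the location value is a placeholder:

# Force Kilo to use the node's public address/port as its WireGuard endpoint.
$ kubectl annotate node altarise-third \
    kilo.squat.ai/force-endpoint=141.94.168.41:51820 --overwrite

# With location granularity, nodes can also be grouped explicitly;
# "behind-nat" is a placeholder name, not a value taken from this cluster.
$ kubectl annotate node altarise-third \
    kilo.squat.ai/location=behind-nat --overwrite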

Kilo pod logs

The only logs available for this node come from the Kilo pod:

$ kubectl logs -n kube-system kilo-89wwn
Defaulted container "kilo" out of: kilo, install-cni (init)
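Not part of the original report, but given the NotReady condition above ("cni plugin not initialized"), the install-cni init container listed in that log header may also be worth checking; the -c flag selects a specific container:

# The kilo pod's install-cni init container writes the CNI config; its logs
# may explain why the kubelet still reports the CNI plugin as uninitialized.
$ kubectl logs -n kube-system kilo-89wwn -c install-cni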
Pseudow commented Jul 27, 2022

By the way, which is recommended: installing Flannel with Kilo in add-on (kilo-flannel) mode, or just Kilo on its own? And should I specify the node's public IP in KUBELET_EXTRA_ARGS?
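For the KUBELET_EXTRA_ARGS part of the question, a minimal sketch of pinning the node IP on a kubeadm/Debian host is shown below; the file path is the one the kubeadm packages usually source, and whether the private or public address belongs here is an assumption rather than something settled in this issue (Kilo is normally told about the public address via the endpoint annotations instead):

# /etc/default/kubelet (environment file sourced by the kubeadm systemd unit
# on Debian/Ubuntu; adjust the path for other distributions).
KUBELET_EXTRA_ARGS=--node-ip=192.168.9.33

# Apply the change:
$ sudo systemctl restart kubelet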

Pseudow closed this as completed Jul 30, 2022