kube-pi

Log and source for my journey in creating a Kubernetes cluster based on Raspberry Pi 4s.

Phase 1: Gather Hardware

How many | What
3 or more | Raspberry Pi 4B
1 | Gigabit network switch
3 or more | Ethernet cables (if not using wifi)
1 | Raspberry Pi rack kit
3 or more | Raspberry Pi 4 power adapters
-- or --
1 | USB power hub
3 or more | USB-C to USB-A cables

Phase 2: Construction

Phase 3: Raspberry Pi Image Prep and Install

I decided to use HypriotOS as the image for each Raspberry Pi in the k8s cluster.

Their images are prepped for k8s on ARM devices, including Docker.

Their images also support config on boot, so we can pre-configure users, networking and remote ssh as needed.

I found it very easy to use their own flash utility to write the image to the SD card.

  • step 1: Download flash script:
curl -LO https://github.com/hypriot/flash/releases/download/2.5.0/flash
chmod +x flash
sudo mv flash /usr/local/bin/flash
  • step 2: Insert your microSD card. 32GB or less is easiest; otherwise you need to perform some extra disk prep. If you don't have an SD card reader, go buy one.
  • step 3: Review 'phase3-image/static.yml' and make any changes you need. You will most likely want to change the hostname and static IP configuration as well as the user section.
  • step 4: Run the flash script against the image you want (one of):
flash -u static.yml https://github.com/hypriot/image-builder-rpi/releases/download/v1.12.0/hypriotos-rpi-v1.12.0.img.zip
flash -u static.yml https://github.com/hypriot/image-builder-rpi/releases/download/v1.12.3/hypriotos-rpi-v1.12.3.img.zip
flash -u static.yml https://github.com/lucashalbert/image-builder-rpi64/releases/download/20200225/hypriotos-rpi64-dirty.zip
  • step 5: Rinse and repeat for each Pi you want in your cluster. Make sure to update the hostname and static IP address in phase3-image/static.yml each time (see the sketch below).

This will flash the image from the URL above and then apply the config in static.yml. You may need to update the Hypriot image version.
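
If it helps, here is a rough sketch of scripting the per-Pi tweaks before each flash. It assumes you have put the literal tokens HOSTNAME_PLACEHOLDER and IP_PLACEHOLDER into a copy of static.yml (the repo's file uses real values, so adapt accordingly), and it reuses the 192.168.68.x addressing used elsewhere in this journal:

# Sketch: stamp a per-Pi hostname and static IP into a copy of static.yml, then flash.
NODE_NAME=kube-node-01
NODE_IP=192.168.68.202

sed -e "s/HOSTNAME_PLACEHOLDER/${NODE_NAME}/" \
    -e "s/IP_PLACEHOLDER/${NODE_IP}/" \
    phase3-image/static.yml > /tmp/${NODE_NAME}.yml

flash -u /tmp/${NODE_NAME}.yml \
  https://github.com/hypriot/image-builder-rpi/releases/download/v1.12.3/hypriotos-rpi-v1.12.3.img.zip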

Phase 4: Kubernetes Core Install

Note: install requires root privileges

-- On Each Raspberry Pi (master/node) --

Ensure legacy binaries are installed for iptables

sudo apt-get install -y iptables arptables ebtables

switch to legacy version of iptables (required for kubernetes)

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo update-alternatives --set arptables /usr/sbin/arptables-legacy
sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy

Trust the kubernetes APT key and add the official APT Kubernetes repository:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list

Install kubeadm:

apt-get update && apt-get install -y kubeadm

Rinse. Repeat on each Pi (or script it; a rough sketch follows).
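
A minimal sketch of running the same prep over ssh instead of logging in to each Pi by hand. It assumes the default HypriotOS user pirate and example node IPs; substitute your own:

# Sketch: run the iptables-legacy switch and kubeadm install on each node over ssh.
for host in 192.168.68.201 192.168.68.202 192.168.68.203; do
  ssh pirate@${host} 'sudo apt-get install -y iptables arptables ebtables && \
    sudo update-alternatives --set iptables /usr/sbin/iptables-legacy && \
    sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy && \
    sudo update-alternatives --set arptables /usr/sbin/arptables-legacy && \
    sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy && \
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
    echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
    sudo apt-get update && sudo apt-get install -y kubeadm'
done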

-- On Each Master (only 1 required) --

kubeadm init --pod-network-cidr 10.244.0.0/16

Note the pod network CIDR being set. If you are picky and want something else, you'll need to update kube-flannel.yml, as it's used there as well. See Kubernetes Networking below for more info.

Sit back and let it do its thing. If all goes well, you will eventually see a message telling you the master was initialized successfully. Critically, it provides you the command to run to join other nodes to the cluster. Write that shit down. It should look like:

kubeadm join --token=[unique token] [ip of master]

On the master still, finally run:

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

Phase 5: Kubernetes Networking

Weaveworks' weave-net offers a CNI plugin for Kubernetes that also supports ARM chipsets and, at this time, works out of the box.

Per their instructions:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

If all goes well, this should show a weave pod successfully running on each node in the kube-system namespace:

kubectl get pods --namespace=kube-system

Phase 6: Join Nodes

-- On Each Node (except Master) --

Run the join command (use your actual join command from above):

kubeadm join --token=bb14ca.e8bbbedf40c58788 192.168.0.34

Verify Nodes

using kubectl, check status of all nodes:

kubectl get nodes

If all went well, all nodes will appear with a status of Ready. Your master node should display a Role of master.

Note the version of Kubernetes. It can be helpful to know the version when troubleshooting.

Set up ssh to each node from a client (optional)

ssh-keygen

for host in 192.168.68.201 \
    192.168.68.202 \
    192.168.68.203 \
    192.168.68.204 \
    192.168.68.205 \
    192.168.68.127; \
    do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
    done

Troubleshooting cluster join

The join token provided at init is only good for 24 hours, so if you need to join a node after that period of time, you can ask for a new token.

On the master:

kubeadm token create

You may see some validation errors, but they can be ignored for now. With the new token, you can run this on the new node:

kubeadm join [master ip address]:6443 --token [new token] --discovery-token-unsafe-skip-ca-verification
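
If you'd rather not piece the join command together yourself, kubeadm can also print a complete one (CA hash included) when it creates the token:

# On the master: create a fresh token and print the full join command to run on the new node.
kubeadm token create --print-join-command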

Phase 7: Kubernetes Ingress

Decided to go with traefik: https://docs.traefik.io/user-guides/crd-acme/

Apply the traefik CRDs:

kubectl apply -f phase6-ingress/controller/traefik-crd.yaml 

deploy example services

kubectl apply -f phase6-ingress/services.yaml 

deploy example apps

kubectl apply -f phase6-ingress/deployment.yaml 
kubectl port-forward --address 0.0.0.0 service/traefik 8000:8000 8080:8080 443:4443 -n default

This has some good steps for testing out your cluster: https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/

Access cluster resources from outside the cluster

kubectl proxy

kubectl proxy [port]

All traffic over the port is sent to the cluster (specifically the API server).

port forward

kubectl port-forward [service or pod] [port]:[target-port]

Build proxy URLs manually:

https://192.168.68.201:6443/api/v1/namespaces/default/services/http:whoami:web/proxy

Use NodePort load balancers

This is the avenue you'll most likely use for lab/personal/development use. This type of Service opens a defined port on every node in your cluster. You can view the opened port by describing the Service. This is ideal as it allows you to set up something like HAProxy in front of your cluster nodes, defining proxy paths to the port on each node (a rough sketch follows the example path below).

Example NodePort type is at: /phase7-ingress/example-http/load-balancer.yaml
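
For reference, a minimal sketch of the shape such a NodePort Service takes, applied inline with a heredoc. The app label whoami and nodePort 31090 are illustrative assumptions (31090 matches the HAProxy example later), not necessarily what the repo's load-balancer.yaml contains:

# Sketch: a NodePort Service exposing port 80 of pods labeled app=whoami on port 31090 of every node.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: whoami-nodeport
spec:
  type: NodePort
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 80
      nodePort: 31090
EOF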

Phase 9: Ingress

Phase 9a: HAProxy with DNSMasq for External Ingress

From your entry point, either a local client or a dedicated Linux box:

Install HAProxy

sudo apt-get install haproxy

HAProxy config is at:

/etc/haproxy/haproxy.cfg

View output for haproxy:

journalctl -u haproxy.service --since today

If you set up a Service of type NodePort and Kubernetes started listening on port 31090, you can set up a simple load balancer to proxy traffic to that service using the following in your haproxy.cfg:

frontend k8s-whoami-proxy
        bind    [local client ip]:[port to expose, usually 80 or 443]
        mode    tcp
        option  tcplog
        default_backend k8s-whoami

backend k8s-whoami
        mode    tcp
        option  tcp-check
        balance roundrobin
        default-server  inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
        server  k8s-node-01 [node1-ip]:31090 check
        server  k8s-node-02 [node2-ip]:31090 check
        server  k8s-node-03 [node3-ip]:31090 check
        server  k8s-node-04 [node4-ip]:31090 check

Replace [node1-ip] and so on with the actual IPs of the nodes in your Kubernetes cluster.

Install dnsmasq

apt-get install dnsmasq dnsutils
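
A minimal sketch of pointing dnsmasq at the HAProxy frontend, assuming a made-up local domain (kube.home) and that your clients use this box for DNS; substitute your own domain and the HAProxy host's IP:

# Sketch: resolve anything under .kube.home to the HAProxy host, then restart dnsmasq.
echo 'address=/kube.home/[local client ip]' | sudo tee /etc/dnsmasq.d/kube-pi.conf
sudo systemctl restart dnsmasq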

Phase 9b: MetalLB, Nginx Ingress and Cert-Manager

Install helm:
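
One way to install Helm 3 on the machine you run kubectl from is the upstream install script from the Helm docs (an assumption about how you want to install it; a distro package works too):

# Sketch: fetch and run the official Helm 3 install script, then confirm the client version.
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version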

Install MetalLB for load balancing:

helm install metallb stable/metallb --namespace kube-system \
  --set configInline.address-pools[0].name=default \
  --set configInline.address-pools[0].protocol=layer2 \
  --set configInline.address-pools[0].addresses[0]=192.168.68.220-192.168.68.250

Install Nginx - Web Proxy for ingress

helm install nginx-ingress stable/nginx-ingress --namespace kube-system \
  --set controller.image.repository=quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm \
  --set controller.image.tag=0.25.1 \
  --set controller.image.runAsUser=33 \
  --set defaultBackend.enabled=false

Monitor deployment of nginx services:

kubectl --namespace kube-system get services -o wide -w nginx-ingress-controller

Note the EXTERNAL-IP provided. It should be the first IP from the pool of IPs we set above. Notice nginx has exposed 443 and 80, so while right now we get a 404 back from curl or in a browser, it's working.
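
A quick way to confirm that 404 is coming from the ingress controller (the IP below is just the first address of the example MetalLB pool configured above; use your actual EXTERNAL-IP):

# Expect an HTTP 404 from the ingress controller while no Ingress objects match.
curl -v https://192.168.68.220/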

Install Cert-Manager

Install the CRDs:

kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml

Add jetstack helm repo

helm repo add jetstack https://charts.jetstack.io && helm repo update

Install cert-manager through helm

helm install cert-manager jetstack/cert-manager --namespace kube-system

Check status of cert-manager

kubectl get pods -n kube-system -l app.kubernetes.io/instance=cert-manager -o wide

Example ingress object using cert-manager:

./phase7-ingress/cert-manager/example-ingress.yml
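
Ingress objects using cert-manager typically reference an Issuer or ClusterIssuer. In case the example assumes one already exists, here is a rough sketch of a Let's Encrypt ClusterIssuer for the cert-manager 0.12 (v1alpha2) API installed above; the name, email and account-key secret name are placeholders:

# Sketch: a ClusterIssuer that solves ACME HTTP-01 challenges through the nginx ingress class.
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
EOF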

Phase 8: Kubernetes Dashboard

https://github.com/kubernetes/dashboard

to deploy:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

we first need to set up cheap local admin access with a service account.

create a service account that has a cluster-admin role

kubectl apply -f phase8-dashboard/admin-sa.yaml

describe the new local-admin service account to get its access token secret name:

kubectl describe sa local-admin

copy the first "Tokens" secret name you see and describe it like so:

kubectl describe secret local-admin-token-9whqp

copy down the token, as you will use it later to log in to the dashboard
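
If you prefer a one-liner, the token can also be pulled straight out of the secret (the secret name below is the example from above; yours will have a different suffix):

# Print just the decoded bearer token for the local-admin service account.
kubectl get secret local-admin-token-9whqp -o jsonpath='{.data.token}' | base64 --decode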

to access the dashboard without setting up any ingress:

kubectl proxy

go to the following url:

https://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

this is also a good way to gain access to any service running in your cluster without setting up ingress.

Phase 9: Monitoring

Good old nagios (sort of)

I wanted to set up a monitoring solution that was lightweight and free. Nagios has been around forever and, as it turns out, you can get the agents and server to work on a Raspberry Pi.

I am working on containerizing the Nagios Core server.

But for now I have it running on bare metal on a separate Pi.

Nagios Core

Nagios nrpe (for nodes you want to monitor)

The scripts here are to get the nrpe agent + Nagios plugins installed on each Pi running as a Kubernetes node.

It involves compiling the source from scratch in order to install on a Raspberry Pi. It can take some time.

Copy over the script to each node and then run it. It installs nrpe and the core plugins.

From where you have checked out this repo:

scp phase8-nagios/install_nrpe_source.sh pirate@[node ip]:/tmp

ssh into each node and run the script:

ssh pirate@[node ip]
cd /tmp
chmod +x install_nrpe_source.sh
./install_nrpe_source.sh

At this point it's about setting up your Nagios Core server to monitor these nodes through check_nrpe.
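
A quick smoke test from the Nagios Core server, assuming check_nrpe ended up in the default Nagios plugin path (adjust the path if you installed elsewhere):

# With no command argument, check_nrpe just asks the agent for its version; any reply means the agent is reachable.
/usr/local/nagios/libexec/check_nrpe -H [node ip]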

Prometheus

helm install prometheus-operator stable/prometheus-operator --namespace monitoring

Phase 10: Kubernetes Local/Remote Volume Provisioning

Simple: Direct Attach Storage over USB

One can simply plug an external drive of whatever size they like into a node in your kube-pi. I am going to use my master.

Find disk

fdisk -l

Prep disk

fdisk /dev/sda

In fdisk: n (new partition), accept the defaults, then w (write). Then format the new partition:

mkfs.ext4 /dev/sda1

mount it and get its UUID:

sudo mkdir -p /mnt/ssd
mount /dev/sda1 /mnt/ssd
blkid

add it to /etc/fstab so it mounts at boot:

sudo nano /etc/fstab

append (using your own UUID from blkid):

UUID=29c3398e-e3dd-4497-880f-929acd6c128e /mnt/ssd ext4 defaults 0 0

Share our disk via NFS

sudo apt-get install nfs-kernel-server -y

share our disk:

sudo nano /etc/exports

append:

/mnt/ssd *(rw,no_root_squash,insecure,async,no_subtree_check,anonuid=1000,anongid=1000)
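
After editing /etc/exports, reload the export table so the share is actually published (a standard NFS step, not specific to this repo):

# Re-read /etc/exports and list what is now being shared.
sudo exportfs -ra
sudo exportfs -v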

Expose the NFS share using a persistent volume:

kubectl apply -f phase10-persistent-storage/ex-nfs-pv.yml
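
For reference, a minimal sketch of the shape such an NFS-backed PersistentVolume takes; the name, size and server IP are illustrative and not necessarily what ex-nfs-pv.yml contains:

# Sketch: a PersistentVolume pointing at the NFS export on the master.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-ssd-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.68.201
    path: /mnt/ssd
EOF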

On each worker node:

sudo apt-get install nfs-common -y
sudo mkdir /mnt/ssd
sudo chown -R pi:pi /mnt/ssd

mount at startup

sudo nano /etc/fstab

append:

192.168.68.201:/mnt/ssd /mnt/ssd nfs rw 0 0

mount now:

mount -t nfs 192.168.68.201:/mnt/ssd /mnt/ssd

Phase 11: Using kube-pi

Helpful Kubernetes commands

Set up local kubectl to connect to the remote cluster

scp -r pirate@[master ip]:/home/pirate/.kube .
cp -r .kube $HOME/

bam!

Switch namespace

kubectl config set-context --current --namespace=[my-namespace]

Auto complete for kubectl

source <(kubectl completion bash) # setup autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.

Attach to a container that supports a tty. The net-tool pod is a swiss army knife of network troubleshooting tools in a containerized environment.

kubectl apply -f phase5-networking/net-tool.yml

This will run a tty capable pod that will wait for you

kubectl attach -it net-tool -c net-tool

That will attach to the net-tool container in the net-tool pod
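
For example, once the pod is running you can exec into it for a quick check of in-cluster DNS (assuming the image ships nslookup):

# Resolve the in-cluster API service name from inside the net-tool pod.
kubectl exec -it net-tool -- nslookup kubernetes.default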

Export kubernetes object as yaml

kubectl get service kubernetes-dashboard -n kubernetes-dashboard --export -o yaml > phase8-dashboard/dashboard-service.yaml

List listening ports on a given host:

sudo netstat -tulpn | grep LISTEN

clean up completed transmission files:

find /mnt/ssd/media/downloads/transmission -type f -mtime +7 -exec rm -f {} \;

find the architecture of a binary (ELF class byte at offset 4): 01 for 32-bit, 02 for 64-bit:

od -An -t x1 -j 4 -N 1 file

exec into a running container:

kubectl exec --stdin --tty -n media sonarr-745fcdcfbd-rjhtl -- /bin/bash
kubectl exec --stdin --tty -n media transmission-transmission-openvpn-56676b4d77-5lkt9 -- /bin/bash

Plex media server log location: /mnt/ssd/media/Library/Application Support/Plex Media Server/Logs
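
Liveness and readiness probes for the sonarr container, which call its API using the ApiKey pulled out of its config.xml: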

livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - curl "https://localhost:8989/sonarr/api/health?ApiKey=$(sed -ne '/ApiKey/{s/.*<ApiKey>\(.*\)<\/ApiKey>.*/\1/p;q;}' </config/config.xml)"
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - curl "https://localhost:8989/sonarr/api/system/status?ApiKey=$(sed -ne '/ApiKey/{s/.*<ApiKey>\(.*\)<\/ApiKey>.*/\1/p;q;}' </config/config.xml)"
  initialDelaySeconds: 30
  periodSeconds: 10
