rosehgal/k8s-In-30Mins

K8s in 30 mins

This is not a comprehensive guide to learning Kubernetes from scratch; rather, it is a small guide/cheat sheet to quickly set up and run Kubernetes and deploy a very simple application on a single workload VM. This repo can serve as a quick learning manual for understanding Kubernetes.

Prerequisite

Table of Contents:

  1. Setting up Kubernetes cluster in VM : 1 VM cluster
    • Spinning up a virtual machine with Vagrant : 2 GB RAM + 2 CPU cores (at least)
    • Understanding:
      • kubeadm
      • kubelet
      • kubectl
  2. Kubernetes pods: How are they different from Docker containers.
  3. Kubernetes Resource:
  4. Kubernetes network manager
    • I will pick up the plugin called Flannel.
  5. Stateless Workload
    • Replicasets & Deployments
  6. Stateful Workloads
    • Persistent Volumes
    • Persistent Volume Claims
  7. Deploying a simple Java Spring Boot app in the Kubernetes cluster
    • Java app deployment with MySQL PV & PVC.
    • Setting up an LB service to connect to the Spring Boot application.
      • A short discussion about the Cloud Controller Manager (CCM)
  8. Understanding advanced Kubernetes resources:
  9. Next steps

Setting up Kubernetes cluster in VM

  1. Download the Vagrantfile.
  2. Download VirtualBox and install it from here.
  3. Download and install Vagrant.
  4. In the terminal, run the two commands below to get the VM up and running, without any config 😄
    # In the same directory where you have downloaded Vagrantfile, run
    vagrant up
    vagrant ssh
    This will download the Ubuntu box image and do the entire setup for you with the help of VirtualBox. It just needs VirtualBox installed.
  5. The Vagrantfile comes preconfigured with kubeadm, kubelet and kubectl.
  6. Check that the Kubernetes tooling is installed correctly.
    root@vagrant:/home/vagrant# kubectl version -o json
    {
      "clientVersion": {
        "major": "1",
        "minor": "19",
        "gitVersion": "v1.19.2",
        "gitCommit": "f5743093fd1c663cb0cbc89748f730662345d44d",
        "gitTreeState": "clean",
        "buildDate": "2020-09-16T13:41:02Z",
        "goVersion": "go1.15",
        "compiler": "gc",
        "platform": "linux/amd64"
      },
      "serverVersion": {
        "major": "1",
        "minor": "19",
        "gitVersion": "v1.19.2",
        "gitCommit": "f5743093fd1c663cb0cbc89748f730662345d44d",
        "gitTreeState": "clean",
        "buildDate": "2020-09-16T13:32:58Z",
        "goVersion": "go1.15",
        "compiler": "gc",
        "platform": "linux/amd64"
      }
    }
  7. Start the Kubernetes cluster master node.
    # This will spin up Kubernetes cluster with CIDR: 10.244.0.0/16
    root@vagrant:/home/vagrant# kubeadm init --pod-network-cidr=10.244.0.0/16
    kubeadm join 10.0.2.15:6443 --token 3m5dsc.toup1iv7670ya7wc --discovery-token-ca-cert-hash sha256:73f4983d43f9618522eaccf014205f969e3bacd76c98dd0c
    
    root@vagrant:/home/vagrant# mkdir -p $HOME/.kube
    root@vagrant:/home/vagrant# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    root@vagrant:/home/vagrant# sudo chown $(id -u):$(id -g) $HOME/.kube/config
  8. Connect other VMs to this cluster: not required in the case of a single-VM cluster. For this to work correctly, make sure:
    • VM-to-VM connectivity is there.
    • All three kube-* tools are installed in the VM.
    kubeadm join 10.0.2.15:6443 --token 3m5dsc.toup1iv7670ya7wc --discovery-token-ca-cert-hash sha256:73f4983d43f9618522eaccf014205f969e3bacd76c98dd0c
  9. At this point, Kubernetes is installed and the cluster master is up, but we still need an agent to provision and manage the network for new nodes for us. This is where Flannel comes to the rescue. Install Flannel to manage the Docker network for pods.
    kubectl apply -f \
        https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  10. This step applies if we wish to use our master node as a worker as well, which is yes in our case. The trailing - in the command removes the NoSchedule taint, so pods can be scheduled on the master:
    root@vagrant:/home/vagrant# kubectl taint nodes $(hostname) node-role.kubernetes.io/master:NoSchedule-
    
    # If everything goes well, you will see something like this.
    root@vagrant:/home/vagrant# kubectl get node
    NAME      STATUS   ROLES    AGE     VERSION
    vagrant   Ready    master   3m40s   v1.19.2
    

Run all the commands from a root shell.
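For reference, a Vagrantfile satisfying the 2 GB RAM / 2 CPU requirement from step 1 might look roughly like the sketch below. This is a hypothetical example with an assumed box name and a commented-out provisioning hook; the repo's actual Vagrantfile may differ.

```ruby
# Hypothetical sketch -- the actual Vagrantfile in this repo may differ.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"   # assumed Ubuntu box image

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048   # 2 GB RAM (the minimum recommended above)
    vb.cpus   = 2      # 2 CPU cores
  end

  # A provisioning script would install kubeadm, kubelet and kubectl here,
  # e.g. (hypothetical file name):
  # config.vm.provision "shell", path: "install-k8s.sh"
end
```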

What are kube*

Kubernetes follows a client-server model, similar to the way Docker runs. The Kubernetes server exposes the Kubernetes API, and each of kubeadm, kubelet and kubectl connects with this API server to get its task done. In the master-slave model, there are two entities:

  • Control Plane
  • Worker Nodes

Control Plane : Connects with worker nodes for resource allocation.
Worker nodes : Cluster entities that actually run the allocated tasks and Pods.

  1. kubeadm:
    • Sets up the cluster.
    • Connects various worker nodes together.
  2. kubectl:
    • It is the client CLI.
    • Connects with the control plane's Kubernetes API server and sends execution requests to the control plane.
  3. kubelet:
    • Receives requests from the control plane.
    • Runs on worker nodes.
    • Runs tasks on the worker nodes.
    • Maintains the Pod lifecycle. Not just Pods, but the lifecycle of all Kubernetes resources.

Kubernetes pods

  • Pods can run multiple containers.
  • Pods abstract multiple containers into a single unit.
  • If two containers in a pod both try to expose a service on the same port, the second one won't spin up and will fail.
  • The unit of a Kubernetes workload is called a Pod.
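The same-port rule above can be demonstrated with a sketch like this (the pod and container names are made up for illustration). Because all containers in a pod share one network namespace, the second nginx typically fails with an address-in-use error and the pod ends up crash-looping:

```yaml
# Hypothetical example: both containers try to bind port 80 inside the
# shared pod network namespace, so the second one keeps crashing.
apiVersion: v1
kind: Pod
metadata:
  name: port-clash        # assumed name, for illustration only
spec:
  containers:
  - name: nginx-one
    image: nginx          # listens on port 80
  - name: nginx-two
    image: nginx          # also tries to bind port 80 -> fails to start
```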

How to create a pod

You can create a simple nginx pod with the following YAML spec. Save this in a file named pod.yaml.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
Key name                Key Description
apiVersion              Kubernetes server API version
kind                    Kubernetes resource type: Pod
metadata.name           Name of the Kubernetes Pod
spec.containers.name    Name of the container which will run in the Pod
spec.containers.image   Name of the Docker image to run

Run this Pod spec with: kubectl apply -f pod.yaml

root@vagrant:/home/vagrant/kubedata# kubectl apply -f pod.yaml
pod/nginx created

# If everything goes OK, you will see something like this.

root@vagrant:/home/vagrant/kubedata# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          43s
root@vagrant:/home/vagrant/kubedata#

Use : kubectl get pods to get the list of all Pods.

  1. Running a command in a container inside a Pod: kubectl exec -it <pod_name> -c <container_name> -- <command>
    root@vagrant:/home/vagrant/kubedata# kubectl exec -it nginx -c nginx -- whoami
    root
    
    root@vagrant:/home/vagrant/kubedata# kubectl exec -it nginx -c nginx -- /bin/sh
    # cat /etc/*-release
    PRETTY_NAME="Debian GNU/Linux 10 (buster)"
    NAME="Debian GNU/Linux"
    VERSION_ID="10"
    VERSION="10 (buster)"
    VERSION_CODENAME=buster
    ID=debian
    HOME_URL="https://www.debian.org/"
    SUPPORT_URL="https://www.debian.org/support"
    BUG_REPORT_URL="https://bugs.debian.org/"
    
  2. Running multiple containers in one pod.
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      - name: curl
        image: appropriate/curl
        stdin: true
        tty: true
        command: ["/bin/sh"]
    Save this into pod-with-two-containers.yml.
    Run this : kubectl apply -f pod-with-two-containers.yml
  3. Delete a running pod: kubectl delete -f pod-with-two-containers.yml. This will remove the pod described in the spec file.
  4. Containers in a Pod share one network namespace, so a container can connect to another container in the same pod via localhost or the pod's hostname (metadata.name).
    root@vagrant:/home/vagrant/kubedata# kubectl exec -it nginx -c curl -- /bin/sh
    # curl nginx
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http:https://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http:https://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
    #

Kubernetes Resources

Pods

  • Fundamental unit of k8s cluster.
  • Abstraction for container/multiple-containers, running under single name.
  • Discussed in detail : here

Deployments

  • A Deployment provides declarative updates for Pods.
  • The configuration state in the yml file defines how the pods will run in the cluster. It can specify:
    • Replicas
    • Resource allocation
    • Connection with Volumes, etc.
    • We will see an example once we get to ReplicaSets.
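As a sketch of the declarative options listed above, a Deployment can pin both the replica count and per-container resource allocation. The names and the request/limit values below are assumed for illustration:

```yaml
# Hypothetical sketch: a Deployment declaring replicas and resource
# requests/limits (all names and values assumed for illustration).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-limited
spec:
  replicas: 2                    # Replicas
  selector:
    matchLabels:
      app: nginx-limited
  template:
    metadata:
      labels:
        app: nginx-limited
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:               # Resource allocation
          requests:              # scheduler reserves at least this much
            memory: "64Mi"
            cpu: "100m"
          limits:                # container is capped at this much
            memory: "128Mi"
            cpu: "250m"
```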

Replicasets

  1. Run Deployments in replicas: a ReplicaSet keeps the desired number of identical Pods running.

  2. Create a file with the following specification.

    apiVersion: apps/v1
    
    kind: Deployment
    metadata:
      name: nginx
    
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx-app
      template:
        metadata:
          labels:
            app: nginx-app
        spec:
          containers:
          - name: nginx
            image: nginx

    Notice the difference.

    -- kind: Pod
    ++ kind: Deployment
    
    ++ spec:
    ++  replicas: 3
    ++  selector:
    ++    matchLabels:
    ++      app: nginx-app
  3. Remove existing pods (if any) with kubectl delete pods --all, and create the deployment.

    root@vagrant:/home/vagrant/kubedata# kubectl apply -f deployment-replica.yml
    deployment.apps/nginx created
    
    root@vagrant:/home/vagrant/kubedata# kubectl get deployments
    NAME    READY   UP-TO-DATE   AVAILABLE   AGE
    nginx   0/3     3            0           7s
    
    root@vagrant:/home/vagrant/kubedata# kubectl get deployments -w
    NAME    READY   UP-TO-DATE   AVAILABLE   AGE
    nginx   1/3     3            1           14s
    nginx   2/3     3            2           20s
  4. Get the list of all deployments: kubectl get deployments or kubectl get deploy

  5. Get the list of all replicaset : kubectl get replicaset or kubectl get rs

    root@vagrant:/home/vagrant/kubedata# kubectl get pods
    NAME                    READY   STATUS    RESTARTS   AGE
    nginx-d6ff45774-f84l8   1/1     Running   0          4m59s
    nginx-d6ff45774-gzxfz   1/1     Running   0          4m59s
    nginx-d6ff45774-t69mw   1/1     Running   0          4m59s
    
    root@vagrant:/home/vagrant/kubedata# kubectl get deploy
    NAME    READY   UP-TO-DATE   AVAILABLE   AGE
    nginx   3/3     3            3           162m
    
    root@vagrant:/home/vagrant/kubedata# kubectl get replicaset
    NAME              DESIRED   CURRENT   READY   AGE
    nginx-d6ff45774   3         3         3       162m
    
    root@vagrant:/home/vagrant/kubedata#
  6. Print a detailed description of the selected resources, including related resources such as events or controllers: kubectl describe <resource_type> <resource_name>

  7. Get the deployment configuration in YAML format: kubectl get deployment nginx -o yaml (use -o json for JSON).

Services

  • Logical abstraction of Pods and the policies to access them.
  • They enable loose coupling between dependent Pods, e.g.:
    • Open ports.
    • Security policies between Pod interactions, etc.
  • Can be created independently of the Pod declaration, but usually services linked to a Pod are present in the same spec file.
  • Let's create a simple service to expose the nginx service port to the host machine. File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  • The Service declaration starts by augmenting the existing deployment/pod spec after a --- separator.
  • A Service and a Pod can share the same name.
    • Resources of the same type must have names that are unique amongst themselves.
  • The above service exposes port 80 on the service, specified by spec.ports.port, and forwards to port 80 of the target pod, specified by spec.ports.targetPort.
root@vagrant:/home/vagrant/kubedata# kubectl apply -f nginx-service.yml
deployment.apps/nginx unchanged
service/nginx created

root@vagrant:/home/vagrant/kubedata#
  • Once the service is created:
    • Run : kubectl get services to get the list of services.
      root@vagrant:/home/vagrant/kubedata# kubectl get services
      NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
      kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   2d5h
      nginx        ClusterIP   10.104.178.240   <none>        80/TCP    49s
      The Cluster IP is the IP interface of the Pod abstraction on the host. curl-ing the cluster IP will connect us to the Pod.
      root@vagrant:/home/vagrant/kubedata# curl 10.104.178.240
      <!DOCTYPE html>
      <html>
      <head>
      <title>Welcome to nginx!</title>
      <style>
          body {
              width: 35em;
              margin: 0 auto;
              font-family: Tahoma, Verdana, Arial, sans-serif;
          }
      </style>
      </head>
      <body>
      <h1>Welcome to nginx!</h1>
      <p>If you see this page, the nginx web server is successfully installed and
      working. Further configuration is required.</p>
      
      <p>For online documentation and support please refer to
      <a href="http:https://nginx.org/">nginx.org</a>.<br/>
      Commercial support is available at
      <a href="http:https://nginx.com/">nginx.com</a>.</p>
      
      <p><em>Thank you for using nginx.</em></p>
      </body>
      </html>
    • Run : kubectl get endpoints or kubectl get ep to get list of exposed endpoints.
      root@vagrant:/home/vagrant/kubedata# kubectl get ep
      NAME         ENDPOINTS                                    AGE
      kubernetes   10.0.2.15:6443                               2d5h
      nginx        10.244.0.10:80,10.244.0.8:80,10.244.0.9:80   2m
      Since I am running 3 different replicas, we are seeing 3 different Pod IPs.
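To make the port vs. targetPort distinction above concrete, here is a hedged sketch (service name assumed, reusing the nginx-app label from the deployment above) where the two ports differ, i.e. the service listens on one port and forwards to another:

```yaml
# Hypothetical sketch: the service listens on 8080 on its cluster IP but
# forwards to the container's port 80, showing port != targetPort.
apiVersion: v1
kind: Service
metadata:
  name: nginx-alt       # assumed name, for illustration only
spec:
  selector:
    app: nginx-app      # matches the deployment's pod label above
  ports:
  - protocol: TCP
    port: 8080          # port exposed on the service's cluster IP
    targetPort: 80      # port the nginx container actually listens on
```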

Loadbalancer Service

  • Notice External IP in:
    root@vagrant:/home/vagrant/kubedata# kubectl get services
    NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
    kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   2d5h
    nginx        ClusterIP   10.104.178.240   <none>        80/TCP    49s
  • Since we are running this in a local setup, we don't have any CCM (Cloud Controller Manager), which would provision an external IP for us to connect to the service running inside the Pod.
    • In the case of Azure or AWS cloud providers, the CCM provisions and links external IPs for us.
  • So let's do a hack here.
    • Update nginx service to LoadBalancer. File
      apiVersion: v1
      kind: Service
      metadata:
        name: nginx
      spec:
        type: LoadBalancer
        selector:
          app: nginx-app
        ports:
        - protocol: TCP
          port: 80
          targetPort: 80
      Notice:
      spec:
      ++ type: LoadBalancer
    • Apply the config: kubectl apply -f nginx-service-lb.yml
      root@vagrant:/home/vagrant/kubedata# kubectl get svc
      NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
      kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        2d5h
      nginx        LoadBalancer   10.104.178.240   <pending>     80:32643/TCP   17m
      Now the state is pending :)
    • Run netstat -nltp, and notice the kube-proxy entries:
      ++ tcp        0      0 0.0.0.0:32643           0.0.0.0:*               LISTEN      13095/kube-proxy
         tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      7024/kubelet
      ++ tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      13095/kube-proxy
      See the magic.
      root@vagrant:/home/vagrant/kubedata# curl 0.0.0.0:32643
      <!DOCTYPE html>
      <html>
      <head>
      <title>Welcome to nginx!</title>
      <style>
          body {
              width: 35em;
              margin: 0 auto;
              font-family: Tahoma, Verdana, Arial, sans-serif;
          }
      </style>
      </head>
      <body>
      <h1>Welcome to nginx!</h1>
      <p>If you see this page, the nginx web server is successfully installed and
      working. Further configuration is required.</p>
      
      <p>For online documentation and support please refer to
      <a href="http:https://nginx.org/">nginx.org</a>.<br/>
      Commercial support is available at
      <a href="http:https://nginx.com/">nginx.com</a>.</p>
      
      <p><em>Thank you for using nginx.</em></p>
      </body>
      </html>
      • The LoadBalancer exposed the service endpoints outside the Kubernetes cluster IP interface, and on our vagrant host we can now access it directly :)
      • The next challenge is to expose this kube-proxy interface to the host machine. Once that hack is done, we can access the service running in the Pod (replica-set deployment) from our host interface directly.
      • This is how the network looks now. The port 32643 is now exposed through kube-proxy on the host/control-plane node.
                                                          Kubernetes Cluster
                                           +---------------------------------------------+
                                           |                               POD           |
                                           |                           +---------+       |
                                           |                    +------>  NGINX  |       |
                                           |                    |      +---------+       |
                                           |           LB       |                        |
                     +--------------+      |    +---------------+          POD           |
        0.0.0.0:32643|  Kube Proxy  |80    |    |               |      +---------+       |
                <------------------>----------->+    SERVICE    +------>  NGINX  |       |
                     |              |      |  80|               |      +---------+       |
                     +--------------+      |    +---------------+                        |
                           HOST            |                    |          POD           |
                                           |                    |      +---------+       |
                                           |                    +------>  NGINX  |       |
                                           |                           +---------+       |
                                           +---------------------------------------------+

Stateless workloads

  • The Deployments and ReplicaSets that we have deployed so far are stateless workloads.
  • There is no state-related information stored in the Pods/Service, so a request from kube-proxy via the Service resource can be routed to any of the Pods in the cluster.
  • This constitutes a stateless workload.
  • The next section is about creating a stateful workload.
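As a preview of what the stateful workload section needs, a minimal PersistentVolumeClaim sketch looks like this (the claim name and storage size are assumed for illustration). A stateful pod mounts such a claim so its data survives pod restarts:

```yaml
# Hypothetical sketch: a claim requesting 1Gi of storage that a
# stateful pod (e.g. MySQL) can mount as a volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim      # assumed name, for illustration only
spec:
  accessModes:
  - ReadWriteOnce       # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi      # assumed size
```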