More info: https://kubernetes.io/docs/setup/independent/install-kubeadm/
Master node(s):
TCP 6443*        Kubernetes API server
TCP 2379-2380    etcd server client API
TCP 10250        Kubelet API
TCP 10251        kube-scheduler
TCP 10252        kube-controller-manager
TCP 10255        Read-only Kubelet API
Worker node(s):
TCP 10250        Kubelet API
TCP 10255        Read-only Kubelet API
TCP 30000-32767  NodePort Services
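These ports must be reachable between the machines in the cluster. As a rough sketch only (assuming an Ubuntu master protected by ufw; adapt to whatever firewall you actually use), opening the master-side ports could look like:
ufw allow 6443/tcp
ufw allow 2379:2380/tcp
ufw allow 10250:10252/tcp
ufw allow 10255/tcp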
Install Docker on Ubuntu. Option 1, the Ubuntu docker.io package:
apt-get update
apt-get install -y docker.io
Option 2, Docker CE from Docker's apt repository:
apt-get update
apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \
"deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable"
apt-get update && apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')
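As an optional sanity check, confirm that Docker installed and that the daemon is running before moving on:
systemctl status docker
docker version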
You will install these packages on all of your machines:
kubeadm: the command to bootstrap the cluster.
kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
kubectl: the command line util to talk to your cluster.
E.g.: Ubuntu installation
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
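Optionally, holding these packages prevents a routine apt upgrade from unintentionally moving the cluster to a newer Kubernetes version:
apt-mark hold kubelet kubeadm kubectl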
Make sure that the cgroup driver used by kubelet is the same as the one used by Docker. Verify that your Docker cgroup driver matches the kubelet config:
docker info | grep -i cgroup
cat << EOF > /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
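After changing /etc/docker/daemon.json, restart Docker so the new cgroup driver takes effect, then re-check it:
systemctl restart docker
docker info | grep -i cgroup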
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
If the Docker cgroup driver and the kubelet config don’t match, change the kubelet config to match the Docker cgroup driver. The flag you need to change is --cgroup-driver. If it’s already set, you can update like so:
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Otherwise, you will need to open the systemd file and add the flag to an existing environment line. Then restart kubelet:
systemctl daemon-reload
systemctl restart kubelet
Disable swap (kubeadm requires swap to be off):
swapoff -a
Also update the swap entry in /etc/fstab so swap stays off after a reboot.
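One way to do that (a sketch; it assumes the fstab swap entry contains the word "swap" surrounded by spaces, so review /etc/fstab before running it) is to comment the line out:
sed -i '/ swap / s/^/#/' /etc/fstab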
E.g.: CentOS/RHEL installation
yum update
yum install -y docker
systemctl enable docker
systemctl start docker
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Disable SELinux (or configure it to allow Kubernetes):
setenforce 0
(Set SELINUX=permissive in /etc/selinux/config so the change persists across reboots.)
yum install -y kubeadm kubelet kubectl
systemctl enable kubelet
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
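These bridge sysctls only exist once the br_netfilter kernel module is loaded, so on some systems it is worth loading it explicitly and re-applying the settings:
modprobe br_netfilter
sysctl --system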
kubeadm init --pod-network-cidr=10.244.0.0/16
Note: if kubeadm init fails because swap is still enabled, turn swap off (swapoff -a) and run it again.
Setup user account
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
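As a quick optional check that kubectl is now configured for this user:
kubectl cluster-info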
Set up pod networking with the Flannel CNI plugin:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
kubectl get pods
kubectl get pods --all-namespaces
Adding nodes
kubeadm join --token 388ac2.59769db11b349455 192.168.1.150:6443 --discovery-token-ca-cert-hash sha256:a2a17bdfb29de8c986406c362ca5513b0a94883238ce25c80a1d29ea3a66e70e
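If the original token has expired or was not saved, a fresh join command can be generated on the master:
kubeadm token create --print-join-command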
From kube Master
kubectl get nodes
Setup kubectl autocompletion
kubectl completion -h
echo "source <(kubectl completion bash)" >> ~/.bashrc
kube-apiserver: the component on the master that exposes the Kubernetes API. It is the front end for the Kubernetes control plane and answers all API calls (cluster state is kept in the etcd key-value store).
kube-scheduler: determines which node each newly created pod should run on.
cloud-controller-manager runs controllers that interact with the underlying cloud providers. The following controllers have cloud provider dependencies:
Node controller: for checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding.
Route controller: for setting up routes in the underlying cloud infrastructure.
Service controller: for creating, updating and deleting cloud provider load balancers.
Volume controller: for creating, attaching, and mounting volumes, and interacting with the cloud provider to orchestrate volumes.
These controllers are included in kube-controller-manager:
Node controller: responsible for noticing and responding when nodes go down.
Replication controller: responsible for maintaining the correct number of pods for every replication controller object in the system.
Endpoints controller: populates the Endpoints object (that is, joins Services & Pods).
Service account & token controllers: create default accounts and API access tokens for new namespaces.
kubelet: an agent that runs on each node in the cluster. It makes sure that containers are running in a pod.
kube-proxy enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.
The container runtime is the software that is responsible for running containers. Kubernetes supports two runtimes: Docker and rkt.
- Persistent entities in the Kubernetes system.
- Kubernetes uses these to represent the state of the cluster.
- Describe: what applications are running, which nodes those applications run on, and policies around those applications.
- Kubernetes objects are "records of intent".
Object Spec
- Provided to Kubernetes.
- Describes the desired state of the object.
Object Status
- Provided by Kubernetes.
- Describes the actual state of the object.
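Both parts are visible in an object's YAML. For example (assuming a deployment named nginx already exists; the name is only illustrative), the .spec and .status sections can be compared side by side:
# .spec   = desired state provided to Kubernetes
# .status = actual state reported back by Kubernetes
kubectl get deployment nginx -o yaml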
- Node
- Pods
- Deployments
- Services
- ConfigMaps
- Multiple virtual clusters backed by the same physical cluster.
- Generally for large deployments.
- Provide scope for names.
- Easy way to divide cluster resources.
- Allows for multiple teams of users.
- Allows for resource quotas.
- Special "kube-system" namespace (used to differentiate system pods from user pods).
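A minimal example (using a hypothetical "dev" namespace) of creating a namespace and scoping commands to it:
kubectl create namespace dev
kubectl get namespaces
kubectl get pods --namespace=dev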
- Might be a VM or physical machine.
- Services necessary to run pods.
- Managed by the master.
- Services necessary: container runtime, kubelet, kube-proxy.
- Not inherently created by Kubernetes, but by the cloud provider.
- Kubernetes checks the node for validity.
- Route controller (Google Compute Engine: GCE clusters only).
- Service Controller.
- PersistentVolumeLabels controller.
- Assigns CIDR block to a newly registered node.
- Keeps track of the nodes.
- Monitors the node health.
- Evicts pods from unhealthy nodes.
- Can taint nodes based on current conditions in more recent versions.
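The node controller's view of each node (conditions, taints, capacity) can be inspected directly, for example:
kubectl describe nodes
kubectl get nodes -o wide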