This is a repository that can provision a Kubernetes cluster using Vagrant.
You can enable a challenge that breaks some components so you can test your infrastructure knowledge, work through the tasks to test your operational knowledge, or both.
To use it, you need to install Vagrant and a hypervisor such as VirtualBox or Libvirt; unfortunately, Hyper-V support is limited because Vagrant cannot create networks inside it.
Four machines will be created; make sure you have enough free memory:
Machine | IP | CPUs | Memory (MB) |
---|---|---|---|
control | 192.168.56.10 | 2 | 2048 |
worker1 | 192.168.56.20 | 1 | 1024 |
worker2 | 192.168.56.30 | 1 | 1024 |
storage | 192.168.56.40 | 1 | 512 |
You can change the default memory/CPU of each virtual machine by editing the hash named vms inside the Vagrantfile:
vms = {
  'control' => {'memory' => '2048', 'cpus' => 2, 'ip' => '10', 'provision' => 'control.sh'},
  'worker1' => {'memory' => '1024', 'cpus' => 1, 'ip' => '20', 'provision' => 'worker.sh'},
  'worker2' => {'memory' => '1024', 'cpus' => 1, 'ip' => '30', 'provision' => 'worker.sh'},
  'storage' => {'memory' => '512', 'cpus' => 1, 'ip' => '40', 'provision' => 'storage.sh'}
}
If you are just starting with Kubernetes, I recommend taking a look at minikube, because this repository is aimed at people who want to understand its infrastructure.
Install Vagrant (and maybe some plugins) and a hypervisor, download or clone the repository, and execute vagrant up:
git clone git@github.com:hector-vido/kubernetes.git --config core.autocrlf=false
cd kubernetes
vagrant up
Important: the option --config core.autocrlf=false prevents Git on Windows from adding \r at the end of lines.
After the provisioning, every command should be executed from control as the root user:
vagrant ssh control
sudo -i
kubectl get nodes
# Output:
#
# NAME STATUS ROLES AGE VERSION
# control Ready control-plane 82m v1.31.0
# worker1 Ready <none> 82m v1.31.0
# worker2 Ready <none> 82m v1.31.0
The challenge can be activated by executing the following command:
k8s-challenge
This will mess up some components and create an outage in the cluster; it is up to you to fix it.
The tasks are a list of things you should do inside Kubernetes.
You can see the list here or execute k8s-tasks, and you can check whether you successfully completed a task by executing k8s-check. A minimal kubectl sketch for two of the simpler tasks follows the task list.
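For example, on control as the root user (it is assumed here that both commands take no arguments):
k8s-tasks   # print the list of tasks
k8s-check   # report which tasks have been completed so far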
1 - Fix the communication problem between the machines:
1.1 - control....192.168.56.10
1.2 - worker1....192.168.56.20
1.3 - worker2....192.168.56.30
Note: Do not use kubeadm, do not reset the cluster.
SSH with "root" user is allowed between the mentioned machines.
The namespace should always be "default" unless specified.
2 - Provision a pod called "apache" with the image "httpd:alpine".
3 - Create a Deployment called "cgi" with the image "hectorvido/sh-cgi" and a service:
3.1 - The Deployment should have 4 replicas;
3.2 - Create a service called "cgi" for the "cgi" Deployment;
3.3 - The service will respond internally on port 9090.
4 - Create a Deployment called "nginx" based on "nginx:alpine":
4.1 - Update the Deployment to the "nginx:perl" image;
4.2 - Rollback to the previous version.
5 - Create a "memcached:alpine" pod for each worker in the cluster:
5.1 - If a new node is added to the cluster, a replica
of this pod needs to be automatically provisioned inside the new node;
6 - Create a pod with the image "hectorvido/apache-auth" called "auth":
6.1 - Create a Secret called "httpd-auth" based on the file "files/auth.ini";
6.2 - Create two environment variables in the pod:
HTPASSWD_USER and HTPASSWD_PASS with the respective values of "httpd-auth";
6.3 - Create a ConfigMap called "httpd-conf" with the contents of "files/httpd.conf";
6.4 - Mount it inside the pod at "/etc/apache2/httpd.conf" using "subpath";
6.5 - The page should only be displayed by executing the following command:
curl -u developer:secret <pod-ip>
Otherwise an unauthorized message should appear.
Note: No extra configuration is required, Secret and ConfigMap take care of
the entire configuration process.
7 - Create a pod called "tools":
7.1 - The pod should use the "busybox" image;
7.2 - The pod must be static;
7.3 - The pod should only be present in "worker1".
8 - Create a StatefulSet called "couchdb" with the image "couchdb"
inside the "database" namespace:
8.1 - Create the "database" namespace;
8.2 - The "headless service" should be called "couchdb" and listen on port 5984;
8.3 - Create the "/srv/couchdb" directory on the "worker2" machine;
8.4 - Create a persistent volume that uses the above directory;
8.5 - The pod can only go to the "worker2" machine;
8.6 - The connection user must be "developer" and the password "secret";
8.7 - Persist the couchdb data on the volume created above;
Note: The directory used by couchdb to persist data is "/opt/couchdb/data".
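For reference, here is a minimal sketch of how tasks 2 and 3 might be approached with imperative kubectl commands, run on control as root. It is only an illustration; in particular, the --target-port value is an assumption about which port the hectorvido/sh-cgi container listens on, so adjust it if needed.
# Task 2: a single pod called "apache" using the httpd:alpine image
kubectl run apache --image=httpd:alpine
# Task 3: a Deployment called "cgi" with 4 replicas and a service answering internally on 9090
kubectl create deployment cgi --image=hectorvido/sh-cgi --replicas=4
kubectl expose deployment cgi --name=cgi --port=9090 --target-port=80  # assumes the container serves on port 80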
If you want to see everything working and also test the environment to ensure that nothing is wrong, you can execute k8s-solve; this command creates everything necessary to solve the tasks. It runs as if nothing had been attempted yet, so if you already did some of the tasks you will probably see some errors.
This repository relies on a lot of files shared between the host and the guest machines, so make sure a folder named /vagrant with the content of this repo is present in all machines.
Vagrant does this in different ways: it can simply copy everything, mount an NFS share, or use more advanced features of other hypervisors.
When Vagrant provisions a machine with VirtualBox, the content of /vagrant is populated with rsync.
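Since rsync only copies the files once, changes made on the host after the initial vagrant up are not propagated automatically; assuming the default rsync synced folder described above, you can push them again with:
vagrant rsync        # copy /vagrant to every machine once more
vagrant rsync-auto   # keep watching the host folder and re-sync on every change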
When Vagrant provisions a machine with Libvirt, the content of /vagrant can be populated with nfs or virtiofs.
If you use a RHEL-based Linux and have issues with SELinux, or don't want to type your sudo password each time you execute vagrant up to mount the NFS share, you can use virtiofs to share directories by putting the following in ~/.vagrant.d/Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.provider :libvirt do |libvirt|
libvirt.memorybacking :access, :mode => "shared"
libvirt.qemu_use_session = false
libvirt.system_uri = 'qemu:https:///system'
end
config.vm.synced_folder "./", "/vagrant", type: "virtiofs"
end
Important: The option qemu_use_session is false because a common user session cannot create networks.