
AWS EKS Cluster using Terraform

AWS EKS cluster using Terraform, AWS Controllers for Kubernetes (ACK), and the ELB Controller for Kubernetes

Prerequisites

  1. AWS CLI v2

  2. Terraform

  3. Helm 3.8+

  4. Log in to your AWS account:

You have several options here:

  • AWS Access Key
  • IAM user
  • SSO

We recommend using an IAM user or SSO: developers who use the access key method often create the keys under the root user profile, which makes it impossible to restrict the key's access rights (the root user cannot be used as the principal when assigning roles).

aws configure

or

aws configure sso
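
Whichever option you choose, confirm that the CLI is authenticated before running Terraform (this prints the account and principal in use):

aws sts get-caller-identity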

Steps to create the cluster

  • Initialize the working directory; this downloads the providers and creates the dependency lock file.
terraform init
  • Apply the Terraform configuration:
terraform apply -auto-approve
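
If you would rather review the changes first, you can save a plan to a file (the name eks.tfplan below is arbitrary) and apply that instead:

terraform plan -out=eks.tfplan
terraform apply eks.tfplan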
  • Log in to the EC2 bastion host instance:
ssh -i "aws-terraform-key.pem" ec2-user@<bastion-public-ip>
  • From the bastion instance, create an SSH connection to any EC2 node instance in the EKS cluster using the node's private IP:
ssh -i "/tmp/eks_nodes_keypair.pem" ec2-user@<node-private-ip>

Then you should be able to access the EC2 instances in the EKS cluster using the Bastion Host instance as a jump host.
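
Alternatively, if the node key pair is also present on your local machine, you can reach a node in a single command by tunnelling through the bastion with SSH's ProxyCommand (the ec2-user user name and the placeholder IPs below are assumptions; adjust them to your setup):

ssh -i eks_nodes_keypair.pem \
  -o ProxyCommand="ssh -i aws-terraform-key.pem -W %h:%p ec2-user@<bastion-public-ip>" \
  ec2-user@<node-private-ip>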

To close the connections, run exit twice:

exit # closes the SSH connection between the bastion instance and the EKS cluster instance
exit # closes the SSH connection between your local machine and the bastion instance
  • Add the cluster context to your local kubeconfig:
aws eks --region us-east-1 update-kubeconfig --name ekscluster-simpleecommerce

Check connection to the control plane:

kubectl get svc
kubectl get pods --all-namespaces
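
You can also confirm that the worker nodes have joined the cluster and are Ready:

kubectl get nodes -o wide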

Deploy the sample nginx workload and the public NLB:

kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/public-lb.yaml

Deploy the private NLB:

kubectl apply -f k8s/private-lb.yaml
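
Assuming public-lb.yaml and private-lb.yaml define Services of type LoadBalancer (the usual pattern with the ELB controller), the DNS names of the provisioned NLBs appear in the EXTERNAL-IP column once they are ready. You can also list them from the AWS side:

kubectl get svc
aws elbv2 describe-load-balancers \
  --query "LoadBalancers[].{Name:LoadBalancerName,Scheme:Scheme,DNS:DNSName}"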
  • Deploy the EKS Cluster Autoscaler:
kubectl apply -f k8s/cluster-autoscaler.yaml

Verify that the autoscaler pod is up and running:

kubectl get pods -n kube-system

Check logs for any errors:

kubectl logs -l app=cluster-autoscaler -n kube-system -f

Verify that the AWS Auto Scaling group has the required tags:

k8s.io/cluster-autoscaler/<cluster-name> : owned
k8s.io/cluster-autoscaler/enabled : TRUE
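
You can check these tags from the CLI; this query lists only the autoscaler-related tags of each Auto Scaling group:

aws autoscaling describe-auto-scaling-groups \
  --query "AutoScalingGroups[].Tags[?starts_with(Key, 'k8s.io/cluster-autoscaler')].[Key,Value]"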

Split the terminal screen. In the first window run:

watch -n 1 -t kubectl get pods

In the second window run:

watch -n 1 -t kubectl get nodes

Now trigger autoscaling by increasing the replica count for the nginx deployment from 1 to 5 in k8s/deployment.yaml and reapplying it:

kubectl apply -f k8s/deployment.yaml
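
If you prefer not to edit the manifest, kubectl scale achieves the same thing (assuming the Deployment is named nginx; check with kubectl get deployments):

kubectl scale deployment nginx --replicas=5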
  • To remove all the resources created, delete the Kubernetes objects first (so the controller-managed load balancers are cleaned up) and then run the destroy command:
kubectl delete -f k8s/cluster-autoscaler.yaml
kubectl delete -f k8s/private-lb.yaml
kubectl delete -f k8s/deployment.yaml
kubectl delete -f k8s/public-lb.yaml
terraform destroy -auto-approve

Remarks

We are using tls_private_key to create a PEM- (and OpenSSH-) formatted private key. The private key generated by this resource is stored unencrypted in your Terraform state file, so using it for production deployments is not recommended. Instead, generate a private key file outside of Terraform and distribute it securely to the system where Terraform will be run.
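
A minimal sketch of that alternative, assuming an RSA key pair named eks_nodes_keypair: generate the key locally and import only the public half into AWS, so the private key never enters the Terraform state.

ssh-keygen -t rsa -b 4096 -f eks_nodes_keypair -N ""
aws ec2 import-key-pair \
  --key-name eks_nodes_keypair \
  --public-key-material fileb://eks_nodes_keypair.pub

The Terraform resources can then reference the imported key pair by name (e.g. via the key_name argument) instead of creating it with tls_private_key.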
