This repo demonstrates how to create an AWS EKS cluster with Terraform (IaC) and assign network resources to it. Below you can find a diagram that illustrates the created cluster, which consists of:
- Dedicated VPC
- 2 public subnets in availability zones A and B
- 2 private subnets in availability zones A and B
- Internet gateway (for future use)
- 2 NAT gateways, one in each availability zone (A and B), to give instances in the private subnets outbound internet access (for future use)
- Route Tables
- Route Table Association
- Worker nodes in private subnets
- Scaling configuration - desired size = 2, max size = 10, min size = 1 (see the check after this list)
- Instance type - t3.small spot instances
- Cluster role - grants the EKS cluster permission to create and manage AWS resources
- Policy - Cluster_Policy
- Node group role - grants the EC2 worker nodes permission to create and use AWS resources
- Policy - Worker_Node
- Policy - EKS_CNI
- Policy - Registry_Read
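Once the cluster is up (after the terraform apply step below), you can check the node group's scaling configuration and spot capacity type with the AWS CLI. This is only a sketch: it assumes the cluster is named eks and the node group nodes-01, the names used later in this README.
aws eks describe-nodegroup --cluster-name eks --nodegroup-name nodes-01 --query "nodegroup.{scaling:scalingConfig,capacityType:capacityType,instanceTypes:instanceTypes}"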
You need Terraform installed and AWS credentials with access to your AWS account.
git clone https://github.com/Kasper886/Leumi.git
cd Leumi/1.App-2-EKS/
Install Terraform if you don't have it.
If your user is not a root user, ask your admin to add your user to the sudoers file:
sudo visudo
Then install Terraform
chmod +x bash/terraform.sh
bash/terraform.sh
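If you prefer to install Terraform manually instead of using the script, a typical installation looks roughly like the following (the version number is only an example; pick a current release from releases.hashicorp.com):
wget https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip
unzip terraform_1.5.7_linux_amd64.zip
sudo mv terraform /usr/local/bin/
terraform -version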
If you want to scale your node group, you need the AWS CLI, eksctl, and kubectl.
To install these tools, follow the next steps:
chmod +x bash/awscli.sh
chmod +x bash/eksctl.sh
chmod +x bash/kubectl.sh
bash/awscli.sh
bash/eksctl.sh
bash/kubectl.sh
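You can quickly confirm that all three tools are installed:
aws --version
eksctl version
kubectl version --client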
Export your AWS credentials and default region (I worked in the us-east-1 region):
export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxx
export AWS_DEFAULT_REGION=us-east-1
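Before running Terraform, you can check that the exported credentials are valid:
aws sts get-caller-identity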
cd EKS-Cluster/terraform/
terraform init
terraform plan
terraform apply -auto-approve
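Cluster creation takes a while. Once terraform apply finishes, you can check that the cluster is ACTIVE (assuming the cluster name eks used below):
aws eks describe-cluster --name eks --region us-east-1 --query "cluster.status"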
When the cluster is created, you can run the following command to "log in" to your EKS cluster:
aws eks update-kubeconfig --name clusterName --region awsRegion
where clusterName is the name of your cluster (eks) and awsRegion is your AWS region (us-east-1), for example:
aws eks update-kubeconfig --name eks --region us-east-1
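To verify that kubectl now points at the cluster and the worker nodes have joined:
kubectl get nodes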
Then scale the node group:
eksctl scale nodegroup --cluster=clusterName --nodes=4 --name=nodegroupName
where clusterName is the name of your cluster (eks) and nodegroupName is the name of your node group (nodes-01), for example:
eksctl scale nodegroup --cluster=eks --nodes=4 --name=nodes-01
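You can watch the node group grow to the requested size:
eksctl get nodegroup --cluster=eks
kubectl get nodes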
If you are done and don't want to deploy the application, delete the EKS cluster. Otherwise, read on.
terraform destroy -auto-approve
cd ../..
chmod +x bash/docker.sh
bash/docker.sh
To use Docker without sudo, run:
sudo usermod -aG docker ${USER}
su - ${USER}
docker run -u 0 -d -p 8080:8080 -p 50000:50000 -v /data/jenkins:/var/jenkins_home jenkins/jenkins:lts
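To find the container ID or name needed in the next command, list the running containers:
docker ps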
To get the password that unlocks Jenkins at the first launch, run:
sudo docker exec ${CONTAINER_ID or CONTAINER_NAME} cat /var/jenkins_home/secrets/initialAdminPassword
where CONTAINER_ID or CONTAINER_NAME is the ID or name of your running Jenkins container.
You may get a Docker permission error: "Got permission denied while trying to connect to the Docker daemon socket". If so, run:
sudo usermod -aG docker jenkins
sudo usermod -aG root jenkins
sudo chmod 777 /var/run/docker.sock
This is not secure, but once everything works you can set the permissions back to 744 or 755.
Install the following Jenkins plugins:
- CloudBees AWS Credentials;
- Kubernetes Continuous Deploy (v1.0.0): you can download this file and upload it in the advanced settings of the Jenkins plugin management section;
- Docker;
- Docker Pipeline;
- Amazon ECR plugin.
Go to Jenkins -> Manage Jenkins -> Global credentials section and add AWS credentials with ID ecr
Then run the following command to get the EKS kubeconfig, if you didn't do it in the previous section:
aws eks update-kubeconfig --name eks --region us-east-1
and
cat /home/ubuntu/.kube/config
Copy the output of this command and return to the Jenkins credentials section, then create Kubernetes credentials and choose Kubeconfig -> Enter directly.
Set the ID to K8S (IMPORTANT! The ID field must be K8S, all upper-case).
Also, run the following to give Jenkins access to your EKS cluster:
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
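To confirm the binding was created:
kubectl get clusterrolebinding cluster-system-anonymous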
Build your pipeline.
When it has finished, list the Kubernetes services:
kubectl get svc
Then copy the DNS name of the load balancer. It should look something like a50fec56374e843a6afbf0f96488e800-1553267577.us-east-1.elb.amazonaws.com; add port 3000 to it: http://a50fec56374e843a6afbf0f96488e800-1553267577.us-east-1.elb.amazonaws.com:3000
The http:// prefix in the URL is required.
To remove the guestbook application when you no longer need it, clone the repo with its manifests and delete the resources:
git clone https://github.com/Kasper886/guest-book.git
cd guest-book
kubectl delete -f redis-master-controller.yaml
kubectl delete -f redis-slave-controller.yaml
kubectl delete -f guestbook-controller.yaml
kubectl delete service guestbook redis-master redis-slave
terraform destroy -auto-approve
As required, the system is accessible from IP 91.231.246.50 (this is enforced in the AWS security groups); to also allow access from any host on the internet, a network load balancer was deployed.
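If you want to double-check which security groups contain that rule, one way (a sketch, assuming the AWS CLI and region are already configured) is:
aws ec2 describe-security-groups --filters Name=ip-permission.cidr,Values=91.231.246.50/32 --query "SecurityGroups[].GroupId"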
cd 2.EC2
You should already have Terraform and AWS credentials configured from the previous steps, so you can run the Terraform deployment:
terraform init
terraform plan
terraform apply -auto-approve
For an unknown reason I could not attach the EC2 instance to the NLB target group with Terraform, so you should do it manually in the AWS console (see demo).
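If you prefer the CLI over the console, attaching the instance usually looks like this; the target group ARN and instance ID below are placeholders you need to take from the Terraform output or the AWS console:
aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=<instance-id>
aws elbv2 describe-target-health --target-group-arn <target-group-arn>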
When you don't need the resources anymore, you can delete them:
terraform destroy -auto-approve