AWS EKS cluster creation with Terraform, application deployment, and EC2 with AWS NLB deployment

This repo demonstrates how to create an AWS EKS cluster with Terraform (infrastructure as code) and assign network resources to it. The diagram below illustrates the created cluster.

(Architecture diagram of the created cluster)

Summary

Network

  1. Dedicated VPC
  2. Two public subnets in availability zones A and B
  3. Two private subnets in availability zones A and B
  4. Internet gateway (for future use)
  5. Two NAT gateways, one per availability zone, to give the private instances outbound access (for future use)
  6. Route tables
  7. Route table associations
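The network layout above can be sketched in Terraform roughly as follows. This is a minimal illustration, not the repo's actual code: all resource names, CIDRs, and the AZ are assumptions.

```hcl
# Illustrative sketch only; names and CIDR ranges are assumptions.
resource "aws_vpc" "eks" {
  cidr_block = "10.0.0.0/16"
}

# One public and one private subnet shown for AZ "a";
# the repo creates a second pair in AZ "b" the same way.
resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.eks.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.eks.id
  cidr_block        = "10.0.11.0/24"
  availability_zone = "us-east-1a"
}

# Internet gateway for the public subnets; NAT gateways (one per AZ)
# would route outbound traffic from the private subnets.
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.eks.id
}
```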

Nodes

  1. Worker nodes in private subnets
  2. Scaling configuration: desired size = 2, max size = 10, min size = 1
  3. Instance type: t3.small spot instances
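The node settings above map onto an EKS node group roughly like this. A sketch only: the resource names and referenced subnets/roles are assumptions, not the repo's actual identifiers.

```hcl
# Illustrative sketch; referenced names (cluster, role, subnets) are assumptions.
resource "aws_eks_node_group" "nodes" {
  cluster_name    = aws_eks_cluster.eks.name
  node_group_name = "nodes-01"
  node_role_arn   = aws_iam_role.node_group.arn
  subnet_ids      = [aws_subnet.private_a.id, aws_subnet.private_b.id]

  # Spot t3.small instances, as listed above.
  capacity_type  = "SPOT"
  instance_types = ["t3.small"]

  # desired = 2, max = 10, min = 1, as listed above.
  scaling_config {
    desired_size = 2
    max_size     = 10
    min_size     = 1
  }
}
```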

IAM Role & Policies

  1. Cluster Role: grants the EKS control plane permission to create and manage AWS resources on behalf of the cluster.
  2. Policy: Cluster_Policy
  3. Node Group Role: grants the EC2 worker nodes permission to create and use AWS resources.
  4. Policy: Worker_Node
  5. Policy: EKS_CNI
  6. Policy: Registry_Read
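The node-group role and its three policy attachments can be sketched as below. The role name is an assumption; the managed policy ARNs are the standard AWS-managed policies that the Worker_Node, EKS_CNI, and Registry_Read attachments typically refer to.

```hcl
# Illustrative sketch; the role name is an assumption.
resource "aws_iam_role" "node_group" {
  name = "eks-node-group-role"
  # Allow EC2 instances to assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "worker_node" {
  role       = aws_iam_role.node_group.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_role_policy_attachment" "eks_cni" {
  role       = aws_iam_role.node_group.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "registry_read" {
  role       = aws_iam_role.node_group.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}
```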

How to use

You need Terraform installed and AWS credentials with access to your AWS account.

1. Clone the repository and run the Terraform script

```shell
git clone https://github.com/Kasper886/Leumi.git
cd Leumi/1.App-2-EKS/
```

Install Terraform if you don't have it.
If your user is not a root user, ask your admin to add the user to sudoers:

```shell
sudo visudo
```

Then install Terraform:

```shell
chmod +x bash/terraform.sh
bash/terraform.sh
```

If you want to scale your node group, you need:

  1. AWS CLI
  2. EKSCTL
  3. KubeCTL

To install the tools above, run:

```shell
chmod +x bash/awscli.sh
chmod +x bash/eksctl.sh
chmod +x bash/kubectl.sh
bash/awscli.sh
bash/eksctl.sh
bash/kubectl.sh
```

2. Now you can launch the EKS cluster

Export your AWS credentials and default region (this example uses us-east-1):

```shell
export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxx
export AWS_DEFAULT_REGION=us-east-1
cd EKS-Cluster/terraform/
terraform init
terraform plan
terraform apply -auto-approve
```

3. Work with the EKS cluster

When the cluster is created, run the following command to "log in" to your EKS cluster:

```shell
aws eks update-kubeconfig --name <clusterName> --region <region>
```

where <clusterName> is the name of your cluster (eks) and <region> is the AWS region (us-east-1):

```shell
aws eks update-kubeconfig --name eks --region us-east-1
```

Then scale the node group:

```shell
eksctl scale nodegroup --cluster=<clusterName> --nodes=4 --name=<nodegroupName>
```

where <clusterName> is the name of your cluster (eks) and <nodegroupName> is the name of your node group (nodes-01):

```shell
eksctl scale nodegroup --cluster=eks --nodes=4 --name=nodes-01
```

4. Finally, delete the EKS cluster

If you are done and don't want to deploy the application, delete the EKS cluster. Otherwise, read on.

```shell
terraform destroy -auto-approve
```

5. Install Docker if you don't have it:

```shell
cd ../..
chmod +x bash/docker.sh
bash/docker.sh
```

To use Docker without sudo, run:

```shell
sudo usermod -aG docker ${USER}
su - ${USER}
```

6. Install Jenkins from Docker

1. Run Jenkins from Docker

```shell
docker run -u 0 -d -p 8080:8080 -p 50000:50000 -v /data/jenkins:/var/jenkins_home jenkins/jenkins:lts
```

To get the password to unlock Jenkins at the first launch, run:

```shell
sudo docker exec <CONTAINER_ID or CONTAINER_NAME> cat /var/jenkins_home/secrets/initialAdminPassword
```

where <CONTAINER_ID or CONTAINER_NAME> is the ID or name of your running container.

You may get a Docker error ("Got permission denied while trying to connect to the Docker daemon socket"). In that case, run:

```shell
usermod -aG docker jenkins
usermod -aG root jenkins
chmod 777 /var/run/docker.sock
```

This is not secure, but you can tighten the permissions back to 744 or 755 afterwards.

2. You also need the following plugins:

  • CloudBees AWS Credentials;
  • Kubernetes Continuous Deploy (v1.0.0); you can download this file and upload it under the advanced settings in the Jenkins plugin management section;
  • Docker;
  • Docker Pipeline;
  • Amazon ECR plugin.


3. Credentials settings

Go to Jenkins -> Manage Jenkins -> Global credentials section and add AWS credentials with the ID ecr.


Then run the following command to get the EKS config, if you didn't do it in the previous section:

```shell
aws eks update-kubeconfig --name eks --region us-east-1
```

and

```shell
cat /home/ubuntu/.kube/config
```

Copy the output of this command, return to the Jenkins credentials section, create Kubernetes credentials, and choose "Kubeconfig: Enter directly".


Set the ID to K8S (IMPORTANT! The ID field K8S must be all upper-case).

Also, run the following to give Jenkins access to your EKS cluster:

```shell
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
```

Note that this grants cluster-admin to anonymous users, so it should only be used in a test environment.

4. Make sure you create a Maven3 variable under Global Tool Configuration.


5. Create a new pipeline in Jenkins and copy the Jenkinsfile there.

Build your pipeline.

6. Run the following command to get the address to access the app from your browser:

```shell
kubectl get svc
```

Then copy the DNS name of the load balancer (something like a50fec56374e843a6afbf0f96488e800-1553267577.us-east-1.elb.amazonaws.com) and add port 3000: http://a50fec56374e843a6afbf0f96488e800-1553267577.us-east-1.elb.amazonaws.com:3000

The http scheme in the URL is required.

7. To delete the services and deployments without destroying the cluster, run:

```shell
git clone https://github.com/Kasper886/guest-book.git
cd guest-book
kubectl delete -f redis-master-controller.yaml
kubectl delete -f redis-slave-controller.yaml
kubectl delete -f guestbook-controller.yaml
kubectl delete service guestbook redis-master redis-slave
```

8. To destroy the EKS cluster and ECR repo, run:

```shell
terraform destroy -auto-approve
```

AWS EC2 and NLB deployment by Terraform

The system is accessible from IP 91.231.246.50 (allowed in the AWS security groups) as required; to also allow access from any host on the internet, a network load balancer was deployed.

```shell
cd 2.EC2
```

You should already have Terraform and AWS credentials configured from the previous steps, so you can run the Terraform deployment:

```shell
terraform init
terraform plan
terraform apply -auto-approve
```

For an unknown reason the EC2 instance could not be attached to the NLB target group by Terraform, so you should do it manually in the AWS console (see demo).
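As an alternative to the manual console step, Terraform can register an instance in a target group directly. This is a hedged sketch only: the resource names (aws_lb_target_group.nlb_tg, aws_instance.app) and the port are assumptions, not this repo's actual identifiers.

```hcl
# Hypothetical sketch: register the EC2 instance in the NLB target group.
# Referenced resource names and the port are assumptions.
resource "aws_lb_target_group_attachment" "ec2" {
  target_group_arn = aws_lb_target_group.nlb_tg.arn
  target_id        = aws_instance.app.id
  port             = 80
}
```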

When you no longer need the resources, you can delete them:

```shell
terraform destroy -auto-approve
```

Demo

Demo1.mp4
Demo2.mp4


Demo3.mp4