Kubernetes Cluster Live Testing Guide on AWS

This guide walks through setting up a Kubernetes cluster on AWS using Terraform, kOps, and Helm, and testing the deployments either with strict frontend access or through an Ingress at the provided URL.

Step 1: Install kubectl

Follow the official kubectl installation guide for your operating system.

Verify the installation:

kubectl version --client

Step 2: Install AWS CLI

Follow the official AWS CLI installation guide for your operating system. Then configure the AWS CLI with your credentials and ensure they can be accessed by Terraform and kOps:

aws configure
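
To confirm that the credentials are picked up from the default credential chain (the same one Terraform and kOps read), you can run:

aws sts get-caller-identity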

Step 3: Install kOps

Follow the official kOps installation guide for your operating system.

Verify the installation:

kops version
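
If you plan to run kops commands directly (for example the cluster deletion in Step 9), you can export the state store location once instead of passing --state to every command. The bucket name below is a placeholder; replace it with your own kOps state bucket:

export KOPS_STATE_STORE=s3://your-kops-state-store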

Step 4: Install Terraform

Follow the official Terraform installation guide for your operating system.

Verify the installation:

terraform version

Step 5: Provision an S3 Bucket for Terraform State (Strongly recommended but optional)

It is strongly recommended to use an S3 bucket to store the Terraform state file. This ensures that the state file is stored securely and can be accessed by multiple team members. To create an S3 bucket, run the following command:

aws s3api create-bucket --bucket (your bucket name) --region (your region)
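
Note that for any region other than us-east-1 the S3 API also expects a location constraint, and enabling versioning on the bucket keeps a history of state files you can roll back to:

aws s3api create-bucket --bucket (your bucket name) --region (your region) \
               --create-bucket-configuration LocationConstraint=(your region)
aws s3api put-bucket-versioning --bucket (your bucket name) \
               --versioning-configuration Status=Enabled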

Step 6: Generate SSH Key Pair (Required if you plan on using a bastion host)

ssh-keygen -t rsa -b 4096 -C "[email protected]"
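
By default this writes the key pair to ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub; the public key path is what the ssh_key_path variable in Step 7 expects. If you prefer a dedicated key for this cluster, you can pass an explicit file name (the path below is only an example):

ssh-keygen -t rsa -b 4096 -C "[email protected]" -f ~/.ssh/k8s-bastion-key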

Step 7: Deploy the Terraform Infrastructure

  1. Navigate to the Terraform Directory:
cd terraform-files
  2. Initialize Terraform:

If using the S3 bucket for the Terraform state, run the following command:

terraform init -backend-config="bucket=(your bucket name)" \
               -backend-config="key=prod/terraform.tfstate" \
               -backend-config="region=(your AWS region)"

Otherwise:

terraform init
  3. Deploy the Terraform Infrastructure:

Note: ensure that all the variables in the following command are set correctly, and use full directory paths for the ssh_key_path and config_path variables. The configuration below is set to default values to show the format.

terraform apply -var="aws_region=ca-central-1" \
				-var="az_count=3" \
				-var="use_ingress_controller=false" \
				-var="environment_name=dev" \
				-var="namespace=AWS-Deployment" \
				-var="deployment_name=AWS-Deployment" \
				-var="vpc_name=vpc-dev" \
				-var="ssh_key_path=/home/user/.ssh/id_rsa.pub" \
				-var="config_path=/home/user/.kube/config" \
				-var="enable_bastion=false" \
				-auto-approve
  4. After the deployment is complete, you will see the output with the Kubernetes cluster details.
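
The same details can be re-printed at any time from the Terraform directory, which is handy if you lose the terminal output:

terraform output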

Step 8: Deployment Health Check

  1. Since your kubectl config has already been configured by kOps, try some commands (nodes are cluster-scoped, so no namespace is needed for the first one):
kubectl get nodes
kubectl get pods -n (your namespace)
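
You can also ask kOps to validate the cluster itself; this is a sketch assuming the same cluster name and state store placeholders used elsewhere in this guide:

kops validate cluster --name your-cluster-name --state s3://your-kops-state-store --wait 10m
kubectl cluster-info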

Step 9: Cleaning Up

  1. Before running terraform destroy, you should manually delete the kOps cluster by running:
kops delete cluster --name your-cluster-name --state s3://your-kops-state-store --yes

After waiting a couple of minutes, the last line output should be:

deleted cluster: your-cluster-name
  2. Destroy the Terraform Infrastructure:
terraform destroy -auto-approve
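
If the S3 bucket from Step 5 was created solely for this deployment and you no longer need the state history, it can be removed as well; note that --force deletes every object in the bucket before removing it:

aws s3 rb s3://(your bucket name) --force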