This guide walks through setting up a Kubernetes cluster on AWS using Terraform, kOps, and Helm, and testing the deployments either with restricted frontend access or through an Ingress at the provided URL.
Follow the official kubectl installation guide for your operating system.
Verify the installation:
kubectl version --client
Follow the official AWS CLI installation guide for your operating system, then configure the AWS CLI with your credentials so that Terraform and kOps can access them:
aws configure
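To confirm the credentials are picked up correctly, you can query the caller identity (this assumes the default profile; add --profile if you use named profiles):

```shell
# Prints the account ID, user ID, and ARN of the active credentials.
# A successful response confirms that Terraform and kOps will be able
# to authenticate with the same credentials.
aws sts get-caller-identity
```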
Follow the official kOps installation guide for your operating system.
Verify the installation:
kops version
Follow the official Terraform installation guide for your operating system.
Verify the installation:
terraform version
It is strongly recommended to store the Terraform state file in an S3 bucket: the state is kept securely in one place and can be shared by multiple team members. To create an S3 bucket, run the following command:
aws s3api create-bucket --bucket (your bucket name) --region (your region)
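Note that for any region other than us-east-1, the S3 API additionally requires an explicit location constraint, and enabling versioning on a state bucket is a common safeguard against accidental state loss. A sketch with placeholder bucket name and region:

```shell
# Outside us-east-1, a LocationConstraint matching the region is required.
aws s3api create-bucket \
  --bucket my-terraform-state-bucket \
  --region ca-central-1 \
  --create-bucket-configuration LocationConstraint=ca-central-1

# Versioning lets you recover earlier versions of the state file.
aws s3api put-bucket-versioning \
  --bucket my-terraform-state-bucket \
  --versioning-configuration Status=Enabled
```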
Generate an SSH key pair (kOps uses the public key for SSH access to the cluster nodes; its path is passed to Terraform later via the ssh_key_path variable):
ssh-keygen -t rsa -b 4096 -C "[email protected]"
- Navigate to the Terraform Directory:
cd terraform-files
- Initialize Terraform:
If using the S3 bucket for the Terraform state, run the following command:
terraform init -backend-config="bucket=(your bucket name)" \
-backend-config="key=prod/terraform.tfstate" \
-backend-config="region=(your AWS region)"
Otherwise:
terraform init
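The -backend-config flags used above assume the Terraform files declare a partial s3 backend, roughly of this shape (a sketch; the actual block in terraform-files may differ):

```hcl
terraform {
  backend "s3" {
    # bucket, key, and region are left unset here and supplied
    # at init time via the -backend-config flags.
  }
}
```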
- Deploy the Terraform Infrastructure:
Note: ensure that all variables in the following command are set correctly, and use absolute paths for the ssh_key_path and config_path variables. The configuration below uses default values to show the expected format.
terraform apply -var="aws_region=ca-central-1" \
-var="az_count=3" \
-var="use_ingress_controller=false" \
-var="environment_name=dev" \
-var="namespace=AWS-Deployment" \
-var="deployment_name=AWS-Deployment" \
-var="vpc_name=vpc-dev" \
-var="ssh_key_path=/home/user/.ssh/id_rsa.pub" \
-var="config_path=/home/user/.kube/config" \
-var="enable_bastion=false" \
-auto-approve
- After the deployment is complete, you will see the output with the Kubernetes cluster details.
- Since kOps has already configured your kubectl context, try a few commands (note that nodes are cluster-scoped, so kubectl get nodes takes no namespace flag):
kubectl get nodes
kubectl get pods -n (your namespace)
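To check overall cluster health beyond individual resources, kOps can validate the cluster against its state store (the cluster name and bucket below are placeholders matching the values used during deployment):

```shell
# Reports whether all instance groups and system pods are up and ready.
kops validate cluster \
  --name your-cluster-name \
  --state s3://your-kops-state-store
```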
- Before running terraform destroy, you should manually delete the kOps cluster by running:
kops delete cluster --name your-cluster-name --state s3://your-kops-state-store --yes
After a couple of minutes, the last line of output should be:
deleted cluster: your-cluster-name
- Destroy the Terraform Infrastructure:
terraform destroy -auto-approve
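If the S3 bucket created earlier was used only for this deployment's Terraform state, it can be removed once the destroy completes. The bucket name below is a placeholder; --force deletes the bucket and all objects in it, so double-check the name first. If versioning was enabled on the bucket, the retained object versions must be deleted separately before the bucket can be removed.

```shell
# Permanently removes the state bucket and its contents.
aws s3 rb s3://my-terraform-state-bucket --force
```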