From Code to Cloud: Managing our Infrastructure with Terraform
This repository contains Terraform templates for automating AWS infrastructure setup using Infrastructure as Code (IaC) principles. IaC manages infrastructure through code files instead of manual configuration, ensuring consistency, repeatability, and easier remediation.
With Terraform, these files define AWS resources such as servers, databases, and storage, making it easier to automate deployments and remediation. The repository is organized for easy customization and is a valuable resource for streamlining our AWS infrastructure management.
Requirements
Ensure you have the following dependencies installed on your system:
- Terraform CLI
- Log in to Terraform Cloud:
  - We store our Terraform state files securely in Terraform Cloud
  - To use them locally, you need to log in by running:
    terraform login
  - Ask the team for login credentials
- Clone the DevOps repo:
  git clone https://github.com/richcontext/devops.git
- Open or create a new workspace:
# example
cd eks_commerce-engine-k8s-cluster
- Initialize the workspace:
terraform init
terraform plan
- Tests run in the CI pipeline via Trivy, and on every local commit via pre-commit
- Install pre-commit locally so it runs automatically on each commit:
  brew install pre-commit
  pre-commit install
- We want to develop and test changes locally to minimize builds once the PR is open
- Each commit will run lint checks and tests
Important: Make sure pre-commit is installed and running on the repo - see tests, under the Getting Started section
git pull origin main
git checkout -b <branch-name>
cd <workspace_name>
# initialize workspace
terraform init
# confirm changes locally
terraform plan
Recommended branch naming convention:
<name initials>/<Jira ticket #>/<feature name>
- Push your branch and open a PR
  - This will run testing and post proposed changes to the CI Summary
  - Any failures in testing will also be posted in the CI Summary
  - You can push new commits until it passes
- On the PR, comment terraform apply
  - This will trigger the CI to deploy the changes and confirm them in the CI Summary
- If you are deleting/removing a workspace:
  - On the PR, comment terraform destroy
  - It will run a destroy operation and confirm via a PR comment
  - Afterwards, manually delete the workspace in Terraform Cloud
    - We made this a manual step in case the destroy process is incomplete
- Confirm that your changes have deployed successfully
- If you have any issues with the deployment, feel free to alert the team for assistance
This section provides a breakdown of the featured Terraform workspaces, each designed to address specific infrastructure needs:
Purpose: Manages Elastic Container Registry (ECR) resources with for_each logic that loops over repository names
- Key Features:
  - Automates ECR repository creation
  - Implements policies for image tagging and lifecycle management
  - Makes it easy to add or remove repositories dynamically, line by line, via the repos.txt file (see the sketch below)
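As a rough illustration of that pattern (the resource names and the exact repos.txt handling are assumptions, not this repo's actual code), a for_each driven by a newline-delimited file can look like:

```hcl
# Hypothetical sketch: read repo names line by line from repos.txt
# and create one ECR repository per non-empty line.
locals {
  ecr_repos = toset([
    for line in split("\n", file("${path.module}/repos.txt")) :
    trimspace(line) if trimspace(line) != ""
  ])
}

resource "aws_ecr_repository" "this" {
  for_each             = local.ecr_repos
  name                 = each.key
  image_tag_mutability = "IMMUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }
}

# Example lifecycle policy: keep only the 20 most recent images per repo.
resource "aws_ecr_lifecycle_policy" "this" {
  for_each   = aws_ecr_repository.this
  repository = each.value.name

  policy = jsonencode({
    rules = [{
      rulePriority = 1
      description  = "Expire old images"
      selection = {
        tagStatus   = "any"
        countType   = "imageCountMoreThan"
        countNumber = 20
      }
      action = { type = "expire" }
    }]
  })
}
```

With this layout, adding a new repository is just appending a line to repos.txt and running terraform plan.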
Purpose: Centralizes ingress/egress access via prefix lists
- Key Features:
  - Facilitates VPC and network routing through prefix lists
  - Enhances security by allowing or denying traffic based on defined CIDR/IP ranges
  - Can be used to enforce VPN-only private access to critical resources like EKS, RDS, etc. (sketched below)
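For illustration only (the list name, CIDR, and variable are placeholders, not our real ranges), a managed prefix list referenced from a security group rule might look like:

```hcl
variable "rds_security_group_id" { type = string } # hypothetical input

# Hypothetical sketch: a managed prefix list of approved VPN CIDRs,
# referenced from a security group rule instead of hard-coded IPs.
resource "aws_ec2_managed_prefix_list" "vpn" {
  name           = "vpn-allowed-cidrs"
  address_family = "IPv4"
  max_entries    = 10

  entry {
    cidr        = "10.20.0.0/16" # example VPN CIDR, not the real range
    description = "Corporate VPN"
  }
}

resource "aws_security_group_rule" "rds_from_vpn" {
  type              = "ingress"
  from_port         = 5432
  to_port           = 5432
  protocol          = "tcp"
  prefix_list_ids   = [aws_ec2_managed_prefix_list.vpn.id]
  security_group_id = var.rds_security_group_id
}
```

Updating the prefix list entries then propagates to every rule that references the list, rather than editing each security group individually.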
Purpose: Manages user access controls, particularly GitHub access and Postgres access (via the Doppler secrets manager)
- Key Features:
  - Configures Postgres database roles and permissions based on employee ID
  - Configures GitHub organization access by groups and roles
  - Synchronizes user access between GitHub teams and Postgres roles
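A minimal sketch of how that syncing can be wired in Terraform, assuming the github and postgresql providers are configured elsewhere and that the var.users map and var.engineering_team_id shown here are hypothetical inputs:

```hcl
# Hypothetical sketch: one user map driving both GitHub team membership
# and Postgres roles, so access stays in sync from a single source.
# Provider configuration (github, postgresql) and the Doppler-sourced
# database credentials are assumed to live elsewhere in the workspace.
variable "users" {
  type = map(object({
    github_username = string
    github_role     = string # "member" or "maintainer"
  }))
}

variable "engineering_team_id" {
  type = string # GitHub team ID or slug (hypothetical input)
}

resource "github_team_membership" "engineering" {
  for_each = var.users
  team_id  = var.engineering_team_id
  username = each.value.github_username
  role     = each.value.github_role
}

resource "postgresql_role" "employee" {
  for_each = var.users
  name     = each.key # e.g. the employee ID
  login    = true
}
```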
Purpose: Provisions EC2 instances with pre-installation scripts.
- Key Features:
  - Creates an S3 bucket that stores the custom scripts and tools
  - Creates EC2 instances that run custom setup scripts during instance initialization
  - Ensures a consistent custom environment across deployed instances
  - Includes logic for deploying both Windows and Linux EC2 instances (a Linux-only sketch follows below)
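A simplified Linux-only sketch of that flow; the bucket name, AMI, and instance-profile variables are placeholders, not values from this repo:

```hcl
variable "linux_ami_id" { type = string }                  # hypothetical input
variable "instance_profile_with_s3_read" { type = string } # hypothetical input

# Hypothetical sketch: an S3 bucket holding setup scripts, pulled down
# and executed by the instance's user data at boot.
resource "aws_s3_bucket" "bootstrap" {
  bucket = "example-bootstrap-scripts" # placeholder name
}

resource "aws_s3_object" "setup_script" {
  bucket = aws_s3_bucket.bootstrap.id
  key    = "setup.sh"
  source = "${path.module}/scripts/setup.sh"
  etag   = filemd5("${path.module}/scripts/setup.sh")
}

resource "aws_instance" "linux" {
  ami                  = var.linux_ami_id
  instance_type        = "t3.micro"
  iam_instance_profile = var.instance_profile_with_s3_read

  user_data = <<-EOT
    #!/bin/bash
    aws s3 cp s3://${aws_s3_bucket.bootstrap.id}/setup.sh /tmp/setup.sh
    chmod +x /tmp/setup.sh
    /tmp/setup.sh
  EOT
}
```

The Windows variant would follow the same shape, with a PowerShell user_data script in place of the bash one.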
Purpose: Deploys an EKS cluster configured for GitOps workflows.
- VPC Creation: Establish a Virtual Private Cloud (VPC) alongside essential networking components necessary for hosting an Amazon Elastic Kubernetes Service (EKS) cluster.
- VPC Peering: Set up peering connections to link the VPC with other VPCs, facilitating access to protected resources like RDS databases.
- Cluster Initialization: Deploy an EKS cluster, complete with node groups and role-based access control (RBAC) permissions.
- Blueprint Deployment: Implement an EKS blueprint that includes AWS console-managed add-ons and service accounts/roles for critical services.
- GitOps Configuration: Set up GitOps workflows using ArgoCD, deploying manifests for applications and services
  - Custom Kubernetes manifests not managed by ArgoCD are synced via Terraform and hosted inside the provisioners directory
  - Environment Variables: Secret values required by manifests are securely injected via environment variables
- Automated Updates: Utilize GitOps principles to automate configuration updates to the EKS cluster, ensuring seamless and continuous integration and deployment
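For orientation, a generic cluster definition using the public terraform-aws-modules/eks module is sketched below; the actual workspace layers its own VPC, blueprint add-ons, and ArgoCD wiring on top, so treat the names and sizes as placeholders:

```hcl
variable "vpc_id" { type = string }                    # from the VPC created earlier
variable "private_subnet_ids" { type = list(string) }  # private subnets for the nodes

# Hypothetical sketch: a bare EKS cluster with one managed node group.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "commerce-engine-k8s-cluster" # placeholder name
  cluster_version = "1.29"

  vpc_id     = var.vpc_id
  subnet_ids = var.private_subnet_ids

  eks_managed_node_groups = {
    default = {
      instance_types = ["t3.large"]
      min_size       = 2
      max_size       = 5
      desired_size   = 2
    }
  }
}
```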
Purpose: Manages synchronization between production and staging environments in RDS.
- VPC Creation: Establish a Virtual Private Cloud (VPC) alongside essential networking components necessary for hosting the Amazon RDS clusters.
- VPC Peering: Set up peering connections to link the RDS VPC with other VPCs needing access to the private RDS clusters
- Production RDS Setup (3-rds_prod.tf): Deploys an Amazon RDS Aurora cluster tailored for production workloads, ensuring high availability, security, and performance.
- Staging RDS Setup (4-rds_staging.tf): Deploys a duplicated RDS Aurora cluster for staging (via a PROD snapshot), allowing for testing and validation before changes are applied to production (see the sketch below).
- Via the Doppler CLI, endpoints are updated in the application secrets upon deployment or any endpoint-changing update
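A condensed sketch of the snapshot-based staging clone, assuming a production cluster resource named aws_rds_cluster.prod in 3-rds_prod.tf and hypothetical staging networking variables:

```hcl
variable "staging_db_subnet_group" { type = string }      # hypothetical input
variable "staging_db_security_group_id" { type = string } # hypothetical input

# Find the most recent snapshot of the production cluster
# (aws_rds_cluster.prod is assumed to be defined in 3-rds_prod.tf).
data "aws_db_cluster_snapshot" "prod_latest" {
  db_cluster_identifier = aws_rds_cluster.prod.id
  most_recent           = true
}

# Restore that snapshot into a separate staging cluster.
resource "aws_rds_cluster" "staging" {
  cluster_identifier     = "staging-aurora" # placeholder name
  engine                 = aws_rds_cluster.prod.engine
  snapshot_identifier    = data.aws_db_cluster_snapshot.prod_latest.id
  db_subnet_group_name   = var.staging_db_subnet_group
  vpc_security_group_ids = [var.staging_db_security_group_id]
  skip_final_snapshot    = true
}
```

Restoring from a snapshot keeps staging data close to production without replicating live traffic, at the cost of staging only being as fresh as the last snapshot.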