diff --git a/patterns/blue-green-upgrade/README.md b/patterns/blue-green-upgrade/README.md
index b7b51eb261..4c28dde26d 100644
--- a/patterns/blue-green-upgrade/README.md
+++ b/patterns/blue-green-upgrade/README.md
@@ -8,7 +8,7 @@ We are leveraging [the existing EKS Blueprints Workloads GitHub repository sampl
## Table of content
-- [Blue/Green or Canary Amazon EKS clusters migration for stateless ArgoCD workloads](#bluegreen-or-canary-amazon-eks-clusters-migration-for-stateless-argocd-workloads)
+- [Blue/Green Migration](#bluegreen-migration)
- [Table of content](#table-of-content)
- [Project structure](#project-structure)
- [Prerequisites](#prerequisites)
@@ -25,8 +25,6 @@ We are leveraging [the existing EKS Blueprints Workloads GitHub repository sampl
- [Delete the Stack](#delete-the-stack)
- [Delete the EKS Cluster(s)](#delete-the-eks-clusters)
- [TL;DR](#tldr)
- - [Manual](#manual)
- - [Delete the environment stack](#delete-the-environment-stack)
- [Troubleshoot](#troubleshoot)
- [External DNS Ownership](#external-dns-ownership)
- [Check Route 53 Record status](#check-route-53-record-status)
@@ -53,7 +51,11 @@ In the GitOps workload repository, we have configured our applications deploymen
We have configured ExternalDNS add-ons in our two clusters to share the same Route53 Hosted Zone. The workloads in both clusters also share the same Route 53 DNS records, we rely on AWS Route53 weighted records to allow us to configure canary workload migration between our two EKS clusters.
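Route 53 answers weighted queries in proportion to each record's weight relative to the total, so shifting weights moves a corresponding share of traffic. A quick back-of-the-envelope illustration (the weights here are examples, not this pattern's defaults):

```shell
# Example Route 53 weights for the same DNS record pointing at each cluster
blue_weight=90
green_weight=10
total=$((blue_weight + green_weight))

# Approximate share of DNS queries answered by each record
echo "blue:  $((100 * blue_weight / total))%"   # blue:  90%
echo "green: $((100 * green_weight / total))%"  # green: 10%
```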
-Here we use the same GitOps workload configuration repository and adapt parameters with the `values.yaml`. We could also use different ArgoCD repository for each cluster, or use a new directory if we want to validate or test new deployment manifests with maybe additional features, configurations or to use with different Kubernetes add-ons (like changing ingress controller).
+We are leveraging the [gitops-bridge-argocd-bootstrap](https://github.com/gitops-bridge-dev/gitops-bridge-argocd-bootstrap-terraform) Terraform module, which allows us to dynamically provide metadata from Terraform to the ArgoCD instance deployed in the cluster. To do this, the module extracts all metadata from the [terraform-aws-eks-blueprints-addons](https://github.com/aws-ia/terraform-aws-eks-blueprints-addons) module, configured to create all resources except installing the addons' Helm charts. The addon installation is delegated to ArgoCD itself, using the [eks-blueprints-add-ons](https://github.com/aws-samples/eks-blueprints-add-ons/tree/main/argocd/bootstrap/control-plane/addons) git repository containing ArgoCD ApplicationSets for each supported addon.
+
+The gitops-bridge creates a secret in the EKS cluster containing all the metadata, which is dynamically consumed by the ArgoCD ApplicationSets at deployment time, so that their configuration adapts to our EKS cluster context.
+
+
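+As a sketch, the cluster secret created by the gitops-bridge looks roughly like the following; the annotation names match the ones consumed by the ApplicationSets added later in this diff (`addons_repo_url`, `addons_repo_path`, ...), while the concrete values are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: eks-blueprint-blue
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster   # marks it as an ArgoCD cluster secret
  annotations:
    addons_repo_url: "git@github.com:aws-samples/eks-blueprints-add-ons"
    addons_repo_path: "argocd/bootstrap/control-plane/addons"
    addons_repo_revision: "HEAD"
    cluster_name: "eks-blueprint-blue"
    aws_region: "us-west-2"
type: Opaque
```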
Our objective here is to show you how Application teams and Platform teams can configure their infrastructure and workloads so that application teams are able to deploy autonomously their workloads to the EKS clusters thanks to ArgoCD, and platform team can keep the control of migrating production workloads from one cluster to another without having to synchronized operations with applications teams, or asking them to build a complicated CD pipeline.
@@ -82,12 +84,13 @@ git clone https://github.com/aws-ia/terraform-aws-eks-blueprints.git
cd patterns/blue-green-upgrade/
```
-2. Copy the `terraform.tfvars.example` to `terraform.tfvars` on each `environment`, `eks-blue` and `eks-green` folders, and change region, hosted_zone_name, eks_admin_role_name according to your needs.
+2. Copy `terraform.tfvars.example` to `terraform.tfvars`, symlink it into each of the `environment`, `eks-blue` and `eks-green` folders, and change `region`, `hosted_zone_name` and `eks_admin_role_name` according to your needs.
```shell
-cp terraform.tfvars.example environment/terraform.tfvars
-cp terraform.tfvars.example eks-blue/terraform.tfvars
-cp terraform.tfvars.example eks-green/terraform.tfvars
+cp terraform.tfvars.example terraform.tfvars
+ln -s ../terraform.tfvars environment/terraform.tfvars
+ln -s ../terraform.tfvars eks-blue/terraform.tfvars
+ln -s ../terraform.tfvars eks-green/terraform.tfvars
```
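The copy-plus-symlink layout keeps all three stacks reading one shared variables file. A self-contained sanity check of that layout, run in a throwaway directory rather than the repository:

```shell
set -eu
tmp="$(mktemp -d)"
cd "$tmp"
mkdir environment eks-blue eks-green
touch terraform.tfvars                     # stands in for the copied terraform.tfvars
for d in environment eks-blue eks-green; do
  ln -s ../terraform.tfvars "$d/terraform.tfvars"
done

# Every stack directory resolves to the same file at the pattern root
for d in environment eks-blue eks-green; do
  readlink "$d/terraform.tfvars"           # prints ../terraform.tfvars three times
done
```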
- You will need to provide the `hosted_zone_name` for example `my-example.com`. Terraform will create a new hosted zone for the project with name: `${environment}.${hosted_zone_name}` so in our example `eks-blueprint.my-example.com`.
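The derived sub-zone name can be previewed with plain shell substitution (values are the example ones from this paragraph):

```shell
environment="eks-blueprint"        # var.environment_name default
hosted_zone_name="my-example.com"  # your parent zone

echo "${environment}.${hosted_zone_name}"   # eks-blueprint.my-example.com
```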
@@ -208,19 +211,19 @@ eks-blueprint-blue
We have configured both our clusters to configure the same [Amazon Route 53](https://aws.amazon.com/fr/route53/) Hosted Zones. This is done by having the same configuration of [ExternalDNS](https://github.com/kubernetes-sigs/external-dns) add-on in `main.tf`:
-This is the Terraform configuration to configure the ExternalDNS Add-on which is deployed by the Blueprint using ArgoCD.
+This is the Terraform configuration for the ExternalDNS add-on, which is deployed by the Blueprint using ArgoCD. We specify the Route 53 zone that ExternalDNS needs to monitor.
```
enable_external_dns = true
+ external_dns_route53_zone_arns = [data.aws_route53_zone.sub.arn]
+```
+
+We also configure the addons_metadata to provide more configuration to external-dns:
- external_dns_helm_config = {
- txtOwnerId = local.name
- zoneIdFilter = data.aws_route53_zone.sub.zone_id
- policy = "sync"
- awszoneType = "public"
- zonesCacheDuration = "1h"
- logLevel = "debug"
- }
+```
+addons_metadata = merge(
+  ...
+  {
+    external_dns_policy = "sync"
+  }
+)
```
- We use ExternalDNS in `sync` mode so that the controller can create but also remove DNS records accordingly to service or ingress objects creation.
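In the running controller, the `sync` policy ends up as plain external-dns flags, roughly like the container arguments below (the flag names are external-dns's own; the exact set depends on how the Helm chart renders them, and the zone ID is a placeholder):

```yaml
# Illustrative external-dns container args for this setup
args:
  - --source=service
  - --source=ingress
  - --policy=sync                       # create AND delete records to match cluster state
  - --txt-owner-id=eks-blueprint-blue   # ownership ID, distinct per cluster
  - --zone-id-filter=Z0123456789ABC     # placeholder hosted-zone ID
```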
@@ -349,52 +352,6 @@ Why doing this? When we remove an ingress object, we want the associated Kuberne
../tear-down.sh
```
-#### Manual
-
-1. If also deployed, delete your Karpenter provisioners
-
-this is safe to delete if no addons are deployed on Karpenter, which is the case here.
-If not we should separate the team-platform deployments which installed Karpenter provisioners in a separate ArgoCD Application to avoid any conflicts.
-
-```bash
-kubectl delete provisioners.karpenter.sh --all
-```
-
-2. Delete Workloads App of App
-
-```bash
-kubectl delete application workloads -n argocd
-```
-
-3. If also deployed, delete ecsdemo App of App
-
-```bash
-kubectl delete application ecsdemo -n argocd
-```
-
-Once every workload applications as been freed on AWS side, (this can take some times), we can then destroy our add-ons and terraform resources
-
-> Note: it can take time to deregister all load balancers, verify that you don't have any more AWS resources created by EKS prior to start destroying EKS with terraform.
-
-4. Destroy terraform resources
-
-```bash
-terraform apply -destroy -target="module.eks_cluster.module.kubernetes_addons" -auto-approve
-terraform apply -destroy -target="module.eks_cluster.module.eks" -auto-approve
-terraform apply -destroy -auto-approve
-```
-
-### Delete the environment stack
-
-If you have finish playing with this solution, and once you have destroyed the 2 EKS clusters, you can now delete the environment stack.
-
-```bash
-cd environment
-terraform apply -destroy -auto-approve
-```
-
-This will destroy the Route53 hosted zone, the Certificate manager certificate, the VPC with all it's associated resources.
-
## Troubleshoot
### External DNS Ownership
diff --git a/patterns/blue-green-upgrade/bootstrap/addons.yaml b/patterns/blue-green-upgrade/bootstrap/addons.yaml
new file mode 100644
index 0000000000..f9415677a6
--- /dev/null
+++ b/patterns/blue-green-upgrade/bootstrap/addons.yaml
@@ -0,0 +1,32 @@
+apiVersion: argoproj.io/v1alpha1
+kind: ApplicationSet
+metadata:
+ name: bootstrap-addons
+ namespace: argocd
+spec:
+ syncPolicy:
+ preserveResourcesOnDeletion: true
+ generators:
+ - clusters:
+ selector:
+ matchExpressions:
+ - key: akuity.io/argo-cd-cluster-name
+ operator: NotIn
+ values: [in-cluster]
+ template:
+ metadata:
+ name: 'bootstrap-addons'
+ spec:
+ project: default
+ source:
+ repoURL: '{{metadata.annotations.addons_repo_url}}'
+ path: '{{metadata.annotations.addons_repo_path}}'
+ targetRevision: '{{metadata.annotations.addons_repo_revision}}'
+ directory:
+ recurse: true
+ exclude: exclude/*
+ destination:
+ namespace: 'argocd'
+ name: '{{name}}'
+ syncPolicy:
+ automated: {}
diff --git a/patterns/blue-green-upgrade/bootstrap/workloads.yaml b/patterns/blue-green-upgrade/bootstrap/workloads.yaml
new file mode 100644
index 0000000000..73f2567cc5
--- /dev/null
+++ b/patterns/blue-green-upgrade/bootstrap/workloads.yaml
@@ -0,0 +1,67 @@
+apiVersion: argoproj.io/v1alpha1
+kind: ApplicationSet
+metadata:
+ name: bootstrap-workloads
+ namespace: argocd
+spec:
+ goTemplate: true
+ syncPolicy:
+ preserveResourcesOnDeletion: true
+ generators:
+ - matrix:
+ generators:
+ - clusters:
+ selector:
+ matchExpressions:
+ - key: akuity.io/argo-cd-cluster-name
+ operator: NotIn
+ values:
+ - in-cluster
+ - git:
+ repoURL: '{{.metadata.annotations.gitops_workloads_url}}'
+ revision: '{{.metadata.annotations.gitops_workloads_revision}}'
+ directories:
+ - path: '{{.metadata.annotations.gitops_workloads_path}}/*'
+ template:
+ metadata:
+ name: 'bootstrap-workloads-{{.name}}'
+ spec:
+ project: default
+ sources:
+ - repoURL: '{{.metadata.annotations.gitops_workloads_url}}'
+ targetRevision: '{{.metadata.annotations.gitops_workloads_revision}}'
+ ref: values
+ path: '{{.metadata.annotations.gitops_workloads_path}}'
+ helm:
+ releaseName: 'bootstrap-workloads-{{.name}}'
+ ignoreMissingValueFiles: true
+ values: |
+ "account": "{{.metadata.annotations.aws_account_id}}"
+ "clusterName": "{{.metadata.annotations.cluster_name}}"
+ "labels":
+ "env": "{{.metadata.annotations.env}}"
+ "region": "{{.metadata.annotations.aws_region}}"
+ "repoUrl": "{{.metadata.annotations.gitops_workloads_url}}"
+ "spec":
+ "source":
+ "repoURL": "{{.metadata.annotations.gitops_workloads_url}}"
+ "targetRevision": "{{.metadata.annotations.gitops_workloads_revision}}"
+ "blueprint": "terraform"
+ "clusterName": "{{.metadata.annotations.cluster_name}}"
+ "env": "{{.metadata.annotations.env}}"
+ "ingress":
+ "route53_weight": {{default "0" .metadata.annotations.route53_weight}}
+ "argocd_route53_weight": {{default "0" .metadata.annotations.argocd_route53_weight}}
+ "ecsfrontend_route53_weight": {{default "0" .metadata.annotations.ecsfrontend_route53_weight}}
+ "host": {{ default "" .metadata.annotations.eks_cluster_domain }}
+ "type": "{{.metadata.annotations.ingress_type}}"
+ "karpenterInstanceProfile": "{{.metadata.annotations.karpenter_node_instance_profile_name}}"
+ "target_group_arn": {{ default "" .metadata.annotations.target_group_arn }}
+ "external_lb_url": {{ if index .metadata.annotations "external_lb_dns" }} http://{{ .metadata.annotations.external_lb_dns }}{{ else }}{{ end }}
+ destination:
+ name: '{{.name}}'
+ syncPolicy:
+ automated: {}
+ syncOptions:
+ - CreateNamespace=true
+ - ServerSideApply=true # Big CRDs.
diff --git a/patterns/blue-green-upgrade/eks-blue/main.tf b/patterns/blue-green-upgrade/eks-blue/main.tf
index d13f2c4e16..b04dee0585 100644
--- a/patterns/blue-green-upgrade/eks-blue/main.tf
+++ b/patterns/blue-green-upgrade/eks-blue/main.tf
@@ -26,39 +26,34 @@ provider "helm" {
}
}
-provider "kubectl" {
- apply_retry_count = 10
- host = module.eks_cluster.eks_cluster_endpoint
- cluster_ca_certificate = base64decode(module.eks_cluster.cluster_certificate_authority_data)
- load_config_file = false
-
- exec {
- api_version = "client.authentication.k8s.io/v1beta1"
- command = "aws"
- args = ["eks", "get-token", "--cluster-name", module.eks_cluster.eks_cluster_id]
- }
-}
module "eks_cluster" {
source = "../modules/eks_cluster"
aws_region = var.aws_region
service_name = "blue"
- cluster_version = "1.25"
+ cluster_version = "1.26"
argocd_route53_weight = "100"
route53_weight = "100"
ecsfrontend_route53_weight = "100"
- environment_name = var.environment_name
- hosted_zone_name = var.hosted_zone_name
- eks_admin_role_name = var.eks_admin_role_name
- workload_repo_url = var.workload_repo_url
- workload_repo_secret = var.workload_repo_secret
- workload_repo_revision = var.workload_repo_revision
- workload_repo_path = var.workload_repo_path
+ environment_name = var.environment_name
+ hosted_zone_name = var.hosted_zone_name
+ eks_admin_role_name = var.eks_admin_role_name
+
+ aws_secret_manager_git_private_ssh_key_name = var.aws_secret_manager_git_private_ssh_key_name
+ argocd_secret_manager_name_suffix = var.argocd_secret_manager_name_suffix
+ ingress_type = var.ingress_type
+
+ gitops_addons_org = var.gitops_addons_org
+ gitops_addons_repo = var.gitops_addons_repo
+ gitops_addons_basepath = var.gitops_addons_basepath
+ gitops_addons_path = var.gitops_addons_path
+ gitops_addons_revision = var.gitops_addons_revision
- addons_repo_url = var.addons_repo_url
+ gitops_workloads_org = var.gitops_workloads_org
+ gitops_workloads_repo = var.gitops_workloads_repo
+ gitops_workloads_revision = var.gitops_workloads_revision
+ gitops_workloads_path = var.gitops_workloads_path
- iam_platform_user = var.iam_platform_user
- argocd_secret_manager_name_suffix = var.argocd_secret_manager_name_suffix
}
diff --git a/patterns/blue-green-upgrade/eks-blue/outputs.tf b/patterns/blue-green-upgrade/eks-blue/outputs.tf
index 7e166f24e2..06ac616086 100644
--- a/patterns/blue-green-upgrade/eks-blue/outputs.tf
+++ b/patterns/blue-green-upgrade/eks-blue/outputs.tf
@@ -3,13 +3,18 @@ output "eks_cluster_id" {
value = module.eks_cluster.eks_cluster_id
}
+output "configure_kubectl" {
+ description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
+ value = module.eks_cluster.configure_kubectl
+}
+
output "eks_blueprints_platform_teams_configure_kubectl" {
- description = "Configure kubectl for each Application Teams: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
+ description = "Configure kubectl for Platform Team: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
value = module.eks_cluster.eks_blueprints_platform_teams_configure_kubectl
}
output "eks_blueprints_dev_teams_configure_kubectl" {
- description = "Configure kubectl for each Application Teams: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
+ description = "Configure kubectl for each Dev Application Teams: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
value = module.eks_cluster.eks_blueprints_dev_teams_configure_kubectl
}
@@ -17,3 +22,14 @@ output "eks_blueprints_ecsdemo_teams_configure_kubectl" {
description = "Configure kubectl for each Application Teams: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
value = module.eks_cluster.eks_blueprints_ecsdemo_teams_configure_kubectl
}
+
+output "access_argocd" {
+ description = "ArgoCD Access"
+ value = module.eks_cluster.access_argocd
+}
+
+output "gitops_metadata" {
+ description = "GitOps Bridge metadata for the cluster"
+ value = module.eks_cluster.gitops_metadata
+ sensitive = true
+}
diff --git a/patterns/blue-green-upgrade/eks-blue/providers.tf b/patterns/blue-green-upgrade/eks-blue/providers.tf
index 68943de818..fac76269c2 100644
--- a/patterns/blue-green-upgrade/eks-blue/providers.tf
+++ b/patterns/blue-green-upgrade/eks-blue/providers.tf
@@ -1,5 +1,5 @@
terraform {
- required_version = ">= 1.0.1"
+ required_version = ">= 1.4.0"
required_providers {
aws = {
diff --git a/patterns/blue-green-upgrade/eks-blue/variables.tf b/patterns/blue-green-upgrade/eks-blue/variables.tf
index 98e8976dff..77416b8ed8 100644
--- a/patterns/blue-green-upgrade/eks-blue/variables.tf
+++ b/patterns/blue-green-upgrade/eks-blue/variables.tf
@@ -5,11 +5,17 @@ variable "aws_region" {
}
variable "environment_name" {
- description = "The name of Environment Infrastructure stack name, feel free to rename it. Used for cluster and VPC names."
+ description = "The name of Environment Infrastructure stack, feel free to rename it. Used for cluster and VPC names."
type = string
default = "eks-blueprint"
}
+variable "ingress_type" {
+ type = string
+ description = "Type of ingress to use (alb | nginx | ...). This parameter will be sent to ArgoCD via the gitops bridge"
+ default = "alb"
+}
+
variable "hosted_zone_name" {
type = string
description = "Route53 domain for the cluster."
@@ -22,44 +28,64 @@ variable "eks_admin_role_name" {
default = ""
}
-variable "workload_repo_url" {
+variable "aws_secret_manager_git_private_ssh_key_name" {
type = string
- description = "Git repo URL for the ArgoCD workload deployment"
- default = "https://github.com/aws-samples/eks-blueprints-workloads.git"
+ description = "Secret Manager secret name for hosting Github SSH-Key to Access private repository"
+ default = "github-blueprint-ssh-key"
}
-variable "workload_repo_secret" {
+variable "argocd_secret_manager_name_suffix" {
type = string
- description = "Secret Manager secret name for hosting Github SSH-Key to Access private repository"
- default = "github-blueprint-ssh-key"
+ description = "Name of secret manager secret for ArgoCD Admin UI Password"
+ default = "argocd-admin-secret"
}
-variable "workload_repo_revision" {
+variable "gitops_addons_org" {
type = string
- description = "Git repo revision in workload_repo_url for the ArgoCD workload deployment"
- default = "main"
+ description = "Git repository org/user containing the addons"
+ default = "git@github.com:aws-samples"
+}
+variable "gitops_addons_repo" {
+ type = string
+ description = "Git repository containing the addons"
+ default = "eks-blueprints-add-ons"
+}
+variable "gitops_addons_basepath" {
+ type = string
+ description = "Git repository base path for addons"
+ default = "argocd/"
+}
+variable "gitops_addons_path" {
+ type = string
+ description = "Git repository path for addons"
+ default = "argocd/bootstrap/control-plane/addons"
+}
+variable "gitops_addons_revision" {
+ type = string
+ description = "Git repository revision/branch/ref for addons"
+ default = "HEAD"
}
-variable "workload_repo_path" {
+variable "gitops_workloads_org" {
type = string
- description = "Git repo path in workload_repo_url for the ArgoCD workload deployment"
- default = "envs/dev"
+ description = "Git repository org/user containing the workloads"
+ default = "git@github.com:aws-samples"
}
-variable "addons_repo_url" {
+variable "gitops_workloads_repo" {
type = string
- description = "Git repo URL for the ArgoCD addons deployment"
- default = "https://github.com/aws-samples/eks-blueprints-add-ons.git"
+ description = "Git repository containing the workloads"
+ default = "eks-blueprints-workloads"
}
-variable "iam_platform_user" {
+variable "gitops_workloads_path" {
type = string
- description = "IAM user used as platform-user"
- default = ""
+ description = "Git repo path in workload_repo_url for the ArgoCD workload deployment"
+ default = "envs/dev"
}
-variable "argocd_secret_manager_name_suffix" {
+variable "gitops_workloads_revision" {
type = string
- description = "Name of secret manager secret for ArgoCD Admin UI Password"
- default = "argocd-admin-secret"
+ description = "Git repo revision in workload_repo_url for the ArgoCD workload deployment"
+ default = "main"
}
diff --git a/patterns/blue-green-upgrade/eks-green/main.tf b/patterns/blue-green-upgrade/eks-green/main.tf
index 7d4e0c900b..37fddfad3e 100644
--- a/patterns/blue-green-upgrade/eks-green/main.tf
+++ b/patterns/blue-green-upgrade/eks-green/main.tf
@@ -26,40 +26,34 @@ provider "helm" {
}
}
-provider "kubectl" {
- apply_retry_count = 10
- host = module.eks_cluster.eks_cluster_endpoint
- cluster_ca_certificate = base64decode(module.eks_cluster.cluster_certificate_authority_data)
- load_config_file = false
-
- exec {
- api_version = "client.authentication.k8s.io/v1beta1"
- command = "aws"
- args = ["eks", "get-token", "--cluster-name", module.eks_cluster.eks_cluster_id]
- }
-}
-
module "eks_cluster" {
source = "../modules/eks_cluster"
aws_region = var.aws_region
service_name = "green"
- cluster_version = "1.26" # Here, we deploy the cluster with the N+1 Kubernetes Version
+ cluster_version = "1.27" # Here, we deploy the cluster with the N+1 Kubernetes Version
argocd_route53_weight = "0" # We control with theses parameters how we send traffic to the workloads in the new cluster
route53_weight = "0"
ecsfrontend_route53_weight = "0"
- environment_name = var.environment_name
- hosted_zone_name = var.hosted_zone_name
- eks_admin_role_name = var.eks_admin_role_name
- workload_repo_url = var.workload_repo_url
- workload_repo_secret = var.workload_repo_secret
- workload_repo_revision = var.workload_repo_revision
- workload_repo_path = var.workload_repo_path
+ environment_name = var.environment_name
+ hosted_zone_name = var.hosted_zone_name
+ eks_admin_role_name = var.eks_admin_role_name
+
+ aws_secret_manager_git_private_ssh_key_name = var.aws_secret_manager_git_private_ssh_key_name
+ argocd_secret_manager_name_suffix = var.argocd_secret_manager_name_suffix
+ ingress_type = var.ingress_type
+
+ gitops_addons_org = var.gitops_addons_org
+ gitops_addons_repo = var.gitops_addons_repo
+ gitops_addons_basepath = var.gitops_addons_basepath
+ gitops_addons_path = var.gitops_addons_path
+ gitops_addons_revision = var.gitops_addons_revision
- addons_repo_url = var.addons_repo_url
+ gitops_workloads_org = var.gitops_workloads_org
+ gitops_workloads_repo = var.gitops_workloads_repo
+ gitops_workloads_revision = var.gitops_workloads_revision
+ gitops_workloads_path = var.gitops_workloads_path
- iam_platform_user = var.iam_platform_user
- argocd_secret_manager_name_suffix = var.argocd_secret_manager_name_suffix
}
diff --git a/patterns/blue-green-upgrade/eks-green/outputs.tf b/patterns/blue-green-upgrade/eks-green/outputs.tf
index 210da14f30..06ac616086 100644
--- a/patterns/blue-green-upgrade/eks-green/outputs.tf
+++ b/patterns/blue-green-upgrade/eks-green/outputs.tf
@@ -3,8 +3,13 @@ output "eks_cluster_id" {
value = module.eks_cluster.eks_cluster_id
}
+output "configure_kubectl" {
+ description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
+ value = module.eks_cluster.configure_kubectl
+}
+
output "eks_blueprints_platform_teams_configure_kubectl" {
- description = "Configure kubectl Platform Team: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
+ description = "Configure kubectl for Platform Team: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
value = module.eks_cluster.eks_blueprints_platform_teams_configure_kubectl
}
@@ -14,6 +19,17 @@ output "eks_blueprints_dev_teams_configure_kubectl" {
}
output "eks_blueprints_ecsdemo_teams_configure_kubectl" {
- description = "Configure kubectl for each ECSDEMO Application Teams: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
+ description = "Configure kubectl for each Application Teams: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
value = module.eks_cluster.eks_blueprints_ecsdemo_teams_configure_kubectl
}
+
+output "access_argocd" {
+ description = "ArgoCD Access"
+ value = module.eks_cluster.access_argocd
+}
+
+output "gitops_metadata" {
+ description = "GitOps Bridge metadata for the cluster"
+ value = module.eks_cluster.gitops_metadata
+ sensitive = true
+}
diff --git a/patterns/blue-green-upgrade/eks-green/providers.tf b/patterns/blue-green-upgrade/eks-green/providers.tf
index 30c08a8dfc..fac76269c2 100644
--- a/patterns/blue-green-upgrade/eks-green/providers.tf
+++ b/patterns/blue-green-upgrade/eks-green/providers.tf
@@ -1,5 +1,5 @@
terraform {
- required_version = ">= 1.4"
+ required_version = ">= 1.4.0"
required_providers {
aws = {
diff --git a/patterns/blue-green-upgrade/eks-green/variables.tf b/patterns/blue-green-upgrade/eks-green/variables.tf
index 98e8976dff..77416b8ed8 100644
--- a/patterns/blue-green-upgrade/eks-green/variables.tf
+++ b/patterns/blue-green-upgrade/eks-green/variables.tf
@@ -5,11 +5,17 @@ variable "aws_region" {
}
variable "environment_name" {
- description = "The name of Environment Infrastructure stack name, feel free to rename it. Used for cluster and VPC names."
+ description = "The name of Environment Infrastructure stack, feel free to rename it. Used for cluster and VPC names."
type = string
default = "eks-blueprint"
}
+variable "ingress_type" {
+ type = string
+ description = "Type of ingress to use (alb | nginx | ...). This parameter will be sent to ArgoCD via the gitops bridge"
+ default = "alb"
+}
+
variable "hosted_zone_name" {
type = string
description = "Route53 domain for the cluster."
@@ -22,44 +28,64 @@ variable "eks_admin_role_name" {
default = ""
}
-variable "workload_repo_url" {
+variable "aws_secret_manager_git_private_ssh_key_name" {
type = string
- description = "Git repo URL for the ArgoCD workload deployment"
- default = "https://github.com/aws-samples/eks-blueprints-workloads.git"
+ description = "Secret Manager secret name for hosting Github SSH-Key to Access private repository"
+ default = "github-blueprint-ssh-key"
}
-variable "workload_repo_secret" {
+variable "argocd_secret_manager_name_suffix" {
type = string
- description = "Secret Manager secret name for hosting Github SSH-Key to Access private repository"
- default = "github-blueprint-ssh-key"
+ description = "Name of secret manager secret for ArgoCD Admin UI Password"
+ default = "argocd-admin-secret"
}
-variable "workload_repo_revision" {
+variable "gitops_addons_org" {
type = string
- description = "Git repo revision in workload_repo_url for the ArgoCD workload deployment"
- default = "main"
+ description = "Git repository org/user containing the addons"
+ default = "git@github.com:aws-samples"
+}
+variable "gitops_addons_repo" {
+ type = string
+ description = "Git repository containing the addons"
+ default = "eks-blueprints-add-ons"
+}
+variable "gitops_addons_basepath" {
+ type = string
+ description = "Git repository base path for addons"
+ default = "argocd/"
+}
+variable "gitops_addons_path" {
+ type = string
+ description = "Git repository path for addons"
+ default = "argocd/bootstrap/control-plane/addons"
+}
+variable "gitops_addons_revision" {
+ type = string
+ description = "Git repository revision/branch/ref for addons"
+ default = "HEAD"
}
-variable "workload_repo_path" {
+variable "gitops_workloads_org" {
type = string
- description = "Git repo path in workload_repo_url for the ArgoCD workload deployment"
- default = "envs/dev"
+ description = "Git repository org/user containing the workloads"
+ default = "git@github.com:aws-samples"
}
-variable "addons_repo_url" {
+variable "gitops_workloads_repo" {
type = string
- description = "Git repo URL for the ArgoCD addons deployment"
- default = "https://github.com/aws-samples/eks-blueprints-add-ons.git"
+ description = "Git repository containing the workloads"
+ default = "eks-blueprints-workloads"
}
-variable "iam_platform_user" {
+variable "gitops_workloads_path" {
type = string
- description = "IAM user used as platform-user"
- default = ""
+ description = "Git repo path in workload_repo_url for the ArgoCD workload deployment"
+ default = "envs/dev"
}
-variable "argocd_secret_manager_name_suffix" {
+variable "gitops_workloads_revision" {
type = string
- description = "Name of secret manager secret for ArgoCD Admin UI Password"
- default = "argocd-admin-secret"
+ description = "Git repo revision in workload_repo_url for the ArgoCD workload deployment"
+ default = "main"
}
diff --git a/patterns/blue-green-upgrade/environment/main.tf b/patterns/blue-green-upgrade/environment/main.tf
index cdf3503a36..ee29c7803d 100644
--- a/patterns/blue-green-upgrade/environment/main.tf
+++ b/patterns/blue-green-upgrade/environment/main.tf
@@ -12,6 +12,8 @@ locals {
argocd_secret_manager_name = var.argocd_secret_manager_name_suffix
+ hosted_zone_name = var.hosted_zone_name
+
tags = {
Blueprint = local.name
GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
@@ -47,18 +49,18 @@ module "vpc" {
# Retrieve existing root hosted zone
data "aws_route53_zone" "root" {
- name = var.hosted_zone_name
+ name = local.hosted_zone_name
}
# Create Sub HostedZone four our deployment
resource "aws_route53_zone" "sub" {
- name = "${local.name}.${var.hosted_zone_name}"
+ name = "${local.name}.${local.hosted_zone_name}"
}
# Validate records for the new HostedZone
resource "aws_route53_record" "ns" {
zone_id = data.aws_route53_zone.root.zone_id
- name = "${local.name}.${var.hosted_zone_name}"
+ name = "${local.name}.${local.hosted_zone_name}"
type = "NS"
ttl = "30"
records = aws_route53_zone.sub.name_servers
@@ -68,17 +70,17 @@ module "acm" {
source = "terraform-aws-modules/acm/aws"
version = "~> 4.0"
- domain_name = "${local.name}.${var.hosted_zone_name}"
+ domain_name = "${local.name}.${local.hosted_zone_name}"
zone_id = aws_route53_zone.sub.zone_id
subject_alternative_names = [
- "*.${local.name}.${var.hosted_zone_name}"
+ "*.${local.name}.${local.hosted_zone_name}"
]
wait_for_validation = true
tags = {
- Name = "${local.name}.${var.hosted_zone_name}"
+ Name = "${local.name}.${local.hosted_zone_name}"
}
}
diff --git a/patterns/blue-green-upgrade/modules/eks_cluster/README.md b/patterns/blue-green-upgrade/modules/eks_cluster/README.md
index 23b10bc527..1148fdf654 100644
--- a/patterns/blue-green-upgrade/modules/eks_cluster/README.md
+++ b/patterns/blue-green-upgrade/modules/eks_cluster/README.md
@@ -41,7 +41,7 @@ The AWS resources created by the script are detailed bellow:
- Kube Proxy
- VPC CNI
- EBS CSI Driver
- - Kubernetes addon deployed half with terraform and half with dedicated [ArgoCD addon repo](https://github.com/aws-samples/eks-blueprints-add-ons)
+ - Kubernetes addon deployed half with terraform and half with dedicated [ArgoCD addon repo](https://github.com/aws-samples/eks-blueprints-add-ons/tree/main/argocd/bootstrap/control-plane/addons)
- Metrics server
- Vertical Pod Autoscaler
- Aws Load Balancer Controller
@@ -51,9 +51,12 @@ The AWS resources created by the script are detailed bellow:
- AWS for FluentBit
- AWS CloudWatch Metrics
- Kubecost
- - Kubernetes workloads (defined in a dedicated github repository repository)
+ - Kubernetes workloads (defined in a dedicated [GitHub repository](https://github.com/aws-samples/eks-blueprints-workloads/tree/main/envs/dev))
+ - team-platform (creates Karpenter provisioners)
- team-burnham
- burnham-ingress configured with weighted target groups
+ - burnham app deployed on Karpenter nodes
+ - ...
## Infrastructure Architecture
@@ -69,7 +72,7 @@ The following diagram represents the Infrastructure architecture being deployed
- A public AWS Route 53 Hosted Zone that will be used to create our project hosted zone. It will be provided via the Terraform variable `hosted_zone_name`
- Before moving to the next step, you will need to register a parent domain with AWS Route 53 (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-register.html) in case you don’t have one created yet.
- Accessing GitOps Private git repositories with SSH access requiring an SSH key for authentication. In this example our workloads repositories are stored in GitHub, you can see in GitHub documentation on how to [connect with SSH](https://docs.github.com/en/authentication/connecting-to-github-with-ssh).
- - The private ssh key value are supposed to be stored in AWS Secret Manager, by default in a secret named `github-blueprint-ssh-key`, but you can change it using the terraform variable `workload_repo_secret`
+ - The private SSH key is expected to be stored in AWS Secrets Manager, by default in a secret named `github-blueprint-ssh-key`, but you can change this with the Terraform variable `aws_secret_manager_git_private_ssh_key_name`
## Usage
@@ -81,7 +84,7 @@ terraform init
**2.** Create your SSH Key in Secret Manager
-Retrieve the ArgoUI password
+Once the secret is created, you should be able to retrieve it using:
```bash
aws secretsmanager get-secret-value \
@@ -122,7 +125,7 @@ Connect to the ArgoUI endpoint:
echo -n "https://"; kubectl get svc -n argocd argo-cd-argocd-server -o json | jq ".status.loadBalancer.ingress[0].hostname" -r
```
-Validate the certificate issue, and login with credentials admin /
+Accept the certificate warning, and log in with username **admin** and the password retrieved from AWS Secrets Manager.
**5.** Control Access to the Burnham ingress
@@ -133,4 +136,4 @@ curl -s $URL | grep CLUSTER_NAME | awk -F "|" '{print $4}'
## Cleanup
-See Cleanup section in main Readme.md
+See Cleanup section in main [Readme.md](../../README.md)
diff --git a/patterns/blue-green-upgrade/modules/eks_cluster/main.tf b/patterns/blue-green-upgrade/modules/eks_cluster/main.tf
index 6bc975ff7a..693ee92c5f 100644
--- a/patterns/blue-green-upgrade/modules/eks_cluster/main.tf
+++ b/patterns/blue-green-upgrade/modules/eks_cluster/main.tf
@@ -7,33 +7,27 @@ provider "aws" {
locals {
environment = var.environment_name
service = var.service_name
+ region = var.aws_region
- env = local.environment
+ env = local.service
name = "${local.environment}-${local.service}"
-
# Mapping
- hosted_zone_name = var.hosted_zone_name
- addons_repo_url = var.addons_repo_url
- workload_repo_secret = var.workload_repo_secret
- cluster_version = var.cluster_version
- argocd_secret_manager_name = var.argocd_secret_manager_name_suffix
- workload_repo_path = var.workload_repo_path
- workload_repo_url = var.workload_repo_url
- workload_repo_revision = var.workload_repo_revision
- eks_admin_role_name = var.eks_admin_role_name
- iam_platform_user = var.iam_platform_user
-
- metrics_server = true
- aws_load_balancer_controller = true
- karpenter = true
- aws_for_fluentbit = true
- cert_manager = true
- cloudwatch_metrics = true
- external_dns = true
- vpa = true
- kubecost = true
- argo_rollouts = true
+ hosted_zone_name = var.hosted_zone_name
+ ingress_type = var.ingress_type
+ aws_secret_manager_git_private_ssh_key_name = var.aws_secret_manager_git_private_ssh_key_name
+ cluster_version = var.cluster_version
+ argocd_secret_manager_name = var.argocd_secret_manager_name_suffix
+ eks_admin_role_name = var.eks_admin_role_name
+
+ gitops_workloads_url = "${var.gitops_workloads_org}/${var.gitops_workloads_repo}"
+ gitops_workloads_path = var.gitops_workloads_path
+ gitops_workloads_revision = var.gitops_workloads_revision
+
+ gitops_addons_url = "${var.gitops_addons_org}/${var.gitops_addons_repo}"
+ gitops_addons_basepath = var.gitops_addons_basepath
+ gitops_addons_path = var.gitops_addons_path
+ gitops_addons_revision = var.gitops_addons_revision
# Route 53 Ingress Weights
argocd_route53_weight = var.argocd_route53_weight
@@ -48,212 +42,100 @@ locals {
node_group_name = "managed-ondemand"
-
#---------------------------------------------------------------
# ARGOCD ADD-ON APPLICATION
#---------------------------------------------------------------
- #At this time (with new v5 addon repository), the Addons need to be managed by Terrform and not ArgoCD
- addons_application = {
- path = "chart"
- repo_url = local.addons_repo_url
- ssh_key_secret_name = local.workload_repo_secret
- add_on_application = true
+ aws_addons = {
+ enable_cert_manager = true
+ enable_aws_ebs_csi_resources = true # generate gp2 and gp3 storage classes for ebs-csi
+ #enable_aws_efs_csi_driver = true
+ #enable_aws_fsx_csi_driver = true
+ enable_aws_cloudwatch_metrics = true
+ #enable_aws_privateca_issuer = true
+ #enable_cluster_autoscaler = true
+ enable_external_dns = true
+ enable_external_secrets = true
+ enable_aws_load_balancer_controller = true
+ #enable_fargate_fluentbit = true
+ enable_aws_for_fluentbit = true
+ #enable_aws_node_termination_handler = true
+ enable_karpenter = true
+ #enable_velero = true
+ #enable_aws_gateway_api_controller = true
+ #enable_aws_secrets_store_csi_driver_provider = true
}
+ oss_addons = {
+ #enable_argo_rollouts = true
+ #enable_argo_workflows = true
+ #enable_cluster_proportional_autoscaler = true
+ #enable_gatekeeper = true
+ #enable_gpu_operator = true
+ enable_ingress_nginx = true
+ enable_kyverno = true
+ #enable_kube_prometheus_stack = true
+ enable_metrics_server = true
+ #enable_prometheus_adapter = true
+ #enable_secrets_store_csi_driver = true
+ #enable_vpa = true
+ #enable_foo = true # you can add any addon here, make sure to update the gitops repo with the corresponding application set
+ }
+ addons = merge(local.aws_addons, local.oss_addons, { kubernetes_version = local.cluster_version })
- #---------------------------------------------------------------
- # ARGOCD WORKLOAD APPLICATION
- #---------------------------------------------------------------
+ #----------------------------------------------------------------
+ # GitOps Bridge: define metadata to pass from Terraform to ArgoCD
+ #----------------------------------------------------------------
- workload_application = {
- path = local.workload_repo_path # <-- we could also to blue/green on the workload repo path like: envs/dev-blue / envs/dev-green
- repo_url = local.workload_repo_url
- target_revision = local.workload_repo_revision
- ssh_key_secret_name = local.workload_repo_secret
- add_on_application = false
- values = {
- labels = {
- env = local.env
- myapp = "myvalue"
- }
- spec = {
- source = {
- repoURL = local.workload_repo_url
- targetRevision = local.workload_repo_revision
- }
- blueprint = "terraform"
- clusterName = local.name
- karpenterInstanceProfile = module.karpenter.instance_profile_name
- env = local.env
- ingress = {
- type = "alb"
- host = local.eks_cluster_domain
- route53_weight = local.route53_weight # <-- You can control the weight of the route53 weighted records between clusters
- argocd_route53_weight = local.argocd_route53_weight
- }
- }
+ addons_metadata = merge(
+ try(module.eks_blueprints_addons.gitops_metadata, {}), # eks blueprints addons automatically exposes metadata
+ {
+ aws_cluster_name = module.eks.cluster_name
+ aws_region = local.region
+ aws_account_id = data.aws_caller_identity.current.account_id
+ aws_vpc_id = data.aws_vpc.vpc.id
+ cluster_endpoint = try(module.eks.cluster_endpoint, {})
+ env = local.env
+ },
+ {
+ argocd_password = bcrypt(data.aws_secretsmanager_secret_version.admin_password_version.secret_string)
+ aws_secret_manager_git_private_ssh_key_name = local.aws_secret_manager_git_private_ssh_key_name
+
+ gitops_workloads_url = local.gitops_workloads_url
+ gitops_workloads_path = local.gitops_workloads_path
+ gitops_workloads_revision = local.gitops_workloads_revision
+
+ addons_repo_url = local.gitops_addons_url
+ addons_repo_basepath = local.gitops_addons_basepath
+ addons_repo_path = local.gitops_addons_path
+ addons_repo_revision = local.gitops_addons_revision
+ },
+ {
+ eks_cluster_domain = local.eks_cluster_domain
+ external_dns_policy = "sync"
+ ingress_type = local.ingress_type
+ argocd_route53_weight = local.argocd_route53_weight
+ route53_weight = local.route53_weight
+ ecsfrontend_route53_weight = local.ecsfrontend_route53_weight
+ #target_group_arn = local.service == "blue" ? data.aws_lb_target_group.tg_blue.arn : data.aws_lb_target_group.tg_green.arn # <-- Add this line
+ # external_lb_dns = data.aws_lb.alb.dns_name
}
- }
+ )
#---------------------------------------------------------------
- # ARGOCD ECSDEMO APPLICATION
+ # Manifests for bootstrapping the cluster with addons & workloads
#---------------------------------------------------------------
- ecsdemo_application = {
- path = "multi-repo/argo-app-of-apps/dev"
- repo_url = local.workload_repo_url
- target_revision = local.workload_repo_revision
- ssh_key_secret_name = local.workload_repo_secret
- add_on_application = false
- values = {
- spec = {
- blueprint = "terraform"
- clusterName = local.name
- karpenterInstanceProfile = module.karpenter.instance_profile_name
-
- apps = {
- ecsdemoNodejs = {
-
- helm = {
- replicaCount = "9"
- nodeSelector = {
- "karpenter.sh/provisioner-name" = "default"
- }
- tolerations = [
- {
- key = "karpenter"
- operator = "Exists"
- effect = "NoSchedule"
- }
- ]
- topologyAwareHints = "true"
- topologySpreadConstraints = [
- {
- maxSkew = 1
- topologyKey = "topology.kubernetes.io/zone"
- whenUnsatisfiable = "DoNotSchedule"
- labelSelector = {
- matchLabels = {
- "app.kubernetes.io/name" = "ecsdemo-nodejs"
- }
- }
- }
- ]
- }
- }
-
- ecsdemoCrystal = {
-
- helm = {
- replicaCount = "9"
- nodeSelector = {
- "karpenter.sh/provisioner-name" = "default"
- }
- tolerations = [
- {
- key = "karpenter"
- operator = "Exists"
- effect = "NoSchedule"
- }
- ]
- topologyAwareHints = "true"
- topologySpreadConstraints = [
- {
- maxSkew = 1
- topologyKey = "topology.kubernetes.io/zone"
- whenUnsatisfiable = "DoNotSchedule"
- labelSelector = {
- matchLabels = {
- "app.kubernetes.io/name" = "ecsdemo-crystal"
- }
- }
- }
- ]
- }
- }
-
-
- ecsdemoFrontend = {
- repoURL = "https://github.com/allamand/ecsdemo-frontend"
- targetRevision = "main"
- helm = {
- image = {
- repository = "public.ecr.aws/seb-demo/ecsdemo-frontend"
- tag = "latest"
- }
- ingress = {
- enabled = "true"
- className = "alb"
- annotations = {
- "alb.ingress.kubernetes.io/scheme" = "internet-facing"
- "alb.ingress.kubernetes.io/group.name" = "ecsdemo"
- "alb.ingress.kubernetes.io/listen-ports" = "[{\\\"HTTPS\\\": 443}]"
- "alb.ingress.kubernetes.io/ssl-redirect" = "443"
- "alb.ingress.kubernetes.io/target-type" = "ip"
- "external-dns.alpha.kubernetes.io/set-identifier" = local.name
- "external-dns.alpha.kubernetes.io/aws-weight" = local.ecsfrontend_route53_weight
- }
- hosts = [
- {
- host = "frontend.${local.eks_cluster_domain}"
- paths = [
- {
- path = "/"
- pathType = "Prefix"
- }
- ]
- }
- ]
- }
- resources = {
- requests = {
- cpu = "1"
- memory = "256Mi"
- }
- limits = {
- cpu = "1"
- memory = "512Mi"
- }
- }
- autoscaling = {
- enabled = "true"
- minReplicas = "9"
- maxReplicas = "100"
- targetCPUUtilizationPercentage = "60"
- }
- nodeSelector = {
- "karpenter.sh/provisioner-name" = "default"
- }
- tolerations = [
- {
- key = "karpenter"
- operator = "Exists"
- effect = "NoSchedule"
- }
- ]
- topologySpreadConstraints = [
- {
- maxSkew = 1
- topologyKey = "topology.kubernetes.io/zone"
- whenUnsatisfiable = "DoNotSchedule"
- labelSelector = {
- matchLabels = {
- "app.kubernetes.io/name" = "ecsdemo-frontend"
- }
- }
- }
- ]
- }
- }
- }
- }
- }
+ argocd_apps = {
+ addons = file("${path.module}/../../bootstrap/addons.yaml")
+ workloads = file("${path.module}/../../bootstrap/workloads.yaml")
}
+
tags = {
Blueprint = local.name
GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
}
+
}
# Find the user currently in use by AWS
@@ -296,11 +178,14 @@ resource "aws_ec2_tag" "public_subnets" {
value = "shared"
}
-# Create Sub HostedZone four our deployment
+# Get the Hosted Zone for our deployment
data "aws_route53_zone" "sub" {
name = "${local.environment}.${local.hosted_zone_name}"
}
+################################################################################
+# AWS Secret Manager for argocd password
+################################################################################
data "aws_secretsmanager_secret" "argocd" {
name = "${local.argocd_secret_manager_name}.${local.environment}"
@@ -310,9 +195,13 @@ data "aws_secretsmanager_secret_version" "admin_password_version" {
secret_id = data.aws_secretsmanager_secret.argocd.id
}
+################################################################################
+# EKS Cluster
+################################################################################
+#tfsec:ignore:aws-eks-enable-control-plane-logging
module "eks" {
source = "terraform-aws-modules/eks/aws"
- version = "~> 19.16"
+ version = "~> 19.15.2"
cluster_name = local.name
cluster_version = local.cluster_version
@@ -341,7 +230,7 @@ module "eks" {
[
module.eks_blueprints_platform_teams.aws_auth_configmap_role,
{
- rolearn = module.karpenter.role_arn
+ rolearn = module.eks_blueprints_addons.karpenter.node_iam_role_arn
username = "system:node:{{EC2PrivateDNSName}}"
groups = [
"system:bootstrappers",
@@ -364,16 +253,14 @@ module "eks" {
})
}
-data "aws_iam_user" "platform_user" {
- count = local.iam_platform_user != "" ? 1 : 0
- user_name = local.iam_platform_user
-}
-
data "aws_iam_role" "eks_admin_role_name" {
count = local.eks_admin_role_name != "" ? 1 : 0
name = local.eks_admin_role_name
}
+################################################################################
+# EKS Blueprints Teams
+################################################################################
module "eks_blueprints_platform_teams" {
source = "aws-ia/eks-blueprints-teams/aws"
version = "~> 1.0"
@@ -386,7 +273,6 @@ module "eks_blueprints_platform_teams" {
# Define who can impersonate the team-platform Role
users = [
data.aws_caller_identity.current.arn,
- try(data.aws_iam_user.platform_user[0].arn, data.aws_caller_identity.current.arn),
try(data.aws_iam_role.eks_admin_role_name[0].arn, data.aws_caller_identity.current.arn),
]
cluster_arn = module.eks.cluster_arn
@@ -396,6 +282,7 @@ module "eks_blueprints_platform_teams" {
"elbv2.k8s.aws/pod-readiness-gate-inject" = "enabled",
"appName" = "platform-team-app",
"projectName" = "project-platform",
+ #"pod-security.kubernetes.io/enforce" = "restricted",
}
annotations = {
@@ -438,6 +325,7 @@ module "eks_blueprints_platform_teams" {
}
]
}
+
}
}
@@ -455,6 +343,7 @@ module "eks_blueprints_dev_teams" {
"elbv2.k8s.aws/pod-readiness-gate-inject" = "enabled",
"appName" = "burnham-team-app",
"projectName" = "project-burnham",
+ #"pod-security.kubernetes.io/enforce" = "restricted",
}
}
riker = {
@@ -619,109 +508,164 @@ module "eks_blueprints_ecsdemo_teams" {
tags = local.tags
}
-module "kubernetes_addons" {
- source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.32.1"
+################################################################################
+# GitOps Bridge: Private ssh keys for git
+################################################################################
+data "aws_secretsmanager_secret" "workload_repo_secret" {
+ name = local.aws_secret_manager_git_private_ssh_key_name
+}
- eks_cluster_id = module.eks.cluster_name
- eks_cluster_domain = local.eks_cluster_domain
+data "aws_secretsmanager_secret_version" "workload_repo_secret" {
+ secret_id = data.aws_secretsmanager_secret.workload_repo_secret.id
+}
- #---------------------------------------------------------------
- # ARGO CD ADD-ON
- #---------------------------------------------------------------
+resource "kubernetes_namespace" "argocd" {
+ depends_on = [module.eks_blueprints_addons]
+ metadata {
+ name = "argocd"
+ }
+}
- enable_argocd = true
- argocd_manage_add_ons = true # Indicates that ArgoCD is responsible for managing/deploying Add-ons.
+resource "kubernetes_secret" "git_secrets" {
- argocd_applications = {
- addons = local.addons_application
- workloads = local.workload_application
- ecsdemo = local.ecsdemo_application
+ for_each = {
+ git-addons = {
+ type = "git"
+ url = local.gitops_addons_url
+ # comment this out when using a public repo with the "https://github.com/xxx" syntax; keep it when using the "git@github.com:xxx" syntax
+ sshPrivateKey = data.aws_secretsmanager_secret_version.workload_repo_secret.secret_string
+ }
+ git-workloads = {
+ type = "git"
+ url = local.gitops_workloads_url
+ # comment this out when using a public repo with the "https://github.com/xxx" syntax; keep it when using the "git@github.com:xxx" syntax
+ sshPrivateKey = data.aws_secretsmanager_secret_version.workload_repo_secret.secret_string
+ }
}
+ metadata {
+ name = each.key
+ namespace = kubernetes_namespace.argocd.metadata[0].name
+ labels = {
+ "argocd.argoproj.io/secret-type" = "repo-creds"
+ }
+ }
+ data = each.value
+}
- # This example shows how to set default ArgoCD Admin Password using SecretsManager with Helm Chart set_sensitive values.
- argocd_helm_config = {
- set_sensitive = [
- {
- name = "configs.secret.argocdServerAdminPassword"
- value = bcrypt(data.aws_secretsmanager_secret_version.admin_password_version.secret_string)
- }
- ]
+################################################################################
+# GitOps Bridge: Bootstrap
+################################################################################
+module "gitops_bridge_bootstrap" {
+ source = "github.com/gitops-bridge-dev/gitops-bridge-argocd-bootstrap-terraform?ref=v2.0.0"
+
+ cluster = {
+ cluster_name = module.eks.cluster_name
+ environment = local.environment
+ metadata = local.addons_metadata
+ addons = local.addons
+ }
+ apps = local.argocd_apps
+
+ argocd = {
+ create_namespace = false
set = [
{
name = "server.service.type"
value = "LoadBalancer"
}
]
+ set_sensitive = [
+ {
+ name = "configs.secret.argocdServerAdminPassword"
+ value = bcrypt(data.aws_secretsmanager_secret_version.admin_password_version.secret_string)
+ }
+ ]
}
- #---------------------------------------------------------------
- # EKS Managed AddOns
- # https://aws-ia.github.io/terraform-aws-eks-blueprints/add-ons/
- #---------------------------------------------------------------
-
- enable_amazon_eks_coredns = true
- amazon_eks_coredns_config = {
- most_recent = true
- kubernetes_version = local.cluster_version
- resolve_conflicts = "OVERWRITE"
- }
+ depends_on = [kubernetes_secret.git_secrets]
+}
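
The bridge module above hands the `addons` flag map and the `metadata` map to ArgoCD by writing them onto the ArgoCD cluster secret, where the ApplicationSets in the addons repo select on them. A rough sketch of that labeling convention in plain Python — the exact label format is an assumption based on the gitops-bridge convention, not taken from this pattern:

```python
def cluster_secret_labels(addons: dict) -> dict:
    # Each enable_* flag becomes a string-valued label on the ArgoCD
    # cluster secret; ApplicationSets match clusters on these labels.
    return {k: str(v).lower() for k, v in addons.items() if k.startswith("enable_")}

addons = {
    "enable_karpenter": True,
    "enable_metrics_server": True,
    "kubernetes_version": "1.25",  # passed along as metadata, not a selector label
}
print(cluster_secret_labels(addons))
# → {'enable_karpenter': 'true', 'enable_metrics_server': 'true'}
```

An ApplicationSet for, say, Karpenter then only generates an Application for clusters whose secret carries `enable_karpenter: "true"`.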
- enable_amazon_eks_aws_ebs_csi_driver = true
- amazon_eks_aws_ebs_csi_driver_config = {
- most_recent = true
- kubernetes_version = local.cluster_version
- resolve_conflicts = "OVERWRITE"
- }
+################################################################################
+# EKS Blueprints Addons
+################################################################################
+module "eks_blueprints_addons" {
+ source = "aws-ia/eks-blueprints-addons/aws"
- enable_amazon_eks_kube_proxy = true
- amazon_eks_kube_proxy_config = {
- most_recent = true
- kubernetes_version = local.cluster_version
- resolve_conflicts = "OVERWRITE"
- }
+ cluster_name = module.eks.cluster_name
+ cluster_endpoint = module.eks.cluster_endpoint
+ cluster_version = module.eks.cluster_version
+ oidc_provider_arn = module.eks.oidc_provider_arn
- enable_amazon_eks_vpc_cni = true
- amazon_eks_vpc_cni_config = {
- most_recent = true
- kubernetes_version = local.cluster_version
- resolve_conflicts = "OVERWRITE"
- }
+ # Using GitOps Bridge
+ create_kubernetes_resources = false
- #---------------------------------------------------------------
- # ADD-ONS - You can add additional addons here
- # https://aws-ia.github.io/terraform-aws-eks-blueprints/add-ons/
- #---------------------------------------------------------------
+ eks_addons = {
- enable_metrics_server = local.metrics_server
- enable_vpa = local.vpa
- enable_aws_load_balancer_controller = local.aws_load_balancer_controller
- aws_load_balancer_controller_helm_config = {
- service_account = "aws-lb-sa"
+ # Removed for the workshop, as the EBS CSI driver takes long to provision (~15 min)
+ # aws-ebs-csi-driver = {
+ # most_recent = true
+ # service_account_role_arn = module.ebs_csi_driver_irsa.iam_role_arn
+ # }
+ coredns = {
+ most_recent = true
+ }
+ vpc-cni = {
+ # Deploy the VPC CNI addon before compute to ensure it is
+ # configured before data plane compute resources are created
+ # See README for further details
+ service_account_role_arn = module.vpc_cni_irsa.iam_role_arn
+ before_compute = true
+ #addon_version = "v1.12.2-eksbuild.1"
+ most_recent = true # To ensure access to the latest settings provided
+ configuration_values = jsonencode({
+ env = {
+ # Reference docs https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html
+ ENABLE_PREFIX_DELEGATION = "true"
+ WARM_PREFIX_TARGET = "1"
+ }
+ })
+ }
+ kube-proxy = {
+ most_recent = true
+ }
}
- enable_karpenter = local.karpenter
- enable_aws_for_fluentbit = local.aws_for_fluentbit
- enable_aws_cloudwatch_metrics = local.cloudwatch_metrics
-
- #to view the result : terraform state show 'module.kubernetes_addons.module.external_dns[0].module.helm_addon.helm_release.addon[0]'
- enable_external_dns = local.external_dns
-
- external_dns_helm_config = {
- txtOwnerId = local.name
- zoneIdFilter = data.aws_route53_zone.sub.zone_id # Note: this uses GitOpsBridge
- policy = "sync"
- logLevel = "debug"
+
+ # EKS Blueprints Addons
+ enable_cert_manager = try(local.aws_addons.enable_cert_manager, false)
+ #enable_aws_ebs_csi_resources = try(local.aws_addons.enable_aws_ebs_csi_resources, false)
+ enable_aws_efs_csi_driver = try(local.aws_addons.enable_aws_efs_csi_driver, false)
+ enable_aws_fsx_csi_driver = try(local.aws_addons.enable_aws_fsx_csi_driver, false)
+ enable_aws_cloudwatch_metrics = try(local.aws_addons.enable_aws_cloudwatch_metrics, false)
+ enable_aws_privateca_issuer = try(local.aws_addons.enable_aws_privateca_issuer, false)
+ enable_cluster_autoscaler = try(local.aws_addons.enable_cluster_autoscaler, false)
+ enable_external_dns = try(local.aws_addons.enable_external_dns, false)
+ external_dns_route53_zone_arns = [data.aws_route53_zone.sub.arn]
+ enable_external_secrets = try(local.aws_addons.enable_external_secrets, false)
+ enable_aws_load_balancer_controller = try(local.aws_addons.enable_aws_load_balancer_controller, false)
+ aws_load_balancer_controller = {
+ service_account_name = "aws-lb-sa"
}
- enable_kubecost = local.kubecost
- enable_cert_manager = local.cert_manager
- enable_argo_rollouts = local.argo_rollouts
+ enable_fargate_fluentbit = try(local.aws_addons.enable_fargate_fluentbit, false)
+ enable_aws_for_fluentbit = try(local.aws_addons.enable_aws_for_fluentbit, false)
+ enable_aws_node_termination_handler = try(local.aws_addons.enable_aws_node_termination_handler, false)
+ aws_node_termination_handler_asg_arns = [for asg in module.eks.self_managed_node_groups : asg.autoscaling_group_arn]
+ enable_karpenter = try(local.aws_addons.enable_karpenter, false)
+ enable_velero = try(local.aws_addons.enable_velero, false)
+ #velero = {
+ # s3_backup_location = "${module.velero_backup_s3_bucket.s3_bucket_arn}/backups"
+ #}
+ enable_aws_gateway_api_controller = try(local.aws_addons.enable_aws_gateway_api_controller, false)
+ #enable_aws_secrets_store_csi_driver_provider = try(local.enable_aws_secrets_store_csi_driver_provider, false)
+ tags = local.tags
}
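
The `ENABLE_PREFIX_DELEGATION = "true"` setting in the vpc-cni configuration above raises per-node pod density by assigning /28 prefixes (16 addresses each) instead of individual secondary IPs. A back-of-the-envelope capacity check, assuming the published EKS max-pods formula (the instance figures below are illustrative, not taken from this pattern):

```python
def max_pods(enis: int, ips_per_eni: int, prefix_delegation: bool = False) -> int:
    # Without prefix delegation, each ENI slot holds one secondary IP;
    # with it, each slot holds a /28 prefix, i.e. 16 usable addresses.
    slots = enis * (ips_per_eni - 1)
    if prefix_delegation:
        slots *= 16
    return slots + 2  # +2 for the host-network pods (aws-node, kube-proxy)

# e.g. an m5.large exposes 3 ENIs with 10 IPs per ENI
print(max_pods(3, 10))        # → 29
print(max_pods(3, 10, True))  # → 434
```

In practice AWS recommends still capping pods per node (110 is the common Kubernetes guidance) even when prefix delegation makes a much larger number addressable.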
+
module "ebs_csi_driver_irsa" {
source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
version = "~> 5.20"
- role_name_prefix = "${module.eks.cluster_name}-ebs-csi-driver-"
+ role_name_prefix = "${module.eks.cluster_name}-ebs-csi-"
attach_ebs_csi_policy = true
@@ -753,18 +697,3 @@ module "vpc_cni_irsa" {
tags = local.tags
}
-################################################################################
-# Karpenter
-################################################################################
-
-# Creates Karpenter native node termination handler resources and IAM instance profile
-module "karpenter" {
- source = "terraform-aws-modules/eks/aws//modules/karpenter"
- version = "~> 19.15.2"
-
- cluster_name = module.eks.cluster_name
- irsa_oidc_provider_arn = module.eks.oidc_provider_arn
- create_irsa = false # IRSA will be created by the kubernetes-addons module
-
- tags = local.tags
-}
diff --git a/patterns/blue-green-upgrade/modules/eks_cluster/outputs.tf b/patterns/blue-green-upgrade/modules/eks_cluster/outputs.tf
index dba52fc9fd..8c8c5f5a90 100644
--- a/patterns/blue-green-upgrade/modules/eks_cluster/outputs.tf
+++ b/patterns/blue-green-upgrade/modules/eks_cluster/outputs.tf
@@ -3,6 +3,11 @@ output "eks_cluster_id" {
value = module.eks.cluster_name
}
+output "configure_kubectl" {
+ description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
+ value = "aws eks --region ${var.aws_region} update-kubeconfig --name ${module.eks.cluster_name}"
+}
+
output "eks_blueprints_platform_teams_configure_kubectl" {
description = "Configure kubectl Platform Team: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
value = "aws eks --region ${var.aws_region} update-kubeconfig --name ${module.eks.cluster_name} --role-arn ${module.eks_blueprints_platform_teams.iam_role_arn}"
@@ -27,3 +32,27 @@ output "cluster_certificate_authority_data" {
description = "cluster_certificate_authority_data"
value = module.eks.cluster_certificate_authority_data
}
+
+output "access_argocd" {
+ description = "ArgoCD Access"
+ value = <<-EOT
+ export KUBECONFIG="/tmp/${module.eks.cluster_name}"
+ aws eks --region ${local.region} update-kubeconfig --name ${module.eks.cluster_name}
+ echo "ArgoCD URL: https://$(kubectl get svc -n argocd argo-cd-argocd-server -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')"
+ echo "ArgoCD Username: admin"
+ echo "ArgoCD Password: $(aws secretsmanager get-secret-value --secret-id argocd-admin-secret.${local.environment} --query SecretString --output text --region ${local.region})"
+ EOT
+}
+
+output "gitops_metadata" {
+ description = "export gitops_metadata"
+ value = local.addons_metadata
+ sensitive = true
+}
+
+# output "debug" {
+# description = "debug output"
+# #value = data.template_file.addons_template.rendered
+# value = data.template_file.workloads_template.rendered
+# #value = file("${path.module}/../../bootstrap/addons.yaml")
+# }
diff --git a/patterns/blue-green-upgrade/modules/eks_cluster/variables.tf b/patterns/blue-green-upgrade/modules/eks_cluster/variables.tf
index dbd4fca8fe..087f7fd1b3 100644
--- a/patterns/blue-green-upgrade/modules/eks_cluster/variables.tf
+++ b/patterns/blue-green-upgrade/modules/eks_cluster/variables.tf
@@ -3,22 +3,17 @@ variable "aws_region" {
type = string
default = "us-west-2"
}
+
variable "environment_name" {
description = "The name of Environment Infrastructure stack, feel free to rename it. Used for cluster and VPC names."
type = string
default = "eks-blueprint"
}
-variable "service_name" {
- description = "The name of the Suffix for the stack name"
+variable "ingress_type" {
type = string
- default = "blue"
-}
-
-variable "cluster_version" {
- description = "The Version of Kubernetes to deploy"
- type = string
- default = "1.25"
+ description = "Type of ingress to use (alb | nginx | ...). This parameter is passed to ArgoCD via the GitOps bridge"
+ default = "alb"
}
variable "hosted_zone_name" {
@@ -33,62 +28,94 @@ variable "eks_admin_role_name" {
default = ""
}
-variable "workload_repo_url" {
+variable "aws_secret_manager_git_private_ssh_key_name" {
type = string
- description = "Git repo URL for the ArgoCD workload deployment"
- default = "https://github.com/aws-samples/eks-blueprints-workloads.git"
+ description = "Secrets Manager secret name hosting the GitHub SSH key used to access private repositories"
+ default = "github-blueprint-ssh-key"
}
-variable "workload_repo_secret" {
+variable "argocd_secret_manager_name_suffix" {
type = string
- description = "Secret Manager secret name for hosting Github SSH-Key to Access private repository"
- default = "github-blueprint-ssh-key"
+ description = "Name of the Secrets Manager secret for the ArgoCD admin UI password"
+ default = "argocd-admin-secret"
}
-variable "workload_repo_revision" {
+variable "gitops_addons_org" {
type = string
- description = "Git repo revision in workload_repo_url for the ArgoCD workload deployment"
- default = "main"
+ description = "Git repository org/user containing the addons"
+ default = "git@github.com:aws-samples"
+}
+variable "gitops_addons_repo" {
+ type = string
+ description = "Git repository containing the addons"
+ default = "eks-blueprints-add-ons"
+}
+variable "gitops_addons_basepath" {
+ type = string
+ description = "Git repository base path for addons"
+ default = "argocd/"
+}
+variable "gitops_addons_path" {
+ type = string
+ description = "Git repository path for addons"
+ default = "argocd/bootstrap/control-plane/addons"
+}
+variable "gitops_addons_revision" {
+ type = string
+ description = "Git repository revision/branch/ref for addons"
+ default = "HEAD"
}
-variable "workload_repo_path" {
+variable "gitops_workloads_org" {
+ type = string
+ description = "Git repository org/user containing the workloads"
+ default = "git@github.com:aws-samples"
+}
+
+variable "gitops_workloads_repo" {
+ type = string
+ description = "Git repository containing the workloads"
+ default = "eks-blueprints-workloads"
+}
+
+variable "gitops_workloads_path" {
type = string
description = "Git repo path in workload_repo_url for the ArgoCD workload deployment"
default = "envs/dev"
}
-variable "addons_repo_url" {
+variable "gitops_workloads_revision" {
type = string
- description = "Git repo URL for the ArgoCD addons deployment"
- default = "https://github.com/aws-samples/eks-blueprints-add-ons.git"
+ description = "Git repo revision in gitops_workloads_url for the ArgoCD workload deployment"
+ default = "main"
}
-variable "iam_platform_user" {
+variable "service_name" {
+ description = "Suffix for the stack name (e.g. blue or green)"
type = string
- description = "IAM user used as platform-user"
- default = ""
+ default = "blue"
}
-variable "argocd_secret_manager_name_suffix" {
+variable "cluster_version" {
+ description = "The Version of Kubernetes to deploy"
type = string
- description = "Name of secret manager secret for ArgoCD Admin UI Password"
- default = "argocd-admin-secret"
+ default = "1.25"
}
variable "argocd_route53_weight" {
description = "The Route53 weighted records weight for argocd application"
type = string
- default = "0"
+ default = "100"
}
variable "ecsfrontend_route53_weight" {
description = "The Route53 weighted records weight for ecsdeo-frontend application"
type = string
- default = "0"
+ default = "100"
}
variable "route53_weight" {
description = "The Route53 weighted records weight for others application"
type = string
- default = "0"
+ default = "100"
}
diff --git a/patterns/blue-green-upgrade/modules/eks_cluster/versions.tf b/patterns/blue-green-upgrade/modules/eks_cluster/versions.tf
index 729454b581..eda2864d1e 100644
--- a/patterns/blue-green-upgrade/modules/eks_cluster/versions.tf
+++ b/patterns/blue-green-upgrade/modules/eks_cluster/versions.tf
@@ -1,10 +1,19 @@
terraform {
- required_version = ">= 1.0"
+ required_version = ">= 1.4.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.0.0"
}
+
+ kubernetes = {
+ source = "hashicorp/kubernetes"
+ version = "2.22.0"
+ }
+ template = {
+ source = "hashicorp/template"
+ version = ">= 2.2.0"
+ }
}
}
diff --git a/patterns/blue-green-upgrade/static/gitops-bridge.excalidraw.png b/patterns/blue-green-upgrade/static/gitops-bridge.excalidraw.png
new file mode 100644
index 0000000000..f90a3927ee
Binary files /dev/null and b/patterns/blue-green-upgrade/static/gitops-bridge.excalidraw.png differ
diff --git a/patterns/blue-green-upgrade/tear-down-applications.sh b/patterns/blue-green-upgrade/tear-down-applications.sh
new file mode 100755
index 0000000000..4e8090e010
--- /dev/null
+++ b/patterns/blue-green-upgrade/tear-down-applications.sh
@@ -0,0 +1,118 @@
+#!/bin/bash
+#set -e
+#set -x
+
+#export ARGOCD_PWD=$(aws secretsmanager get-secret-value --secret-id argocd-admin-secret.eks-blueprint --query SecretString --output text --region eu-west-3)
+#export ARGOCD_OPTS="--port-forward --port-forward-namespace argocd --grpc-web"
+#argocd login --port-forward --username admin --password $ARGOCD_PWD --insecure
+
+
+function delete_argocd_appset_except_pattern() {
+  # List the ApplicationSets to destroy
+  # Get the list of ArgoCD ApplicationSets and store them in an array
+ #applicationsets=($(kubectl get applicationset -A -o json | jq -r '.items[] | .metadata.namespace + "/" + .metadata.name'))
+ applicationsets=($(kubectl get applicationset -A -o json | jq -r '.items[] | .metadata.name'))
+
+ # Iterate over the applications and delete them
+ for app in "${applicationsets[@]}"; do
+ if [[ ! "$app" =~ $1 ]]; then
+ echo "Deleting applicationset: $app"
+ kubectl delete ApplicationSet -n argocd "$app" --cascade=orphan
+ else
+ echo "Skipping deletion of applicationset: $app (matches '$1')"
+ fi
+ done
+
+ # Wait for all non-matching ApplicationSets to be deleted
+ continue_process=true
+ while $continue_process; do
+ # Get the list of ArgoCD applications and store them in an array
+ applicationsets=($(kubectl get applicationset -A -o json | jq -r '.items[] | .metadata.name'))
+
+ still_have_application=false
+ # Check whether any non-matching ApplicationSets remain
+ for app in "${applicationsets[@]}"; do
+ if [[ ! "$app" =~ $1 ]]; then
+ echo "applicationset $app still exists"
+ still_have_application=true
+ fi
+ done
+ sleep 5
+ continue_process=$still_have_application
+ done
+ echo "No more applicationsets except $1"
+}
+
+function delete_argocd_app_except_pattern() {
+ # List all Applications to destroy
+ # Get the list of ArgoCD applications and store them in an array
+ #applications=($(argocd app list -o name))
+ applications=($(kubectl get application -A -o json | jq -r '.items[] | .metadata.name'))
+
+ # Iterate over the applications and delete them
+ for app in "${applications[@]}"; do
+ if [[ ! "$app" =~ $1 ]]; then
+ echo "Deleting application: $app"
+ kubectl -n argocd patch app "$app" -p '{"metadata": {"finalizers": ["resources-finalizer.argocd.argoproj.io"]}}' --type merge
+ kubectl -n argocd delete app "$app"
+ else
+ echo "Skipping deletion of application: $app (matches '$1')"
+ fi
+ done
+
+ # Wait for everything to delete
+ continue_process=true
+ while $continue_process; do
+ # Get the list of ArgoCD applications and store them in an array
+ #applications=($(argocd app list -o name))
+ applications=($(kubectl get application -A -o json | jq -r '.items[] | .metadata.name'))
+
+ still_have_application=false
+ # Check whether any non-matching Applications remain
+ for app in "${applications[@]}"; do
+ if [[ ! "$app" =~ $1 ]]; then
+ echo "application $app still exists"
+ still_have_application=true
+ fi
+ done
+ sleep 5
+ continue_process=$still_have_application
+ done
+ echo "No more applications except $1"
+}
+
+function wait_for_deletion() {
+ # Loop until all Ingress resources are deleted
+ while true; do
+ # Get the list of Ingress resources in the specified namespace
+ ingress_list=$(kubectl get ingress -A -o json)
+
+ # Check if there are no Ingress resources left
+ if [[ "$(echo "$ingress_list" | jq -r '.items | length')" -eq 0 ]]; then
+ echo "All Ingress resources have been deleted."
+ break
+ fi
+ echo "Waiting for Ingress resources to be deleted"
+ # Wait for a while before checking again (adjust the sleep duration as needed)
+ sleep 5
+ done
+}
+
+echo "#1. First, we delete all ApplicationSets (orphaning their Applications)"
+delete_argocd_appset_except_pattern "^nomatch"
+
+echo "#2. Now we delete all applications except addons"
+delete_argocd_app_except_pattern "^.*addon-|^.*argo-cd|^bootstrap-addons|^team-platform"
+
+echo "#3. Wait for objects to be deleted"
+wait_for_deletion
+
+
+echo "#4. Then we delete all addons except the load balancer controller, external-dns and Argo CD"
+delete_argocd_app_except_pattern "^.*load-balancer|^.*external-dns|^.*argo-cd|^bootstrap-addons"
+
+#delete_argocd_app_except_pattern "^.*load-balancer"
+
+echo "Tear Down Applications OK"
+
+set +x
diff --git a/patterns/blue-green-upgrade/tear-down.sh b/patterns/blue-green-upgrade/tear-down.sh
index 66fe877400..0fd665adeb 100755
--- a/patterns/blue-green-upgrade/tear-down.sh
+++ b/patterns/blue-green-upgrade/tear-down.sh
@@ -1,14 +1,40 @@
#!/bin/bash
-set -e
+#set -e
+set -x
-# First tear down Applications
-kubectl delete provisioners.karpenter.sh --all # this is ok if no addons are deployed on Karpenter.
-kubectl delete application workloads -n argocd || (echo "error deleting workloads application"; exit -1)
-kubectl delete application ecsdemo -n argocd || (echo "error deleting ecsdemo application" && exit -1)
+# Get the directory of the currently executing script
+SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+
+{ "$SCRIPT_DIR/tear-down-applications.sh"; } || {
+ echo "Error occurred while deleting applications"
+
+ # Ask the user if they want to continue
+ read -p "Do you want to continue with cluster deletion (y/n)? " choice
+ case "$choice" in
+ y|Y ) echo "Continuing with cluster deletion";;
+ * ) echo "Exiting..."; exit;;
+ esac
+}
+
+
+#terraform destroy -target="module.eks_cluster.module.gitops_bridge_bootstrap" -auto-approve
# Then Tear down the cluster
-terraform apply -destroy -target="module.eks_cluster.module.kubernetes_addons" -auto-approve || (echo "error deleting module.eks_cluster.module.kubernetes_addons" && exit -1)
-terraform apply -destroy -target="module.eks_cluster.module.eks" -auto-approve || (echo "error deleting module.eks_cluster.module.eks" && exit -1)
-terraform apply -destroy -auto-approve || (echo "error deleting terraform" && exit -1)
+terraform destroy -target="module.eks_cluster.module.kubernetes_addons" -auto-approve || (echo "error deleting module.eks_cluster.module.kubernetes_addons" && exit -1)
+terraform destroy -target="module.eks_cluster.module.eks_blueprints_platform_teams" -auto-approve || (echo "error deleting module.eks_cluster.module.eks_blueprints_platform_teams" && exit -1)
+terraform destroy -target="module.eks_cluster.module.eks_blueprints_dev_teams" -auto-approve || (echo "error deleting module.eks_cluster.module.eks_blueprints_dev_teams" && exit -1)
+terraform destroy -target="module.eks_cluster.module.eks_blueprints_ecsdemo_teams" -auto-approve || (echo "error deleting module.eks_cluster.module.eks_blueprints_ecsdemo_teams" && exit -1)
+
+terraform destroy -target="module.eks_cluster.module.gitops_bridge_bootstrap" -auto-approve || (echo "error deleting module.eks_cluster.module.gitops_bridge_bootstrap" && exit -1)
+terraform destroy -target="module.eks_cluster.module.gitops_bridge_metadata" -auto-approve || (echo "error deleting module.eks_cluster.module.gitops_bridge_metadata" && exit -1)
+
+terraform destroy -target="module.eks_cluster.module.eks_blueprints_addons" -auto-approve || (echo "error deleting module.eks_cluster.module.eks_blueprints_addons" && exit -1)
+
+terraform destroy -target="module.eks_cluster.module.ebs_csi_driver_irsa" -auto-approve
+terraform destroy -target="module.eks_cluster.module.vpc_cni_irsa" -auto-approve
+terraform destroy -target="module.eks_cluster.module.eks" -auto-approve || (echo "error deleting module.eks_cluster.module.eks" && exit -1)
+
+terraform destroy -auto-approve || (echo "error deleting terraform" && exit -1)
echo "Tear Down OK"
+set +x
diff --git a/patterns/blue-green-upgrade/terraform.tfvars.example b/patterns/blue-green-upgrade/terraform.tfvars.example
index 6ff5fcc5c3..4687c83db5 100644
--- a/patterns/blue-green-upgrade/terraform.tfvars.example
+++ b/patterns/blue-green-upgrade/terraform.tfvars.example
@@ -6,7 +6,8 @@ hosted_zone_name = "eks.mydomain.org" # your Existing Hosted Zone
eks_admin_role_name = "Admin" # Additional role admin in the cluster (usually the role I use in the AWS console)
# EKS Blueprint AddOns ArgoCD App of App repository
-addons_repo_url = "git@github.com:aws-samples/eks-blueprints-add-ons.git"
+gitops_bridge_repo_url = "git@github.com:gitops-bridge-dev/gitops-bridge-argocd-control-plane-template"
+gitops_bridge_repo_revision = "HEAD"
# EKS Blueprint Workloads ArgoCD App of App repository
workload_repo_url = "git@github.com:aws-samples/eks-blueprints-workloads.git"