feat: Upgrade bluegreen example using new gitops bridge (#1769)
allamand authored Sep 22, 2023
1 parent b932e7b commit b1386b8
Showing 21 changed files with 796 additions and 523 deletions.
83 changes: 20 additions & 63 deletions patterns/blue-green-upgrade/README.md
@@ -8,7 +8,7 @@ We are leveraging [the existing EKS Blueprints Workloads GitHub repository sampl
## Table of content

- [Blue/Green or Canary Amazon EKS clusters migration for stateless ArgoCD workloads](#bluegreen-or-canary-amazon-eks-clusters-migration-for-stateless-argocd-workloads)
- [Blue/Green Migration](#bluegreen-migration)
- [Table of content](#table-of-content)
- [Project structure](#project-structure)
- [Prerequisites](#prerequisites)
@@ -25,8 +25,6 @@ We are leveraging [the existing EKS Blueprints Workloads GitHub repository sampl
- [Delete the Stack](#delete-the-stack)
- [Delete the EKS Cluster(s)](#delete-the-eks-clusters)
- [TL;DR](#tldr)
- [Manual](#manual)
- [Delete the environment stack](#delete-the-environment-stack)
- [Troubleshoot](#troubleshoot)
- [External DNS Ownership](#external-dns-ownership)
- [Check Route 53 Record status](#check-route-53-record-status)
@@ -53,7 +51,11 @@ In the GitOps workload repository, we have configured our applications deploymen

We have configured ExternalDNS add-ons in our two clusters to share the same Route53 Hosted Zone. The workloads in both clusters also share the same Route 53 DNS records, we rely on AWS Route53 weighted records to allow us to configure canary workload migration between our two EKS clusters.
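Since both clusters publish the same record names, each ExternalDNS instance writes weighted records distinguished by its own `set_identifier`. As a rough sketch of what one such record looks like in Terraform terms (illustrative only: ExternalDNS creates these records for us at runtime, and the names and values below are assumptions):

```hcl
# Illustrative sketch, not part of the pattern's code: ExternalDNS manages
# records like this one based on ingress annotations.
resource "aws_route53_record" "app_blue" {
  zone_id        = data.aws_route53_zone.sub.zone_id # the shared hosted zone
  name           = "app.eks-blueprint.my-example.com"
  type           = "CNAME"
  ttl            = 5
  set_identifier = "eks-blueprint-blue" # one entry per cluster under the same name

  # Lower this weight toward 0 to drain traffic away from the blue cluster.
  weighted_routing_policy {
    weight = 100
  }

  records = ["k8s-blue-ingress-1234.elb.amazonaws.com"] # the blue cluster's load balancer
}
```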

Here we use the same GitOps workload configuration repository and adapt parameters with the `values.yaml`. We could also use a different ArgoCD repository for each cluster, or a new directory if we want to validate or test new deployment manifests with additional features, configurations, or different Kubernetes add-ons (like changing the ingress controller).
We are leveraging the [gitops-bridge-argocd-bootstrap](https://github.com/gitops-bridge-dev/gitops-bridge-argocd-bootstrap-terraform) Terraform module, which allows us to dynamically provide metadata from Terraform to the ArgoCD instance deployed in the cluster. To do this, the module extracts all metadata from the [terraform-aws-eks-blueprints-addons](https://github.com/aws-ia/terraform-aws-eks-blueprints-addons) module, which is configured to create all resources except installing the add-ons' Helm charts. The add-on installation is delegated to ArgoCD itself, using the [eks-blueprints-add-ons](https://github.com/aws-samples/eks-blueprints-add-ons/tree/main/argocd/bootstrap/control-plane/addons) git repository, which contains ArgoCD ApplicationSets for each supported add-on.

The gitops-bridge creates a secret in the EKS cluster containing all the metadata, which ArgoCD ApplicationSets consume dynamically at deployment time, so that their configuration adapts to our EKS cluster context.

<img src="static/gitops-bridge.excalidraw.png" width=100%>
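A minimal sketch of how the bridge is wired up in Terraform (the input names follow the gitops-bridge module's documented interface, but the values here are assumptions rather than this pattern's verbatim code):

```hcl
# Sketch only: the local values are assumed to be built from the
# terraform-aws-eks-blueprints-addons module outputs.
module "gitops_bridge_bootstrap" {
  source = "github.com/gitops-bridge-dev/gitops-bridge-argocd-bootstrap-terraform"

  cluster = {
    cluster_name = module.eks.cluster_name
    environment  = local.environment
    metadata     = local.addons_metadata # exposed as annotations on the ArgoCD cluster secret
    addons       = local.addons          # exposed as labels, used by ApplicationSet selectors
  }
}
```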

Our objective here is to show how application teams and platform teams can configure their infrastructure and workloads so that application teams are able to deploy their workloads autonomously to the EKS clusters, thanks to ArgoCD, while the platform team keeps control of migrating production workloads from one cluster to another without having to synchronize operations with application teams or ask them to build a complicated CD pipeline.

@@ -82,12 +84,13 @@ git clone https://github.com/aws-ia/terraform-aws-eks-blueprints.git
cd patterns/blue-green-upgrade/
```

2. Copy the `terraform.tfvars.example` to `terraform.tfvars` on each `environment`, `eks-blue` and `eks-green` folders, and change region, hosted_zone_name, eks_admin_role_name according to your needs.
2. Copy `terraform.tfvars.example` to `terraform.tfvars`, symlink it into each of the `environment`, `eks-blue`, and `eks-green` folders, and change `region`, `hosted_zone_name`, and `eks_admin_role_name` according to your needs.

```shell
cp terraform.tfvars.example environment/terraform.tfvars
cp terraform.tfvars.example eks-blue/terraform.tfvars
cp terraform.tfvars.example eks-green/terraform.tfvars
cp terraform.tfvars.example terraform.tfvars
ln -s ../terraform.tfvars environment/terraform.tfvars
ln -s ../terraform.tfvars eks-blue/terraform.tfvars
ln -s ../terraform.tfvars eks-green/terraform.tfvars
```

- You will need to provide the `hosted_zone_name`, for example `my-example.com`. Terraform will create a new hosted zone for the project with the name `${environment}.${hosted_zone_name}`, so in our example `eks-blueprint.my-example.com` (see the sketch below).
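In sketch form, the environment stack's zone setup looks roughly like this (resource names here are assumptions for illustration; the NS record is what delegates the subdomain from your existing parent zone):

```hcl
# Hypothetical sketch of the delegated hosted zone created by the environment stack.
data "aws_route53_zone" "parent" {
  name = var.hosted_zone_name # e.g. my-example.com
}

resource "aws_route53_zone" "sub" {
  name = "${var.environment_name}.${var.hosted_zone_name}" # e.g. eks-blueprint.my-example.com
}

# Delegate the subdomain by publishing its name servers in the parent zone.
resource "aws_route53_record" "ns" {
  zone_id = data.aws_route53_zone.parent.zone_id
  name    = aws_route53_zone.sub.name
  type    = "NS"
  ttl     = 300
  records = aws_route53_zone.sub.name_servers
}
```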
@@ -208,19 +211,19 @@ eks-blueprint-blue

We have configured both our clusters to use the same [Amazon Route 53](https://aws.amazon.com/fr/route53/) Hosted Zone. This is done by having the same configuration of the [ExternalDNS](https://github.com/kubernetes-sigs/external-dns) add-on in `main.tf`:

This is the Terraform configuration to configure the ExternalDNS Add-on which is deployed by the Blueprint using ArgoCD.
This is the Terraform configuration for the ExternalDNS add-on, which is deployed by the Blueprint using ArgoCD. We specify the Route 53 zone that ExternalDNS needs to monitor:

```
enable_external_dns = true
external_dns_route53_zone_arns = [data.aws_route53_zone.sub.arn]
```
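For reference, the `data.aws_route53_zone.sub` used above can be sketched as follows (assuming the zone created by the environment stack, as described earlier):

```hcl
data "aws_route53_zone" "sub" {
  name = "${var.environment_name}.${var.hosted_zone_name}" # e.g. eks-blueprint.my-example.com
}
```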

```
external_dns_helm_config = {
  txtOwnerId         = local.name
  zoneIdFilter       = data.aws_route53_zone.sub.zone_id
  policy             = "sync"
  awsZoneType        = "public"
  zonesCacheDuration = "1h"
  logLevel           = "debug"
}
```

We also configure the `addons_metadata` to provide more configuration to external-dns:

```
addons_metadata = merge(
  ...
  external_dns_policy = "sync"
```

- We use ExternalDNS in `sync` mode so that the controller can create, but also remove, DNS records in response to service or ingress object creation and deletion.
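Sync mode stays safe across two clusters because each ExternalDNS instance marks the records it owns with an ownership TXT record derived from its `txtOwnerId`, and never deletes records owned by the other cluster. A rough illustration in Terraform terms (ExternalDNS writes this itself, usually under a prefixed record name; everything below is an assumption for illustration only):

```hcl
# Illustrative only: sketch of the ownership marker ExternalDNS maintains.
resource "aws_route53_record" "ownership" {
  zone_id = data.aws_route53_zone.sub.zone_id
  name    = "app.eks-blueprint.my-example.com" # ExternalDNS typically prefixes this name
  type    = "TXT"
  ttl     = 300
  records = ["heritage=external-dns,external-dns/owner=eks-blueprint-blue"]
}
```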
@@ -349,52 +352,6 @@ Why do this? When we remove an ingress object, we want the associated Kuberne
../tear-down.sh
```

#### Manual

1. If also deployed, delete your Karpenter provisioners

This is safe to delete if no add-ons are deployed on Karpenter, which is the case here.
If not, we should separate the team-platform deployments that install Karpenter provisioners into a separate ArgoCD Application to avoid any conflicts.

```bash
kubectl delete provisioners.karpenter.sh --all
```

2. Delete Workloads App of App

```bash
kubectl delete application workloads -n argocd
```

3. If also deployed, delete ecsdemo App of App

```bash
kubectl delete application ecsdemo -n argocd
```

Once every workload application has been freed on the AWS side (this can take some time), we can destroy our add-ons and Terraform resources.

> Note: it can take time to deregister all load balancers. Verify that you don't have any remaining AWS resources created by EKS before starting to destroy EKS with Terraform.
4. Destroy Terraform resources

```bash
terraform apply -destroy -target="module.eks_cluster.module.kubernetes_addons" -auto-approve
terraform apply -destroy -target="module.eks_cluster.module.eks" -auto-approve
terraform apply -destroy -auto-approve
```

### Delete the environment stack

If you have finished playing with this solution, and once you have destroyed the 2 EKS clusters, you can delete the environment stack.

```bash
cd environment
terraform apply -destroy -auto-approve
```

This will destroy the Route53 hosted zone, the Certificate Manager certificate, and the VPC with all its associated resources.

## Troubleshoot

### External DNS Ownership
32 changes: 32 additions & 0 deletions patterns/blue-green-upgrade/bootstrap/addons.yaml
@@ -0,0 +1,32 @@
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: bootstrap-addons
  namespace: argocd
spec:
  syncPolicy:
    preserveResourcesOnDeletion: true
  generators:
    - clusters:
        selector:
          matchExpressions:
            - key: akuity.io/argo-cd-cluster-name
              operator: NotIn
              values: [in-cluster]
  template:
    metadata:
      name: 'bootstrap-addons'
    spec:
      project: default
      source:
        repoURL: '{{metadata.annotations.addons_repo_url}}'
        path: '{{metadata.annotations.addons_repo_path}}'
        targetRevision: '{{metadata.annotations.addons_repo_revision}}'
        directory:
          recurse: true
          exclude: exclude/*
      destination:
        namespace: 'argocd'
        name: '{{name}}'
      syncPolicy:
        automated: {}
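To see where values like `{{metadata.annotations.addons_repo_url}}` come from: the gitops-bridge registers each cluster in ArgoCD as a Kubernetes secret whose annotations carry the Terraform metadata. A minimal sketch in Terraform terms (the real secret is created by the bridge module; the names and values below are illustrative assumptions):

```hcl
# Hypothetical sketch: the gitops-bridge creates this for us, shown here only
# to illustrate where the ApplicationSet's annotations come from.
resource "kubernetes_secret_v1" "cluster" {
  metadata {
    name      = "eks-blueprint-blue"
    namespace = "argocd"
    labels = {
      "argocd.argoproj.io/secret-type" = "cluster" # ArgoCD treats this secret as a cluster
    }
    annotations = {
      addons_repo_url      = "https://github.com/aws-samples/eks-blueprints-add-ons"
      addons_repo_path     = "argocd/bootstrap/control-plane/addons"
      addons_repo_revision = "main"
    }
  }

  data = {
    name   = "eks-blueprint-blue"
    server = "https://kubernetes.default.svc"
  }
}
```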
67 changes: 67 additions & 0 deletions patterns/blue-green-upgrade/bootstrap/workloads.yaml
@@ -0,0 +1,67 @@
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: bootstrap-workloads
  namespace: argocd
spec:
  goTemplate: true
  syncPolicy:
    preserveResourcesOnDeletion: true
  generators:
    - matrix:
        generators:
          - clusters:
              selector:
                matchExpressions:
                  - key: akuity.io/argo-cd-cluster-name
                    operator: NotIn
                    values:
                      - in-cluster
          - git:
              repoURL: '{{.metadata.annotations.gitops_workloads_url}}'
              revision: '{{.metadata.annotations.gitops_workloads_revision}}'
              directories:
                - path: '{{.metadata.annotations.gitops_workloads_path}}/*'
  template:
    metadata:
      name: 'bootstrap-workloads-{{.name}}'
    spec:
      project: default
      sources:
        - repoURL: '{{.metadata.annotations.gitops_workloads_url}}'
          targetRevision: '{{.metadata.annotations.gitops_workloads_revision}}'
          ref: values
          path: '{{.metadata.annotations.gitops_workloads_path}}'
          helm:
            releaseName: 'bootstrap-workloads-{{.name}}'
            ignoreMissingValueFiles: true
            values: |
              "account": "{{.metadata.annotations.aws_account_id}}"
              "clusterName": "{{.metadata.annotations.cluster_name}}"
              "labels":
                "env": "{{.metadata.annotations.env}}"
              "region": "{{.metadata.annotations.aws_region}}"
              "repoUrl": "{{.metadata.annotations.gitops_workloads_url}}"
              "spec":
                "source":
                  "repoURL": "{{.metadata.annotations.gitops_workloads_url}}"
                  "targetRevision": "{{.metadata.annotations.gitops_workloads_revision}}"
                "blueprint": "terraform"
                "clusterName": "{{.metadata.annotations.cluster_name}}"
                "env": "{{.metadata.annotations.env}}"
                "ingress":
                  "route53_weight": {{default "0" .metadata.annotations.route53_weight}}
                  "argocd_route53_weight": {{default "0" .metadata.annotations.argocd_route53_weight}}
                  "ecsfrontend_route53_weight": {{default "0" .metadata.annotations.ecsfrontend_route53_weight}}
                  "host": {{ default "" .metadata.annotations.eks_cluster_domain }}
                  "type": "{{.metadata.annotations.ingress_type}}"
                "karpenterInstanceProfile": "{{.metadata.annotations.karpenter_node_instance_profile_name}}"
                "target_group_arn": {{ default "" .metadata.annotations.target_group_arn }}
                "external_lb_url": {{ if index .metadata.annotations "external_lb_dns" }} http://{{ .metadata.annotations.external_lb_dns }}{{ else }}{{ end }}
      destination:
        name: '{{.name}}'
      syncPolicy:
        automated: {}
        syncOptions:
          - CreateNamespace=true
          - ServerSideApply=true # Big CRDs.
41 changes: 18 additions & 23 deletions patterns/blue-green-upgrade/eks-blue/main.tf
@@ -26,39 +26,34 @@ provider "helm" {
}
}

provider "kubectl" {
apply_retry_count = 10
host = module.eks_cluster.eks_cluster_endpoint
cluster_ca_certificate = base64decode(module.eks_cluster.cluster_certificate_authority_data)
load_config_file = false

exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
args = ["eks", "get-token", "--cluster-name", module.eks_cluster.eks_cluster_id]
}
}
module "eks_cluster" {
source = "../modules/eks_cluster"

aws_region = var.aws_region
service_name = "blue"
cluster_version = "1.25"
cluster_version = "1.26"

argocd_route53_weight = "100"
route53_weight = "100"
ecsfrontend_route53_weight = "100"

environment_name = var.environment_name
hosted_zone_name = var.hosted_zone_name
eks_admin_role_name = var.eks_admin_role_name
workload_repo_url = var.workload_repo_url
workload_repo_secret = var.workload_repo_secret
workload_repo_revision = var.workload_repo_revision
workload_repo_path = var.workload_repo_path
environment_name = var.environment_name
hosted_zone_name = var.hosted_zone_name
eks_admin_role_name = var.eks_admin_role_name

aws_secret_manager_git_private_ssh_key_name = var.aws_secret_manager_git_private_ssh_key_name
argocd_secret_manager_name_suffix = var.argocd_secret_manager_name_suffix
ingress_type = var.ingress_type

gitops_addons_org = var.gitops_addons_org
gitops_addons_repo = var.gitops_addons_repo
gitops_addons_basepath = var.gitops_addons_basepath
gitops_addons_path = var.gitops_addons_path
gitops_addons_revision = var.gitops_addons_revision

addons_repo_url = var.addons_repo_url
gitops_workloads_org = var.gitops_workloads_org
gitops_workloads_repo = var.gitops_workloads_repo
gitops_workloads_revision = var.gitops_workloads_revision
gitops_workloads_path = var.gitops_workloads_path

iam_platform_user = var.iam_platform_user
argocd_secret_manager_name_suffix = var.argocd_secret_manager_name_suffix
}
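For comparison, the sibling `eks-green/main.tf` calls the same module with the green identity and different weights. A sketch following the pattern's convention (these values are assumed, since the green file is not shown in this diff):

```hcl
module "eks_cluster" {
  source = "../modules/eks_cluster"

  aws_region      = var.aws_region
  service_name    = "green"
  cluster_version = "1.27" # typically one Kubernetes version ahead of the blue cluster

  # Start at 0, then raise these to shift traffic toward the green cluster.
  argocd_route53_weight      = "0"
  route53_weight             = "0"
  ecsfrontend_route53_weight = "0"

  # ...remaining inputs identical to eks-blue...
}
```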
20 changes: 18 additions & 2 deletions patterns/blue-green-upgrade/eks-blue/outputs.tf
@@ -3,17 +3,33 @@ output "eks_cluster_id" {
value = module.eks_cluster.eks_cluster_id
}

output "configure_kubectl" {
description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
value = module.eks_cluster.configure_kubectl
}

output "eks_blueprints_platform_teams_configure_kubectl" {
description = "Configure kubectl for each Application Teams: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
description = "Configure kubectl for Platform Team: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
value = module.eks_cluster.eks_blueprints_platform_teams_configure_kubectl
}

output "eks_blueprints_dev_teams_configure_kubectl" {
description = "Configure kubectl for each Application Teams: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
description = "Configure kubectl for each Dev Application Teams: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
value = module.eks_cluster.eks_blueprints_dev_teams_configure_kubectl
}

output "eks_blueprints_ecsdemo_teams_configure_kubectl" {
description = "Configure kubectl for each Application Teams: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
value = module.eks_cluster.eks_blueprints_ecsdemo_teams_configure_kubectl
}

output "access_argocd" {
description = "ArgoCD Access"
value = module.eks_cluster.access_argocd
}

output "gitops_metadata" {
description = "export gitops_metadata"
value = module.eks_cluster.gitops_metadata
sensitive = true
}
2 changes: 1 addition & 1 deletion patterns/blue-green-upgrade/eks-blue/providers.tf
@@ -1,5 +1,5 @@
terraform {
required_version = ">= 1.0.1"
required_version = ">= 1.4.0"

required_providers {
aws = {