feat: Gitops bridge multi cluster hub-spoke (#1818)
Signed-off-by: Carlos Santana <[email protected]>
csantanapr authored Nov 3, 2023
1 parent 5f5e6cb commit 2eae426
Showing 22 changed files with 1,227 additions and 3 deletions.
7 changes: 7 additions & 0 deletions docs/patterns/gitops-multi-cluster-hub-spoke-argocd.md
@@ -0,0 +1,7 @@
---
title: GitOps Multi-Cluster Hub-Spoke Topology (ArgoCD)
---

{%
include-markdown "../../patterns/gitops/multi-cluster-hub-spoke-argocd/README.md"
%}
6 changes: 3 additions & 3 deletions patterns/gitops/getting-started-argocd/README.md
@@ -2,7 +2,7 @@

This tutorial guides you through deploying an Amazon EKS cluster with addons configured via ArgoCD, employing the [GitOps Bridge Pattern](https://github.com/gitops-bridge-dev).

- <img src="static/gitops-bridge.drawio.png" width=100%>
+ <img src="https://raw.githubusercontent.com/aws-ia/terraform-aws-eks-blueprints/main/patterns/gitops/getting-started-argocd/static/gitops-bridge.drawio.png" width=100%>


The [GitOps Bridge Pattern](https://github.com/gitops-bridge-dev) enables Kubernetes administrators to utilize Infrastructure as Code (IaC) and GitOps tools for deploying Kubernetes Addons and Workloads. Addons often depend on Cloud resources that are external to the cluster. The configuration metadata for these external resources is required by the Addons' Helm charts. While IaC is used to create these cloud resources, it is not used to install the Helm charts. Instead, the IaC tool stores this metadata either within GitOps resources in the cluster or in a Git repository. The GitOps tool then extracts these metadata values and passes them to the Helm chart during the Addon installation process. This mechanism forms the bridge between IaC and GitOps, hence the term "GitOps Bridge."
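For illustration, the bridge metadata typically lands in an ArgoCD cluster secret as annotations. A minimal sketch of what such a secret can look like (the annotation keys and values below are illustrative assumptions, not taken from this pattern's actual output; the `argocd.argoproj.io/secret-type: cluster` label is ArgoCD's standard convention for declarative cluster registration):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: in-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster   # marks this Secret as an ArgoCD cluster
  annotations:
    # illustrative bridge metadata written by the IaC tool, read later by the GitOps tool
    aws_region: us-west-2
    aws_load_balancer_controller_iam_role_arn: arn:aws:iam::111122223333:role/example
stringData:
  name: in-cluster
  server: https://kubernetes.default.svc
```

The GitOps tool can then template these annotation values into the addons' Helm chart values at install time.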
@@ -38,7 +38,7 @@
terraform apply -target="module.vpc" -auto-approve
terraform apply -target="module.eks" -auto-approve
terraform apply -auto-approve
```
- Retrieve `kubectl` config, then execute the output command:
+ To retrieve `kubectl` config, execute the terraform output command:
```shell
terraform output -raw configure_kubectl
```
@@ -110,7 +110,7 @@
Wait until all the ArgoCD applications' `HEALTH STATUS` is `Healthy`.
Use `Ctrl+C` or `Cmd+C` to exit the `watch` command. ArgoCD Applications
can take a couple of minutes to reach the `Healthy` status.
```shell
- watch kubectl get applications -n argocd
+ kubectl get applications -n argocd -w
```
The expected output should look like the following:
```text
191 changes: 191 additions & 0 deletions patterns/gitops/multi-cluster-hub-spoke-argocd/README.md
@@ -0,0 +1,191 @@
# Multi-Cluster centralized hub-spoke topology

This tutorial guides you through deploying an Amazon EKS cluster with addons configured via ArgoCD in a Multi-Cluster Hub-Spoke topology, employing the [GitOps Bridge Pattern](https://github.com/gitops-bridge-dev).

<img src="https://raw.githubusercontent.com/aws-ia/terraform-aws-eks-blueprints/main/patterns/gitops/multi-cluster-hub-spoke-argocd/static/gitops-bridge-multi-cluster-hup-spoke.drawio.png" width=100%>


This example deploys ArgoCD on the Hub cluster (i.e. the management/control-plane cluster).
The spoke clusters are registered as remote clusters in the Hub cluster's ArgoCD.
The ArgoCD instance on the Hub cluster deploys addons and workloads to the spoke clusters.

Each spoke cluster receives an app-of-apps ArgoCD Application named `workloads-${env}`.

## Prerequisites
Before you begin, make sure you have the following command line tools installed:
- git
- terraform
- kubectl
- argocd

## (Optional) Fork the GitOps git repositories
See the appendix section [Fork GitOps Repositories](#fork-gitops-repositories) for more info on the terraform variables to override.

## Deploy the Hub EKS Cluster
Change directory to `hub`
```shell
cd hub
```
Initialize Terraform and deploy the EKS cluster:
```shell
terraform init
terraform apply -target="module.vpc" -auto-approve
terraform apply -target="module.eks" -auto-approve
terraform apply -auto-approve
```
To retrieve `kubectl` config, execute the terraform output command:
```shell
terraform output -raw configure_kubectl
```
The expected output has two lines, which you run in your terminal:
```text
export KUBECONFIG="/tmp/hub-spoke"
aws eks --region us-west-2 update-kubeconfig --name getting-started-gitops --alias hub
```
>The first line sets the `KUBECONFIG` environment variable to a temporary file
that includes the cluster name. The second line uses the `aws` CLI to populate
that temporary file with the `kubectl` configuration. This approach offers the
advantage of not altering your existing `kubectl` context, allowing you to work
in other terminal windows without interference.
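The same isolation can be reproduced by hand. As a minimal sketch (pure shell, no cluster required), point `KUBECONFIG` at a throwaway file yourself:

```shell
# Use a throwaway kubeconfig so existing contexts in ~/.kube/config stay untouched.
export KUBECONFIG="$(mktemp /tmp/hub-spoke.XXXXXX)"

# Any subsequent `aws eks update-kubeconfig ...` (as printed by the Terraform
# output above) now writes credentials only into this temporary file.
echo "kubeconfig in use: $KUBECONFIG"
```

When the terminal session ends, the temporary file is simply abandoned and your default contexts are unchanged.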

### Monitor GitOps Progress for Addons
Wait until all the ArgoCD applications' `HEALTH STATUS` is `Healthy`.
Use `Ctrl+C` or `Cmd+C` to exit the watch. ArgoCD Applications
can take a couple of minutes to reach the `Healthy` status.
```shell
kubectl --context hub get applications -n argocd -w
```
The expected output should look like the following:
```text
NAME SYNC STATUS HEALTH STATUS
addon-in-cluster-argo-cd Synced Healthy
addon-in-cluster-aws-load-balancer-controller Synced Healthy
addon-in-cluster-metrics-server Synced Healthy
cluster-addons Synced Healthy
```

## (Optional) Access ArgoCD
Accessing the ArgoCD UI is entirely optional. If you want to do so,
run the commands shown in the Terraform output, as in the example below:
```shell
terraform output -raw access_argocd
```
The expected output should contain the `kubectl` configuration, followed by `kubectl` commands to retrieve
the URL, username, and password used to log in to the ArgoCD UI or CLI:
```text
echo "ArgoCD Username: admin"
echo "ArgoCD Password: $(kubectl --context hub get secrets argocd-initial-admin-secret -n argocd --template="{{index .data.password | base64decode}}")"
echo "ArgoCD URL: https://$(kubectl --context hub get svc -n argocd argo-cd-argocd-server -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')"
```


## Verify that the ArgoCD Service Accounts have the IRSA annotation
```shell
kubectl --context hub get sa -n argocd argocd-application-controller -o json | jq '.metadata.annotations."eks.amazonaws.com/role-arn"'
kubectl --context hub get sa -n argocd argocd-server -o json | jq '.metadata.annotations."eks.amazonaws.com/role-arn"'
```
The output should match the ARN of the IAM Role that will assume the IAM Roles in the spoke/remote clusters:
```text
arn:aws:iam::0123456789:role/argocd-hub-0123abc..
arn:aws:iam::0123456789:role/argocd-hub-0123abc..
```

## Deploy the Spoke EKS Clusters
Use the `deploy.sh` script to create a Terraform workspace, initialize Terraform, and deploy each EKS cluster:
```shell
cd ../spokes
./deploy.sh dev
./deploy.sh staging
./deploy.sh prod
```
Each environment uses its own Terraform workspace.
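The script is roughly equivalent to selecting (or creating) the environment's workspace and applying. A hypothetical sketch of that flow, not the actual contents of `deploy.sh`:

```shell
# Hypothetical equivalent of ./deploy.sh <env> (the real script may differ).
deploy_env() {
  local env="${1:?usage: deploy_env <dev|staging|prod>}"
  terraform workspace select "$env" 2>/dev/null || terraform workspace new "$env"
  terraform init
  terraform apply -auto-approve
}

# deploy_env dev
```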

To retrieve `kubectl` config, execute the terraform output command:
```shell
terraform workspace select dev
terraform output -raw configure_kubectl
```
```shell
terraform workspace select staging
terraform output -raw configure_kubectl
```
```shell
terraform workspace select prod
terraform output -raw configure_kubectl
```
### Verify that the ArgoCD Cluster Secrets for the Spokes have the correct IAM Role to be assumed by the Hub Cluster
```shell
for i in dev staging prod ; do echo $i && kubectl --context hub get secret -n argocd spoke-$i --template='{{index .data.config | base64decode}}' ; done
```
The output has an `awsAuthConfig` section with the `clusterName` and the `roleARN` that has write access to the spoke cluster:
```json
{
  "tlsClientConfig": {
    "insecure": false,
    "caData": "LS0tL...."
  },
  "awsAuthConfig": {
    "clusterName": "hub-spoke-dev",
    "roleARN": "arn:aws:iam::0123456789:role/hub-spoke-dev-argocd-spoke"
  }
}
```
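To pull just the `roleARN` out of each spoke's config without needing `jq`, a small `sed` helper works. This is a sketch (the `role_arn_from_config` helper is ours, not part of the pattern):

```shell
# Extract the value of "roleARN" from an ArgoCD cluster-secret config JSON on stdin.
role_arn_from_config() {
  sed -n 's/.*"roleARN": *"\([^"]*\)".*/\1/p'
}

# Live usage:
#   for i in dev staging prod; do
#     kubectl --context hub get secret -n argocd spoke-$i \
#       --template='{{index .data.config | base64decode}}' | role_arn_from_config
#   done
```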
### Verify the Addons on Spoke Clusters
Verify that the addons are ready:
```shell
for i in dev staging prod ; do echo $i && kubectl --context $i get deployment -n kube-system ; done
```
### Monitor GitOps Progress for Workloads from Hub Cluster (run on Hub Cluster context)
Watch until **all** the Workloads ArgoCD Applications are `Healthy`:
```shell
kubectl --context hub get -n argocd applications -w
```
Wait until the ArgoCD Applications' `HEALTH STATUS` is `Healthy`. Use `Ctrl+C` to exit the watch.
### Verify the Application
Verify that the application configuration is present and the pod is running:
```shell
for i in dev staging prod ; do echo $i && kubectl --context $i get all -n workload ; done
```
### Container Metrics
Check the application's CPU and memory metrics:
```shell
for i in dev staging prod ; do echo $i && kubectl --context $i top pods -n workload ; done
```
## Destroy the Spoke EKS Clusters
To tear down all the resources and the EKS clusters, run the following commands:
```shell
./destroy.sh dev
./destroy.sh staging
./destroy.sh prod
```
## Destroy the Hub EKS Cluster
To tear down all the resources and the EKS cluster, run the following commands:
```shell
cd ../hub
./destroy.sh
```
## Appendix
## Fork GitOps Repositories
To modify the `values.yaml` file or the helm chart version for addons, you'll need to fork the repository [aws-samples/eks-blueprints-add-ons](https://github.com/aws-samples/eks-blueprints-add-ons).
After forking, update the following environment variables to point to your forks, replacing the default values:
```shell
export TF_VAR_gitops_addons_org=https://github.com/aws-samples
export TF_VAR_gitops_addons_repo=eks-blueprints-add-ons
export TF_VAR_gitops_addons_revision=main
```
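As a quick sanity check that the two variables combine into the repository URL you expect (the concatenation below mirrors how such org/repo pairs are typically joined; verify against the pattern's Terraform code before relying on it):

```shell
export TF_VAR_gitops_addons_org=https://github.com/aws-samples
export TF_VAR_gitops_addons_repo=eks-blueprints-add-ons

# The Terraform code is assumed to join org and repo with a "/".
addons_repo_url="${TF_VAR_gitops_addons_org}/${TF_VAR_gitops_addons_repo}"
echo "$addons_repo_url"
# → https://github.com/aws-samples/eks-blueprints-add-ons
```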
@@ -0,0 +1,32 @@
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-addons
  namespace: argocd
spec:
  syncPolicy:
    preserveResourcesOnDeletion: true
  generators:
    - clusters:
        selector:
          matchExpressions:
            - key: akuity.io/argo-cd-cluster-name
              operator: NotIn
              values: [in-cluster]
  template:
    metadata:
      name: cluster-addons
    spec:
      project: default
      source:
        repoURL: '{{metadata.annotations.addons_repo_url}}'
        path: '{{metadata.annotations.addons_repo_basepath}}{{metadata.annotations.addons_repo_path}}'
        targetRevision: '{{metadata.annotations.addons_repo_revision}}'
        directory:
          recurse: true
          exclude: exclude/*
      destination:
        namespace: 'argocd'
        name: '{{name}}'
      syncPolicy:
        automated: {}
```
@@ -0,0 +1,34 @@
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: workloads
  namespace: argocd
spec:
  syncPolicy:
    preserveResourcesOnDeletion: false
  generators:
    - clusters:
        selector:
          matchExpressions:
            - key: akuity.io/argo-cd-cluster-name
              operator: NotIn
              values: [in-cluster]
            - key: environment
              operator: NotIn
              values: [control-plane]
  template:
    metadata:
      name: 'workload-{{metadata.labels.environment}}'
    spec:
      project: default
      source:
        repoURL: '{{metadata.annotations.workload_repo_url}}'
        path: '{{metadata.annotations.workload_repo_basepath}}{{metadata.annotations.workload_repo_path}}'
        targetRevision: '{{metadata.annotations.workload_repo_revision}}'
      destination:
        namespace: 'workload'
        name: '{{name}}'
      syncPolicy:
        automated: {}
        syncOptions:
          - CreateNamespace=true
```
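Given the template's `name: 'workload-{{metadata.labels.environment}}'`, the generated Application names are predictable. A tiny helper of our own (not part of the pattern) lists what to expect per environment:

```shell
# Names the workloads ApplicationSet template generates, one per spoke environment.
expected_workload_apps() {
  for env in "$@"; do
    echo "workload-${env}"
  done
}

# Compare against: kubectl --context hub get applications -n argocd
expected_workload_apps dev staging prod
```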
25 changes: 25 additions & 0 deletions patterns/gitops/multi-cluster-hub-spoke-argocd/hub/destroy.sh
@@ -0,0 +1,25 @@
```shell
#!/bin/bash

set -uo pipefail

SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
ROOTDIR="$(cd ${SCRIPTDIR}/../..; pwd )"
[[ -n "${DEBUG:-}" ]] && set -x

# Delete the Ingress/SVC before removing the addons
TMPFILE=$(mktemp)
terraform -chdir=$SCRIPTDIR output -raw configure_kubectl > "$TMPFILE"
# check if TMPFILE contains the string "No outputs found"
if [[ ! $(cat $TMPFILE) == *"No outputs found"* ]]; then
  source "$TMPFILE"
  kubectl delete -n argocd applicationset workloads
  kubectl delete -n argocd applicationset cluster-addons
  kubectl delete -n argocd applicationset addons-argocd
  kubectl delete -n argocd svc argo-cd-argocd-server
fi

terraform destroy -target="module.gitops_bridge_bootstrap" -auto-approve
terraform destroy -target="module.eks_blueprints_addons" -auto-approve
terraform destroy -target="module.eks" -auto-approve
terraform destroy -target="module.vpc" -auto-approve
terraform destroy -auto-approve
```