feat: Gitops bridge multi cluster hub-spoke #1818

Merged · 9 commits · Nov 3, 2023
7 changes: 7 additions & 0 deletions docs/patterns/gitops-multi-cluster-hub-spoke-argocd.md
---
title: GitOps Multi-Cluster Hub-Spoke Topology (ArgoCD)
---

{%
include-markdown "../../patterns/gitops/multi-cluster-hub-spoke-argocd/README.md"
%}
159 changes: 159 additions & 0 deletions patterns/gitops/multi-cluster-hub-spoke-argocd/README.md
# Multi-Cluster Centralized Hub-Spoke Topology

This tutorial guides you through deploying an Amazon EKS cluster with addons configured via ArgoCD in a multi-cluster hub-spoke topology, employing the [GitOps Bridge Pattern](https://github.com/gitops-bridge-dev).

<img src="static/gitops-bridge-multi-cluster-hup-spoke.drawio.png" width=100%>


This example deploys ArgoCD on the hub cluster (i.e., the management/control-plane cluster).
The spoke clusters are registered as remote clusters in the hub cluster's ArgoCD.
The ArgoCD instance on the hub cluster deploys addons and workloads to the spoke clusters.

Each spoke cluster receives an app-of-apps ArgoCD Application named `workloads-${env}`.
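With the three environments used later in this pattern (`dev`, `staging`, `prod`), the generated Application names follow the `workloads-${env}` scheme; a quick illustration:

```shell
# Print the Application name generated for each spoke environment
for env in dev staging prod; do
  echo "workloads-${env}"
done
```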

## Prerequisites
Before you begin, make sure you have the following command line tools installed:
- git
- terraform
- kubectl
- argocd

## Fork the Git Repositories

### Fork the Addon GitOps Repo
1. Fork the git repository for addons [here](https://github.com/gitops-bridge-dev/gitops-bridge-argocd-control-plane-template).
2. Update the following environment variables to point to your fork by changing the default values:
```shell
export TF_VAR_gitops_addons_org=https://github.com/gitops-bridge-dev
export TF_VAR_gitops_addons_repo=gitops-bridge-argocd-control-plane-template
```
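For reference, the Terraform configuration points ArgoCD at the concatenation of these two variables (`org/repo`); after exporting your own values, you can sanity-check the resulting URL:

```shell
export TF_VAR_gitops_addons_org=https://github.com/gitops-bridge-dev
export TF_VAR_gitops_addons_repo=gitops-bridge-argocd-control-plane-template

# The addons repo URL is the org followed by the repo name
echo "${TF_VAR_gitops_addons_org}/${TF_VAR_gitops_addons_repo}"
```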

## Deploy the Hub EKS Cluster
Change directory to `hub`:
```shell
cd hub
```
Initialize Terraform and deploy the EKS cluster:
```shell
terraform init
terraform apply -auto-approve
```
Retrieve the `kubectl` configuration command from the Terraform output, then execute it:
```shell
terraform output -raw configure_kubectl
```

### Monitor GitOps Progress for Addons
Wait until **all** the ArgoCD applications' `HEALTH STATUS` is `Healthy`. Use Ctrl+C to exit the `watch` command.
```shell
watch kubectl get applications -n argocd
```

## Access ArgoCD on Hub Cluster
To access the ArgoCD UI, run the command from the output:
```shell
terraform output -raw access_argocd
```

## Verify that the ArgoCD Service Accounts have the IRSA annotation
```shell
kubectl get sa -n argocd argocd-application-controller -o json | jq '.metadata.annotations."eks.amazonaws.com/role-arn"'
kubectl get sa -n argocd argocd-server -o json | jq '.metadata.annotations."eks.amazonaws.com/role-arn"'
```
The output should match the ARN of the IAM role that will assume the IAM roles in the spoke/remote clusters:
```text
"arn:aws:iam::0123456789:role/hub-spoke-control-plane-argocd-hub"
```

## Deploy the Spoke EKS Clusters
Initialize Terraform and deploy the EKS clusters:
```shell
cd ../spokes
./deploy.sh dev
./deploy.sh staging
./deploy.sh prod
```
Each environment uses a separate Terraform workspace.

To access the Terraform output for a particular environment, select its workspace first:
```shell
terraform workspace select dev
terraform output
```
```shell
terraform workspace select staging
terraform output
```
```shell
terraform workspace select prod
terraform output
```

Retrieve the `kubectl` configuration command from the Terraform output, then execute it:
```shell
terraform output -raw configure_kubectl
```

### Verify the ArgoCD Cluster Secret for a Spoke has the correct IAM Role to be assumed by the Hub Cluster
```shell
kubectl get secret -n argocd hub-spoke-dev --template='{{index .data.config | base64decode}}'
```
Repeat for the other clusters, replacing `dev` in `hub-spoke-dev` with `staging` and `prod`.
The output has an `awsAuthConfig` section with the `clusterName` and the `roleARN` that has write access to the spoke cluster:
```json
{
"tlsClientConfig": {
"insecure": false,
"caData" : "LS0tL...."
},
"awsAuthConfig" : {
"clusterName": "hub-spoke-dev",
"roleARN": "arn:aws:iam::0123456789:role/hub-spoke-dev-argocd-spoke"
}
}
```
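To pull out just the `roleARN`, the decoded config can be piped through `jq`. The snippet below runs the pipeline against a sample config string so it can be tried without a cluster; against a live cluster, replace the `echo` with the `kubectl get secret` command shown above:

```shell
# Sample decoded cluster-secret config (values are illustrative)
config='{"awsAuthConfig":{"clusterName":"hub-spoke-dev","roleARN":"arn:aws:iam::0123456789:role/hub-spoke-dev-argocd-spoke"}}'

# Extract the role ARN the hub's ArgoCD assumes for this spoke
echo "$config" | jq -r '.awsAuthConfig.roleARN'
```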


### Verify the Addons on Spoke Clusters
Verify that the addons are ready:
```shell
kubectl get deployment -n kube-system \
metrics-server
```


### Monitor GitOps Progress for Workloads from Hub Cluster (run on Hub Cluster context)
Watch until **all** the workload ArgoCD Applications are `Healthy`:
```shell
watch kubectl get -n argocd applications
```
Wait until the ArgoCD Applications' `HEALTH STATUS` is `Healthy`. Use Ctrl+C to exit the `watch` command.


### Verify the Application
Verify that the application configuration is present and the pod is running:
```shell
kubectl get all -n workload
```

### Container Metrics
Check the application's CPU and memory metrics:
```shell
kubectl top pods -n workload
```

## Destroy the Spoke EKS Clusters
To tear down all the resources and the EKS clusters, run the following commands:
```shell
./destroy.sh dev
./destroy.sh staging
./destroy.sh prod
```

## Destroy the Hub EKS Cluster
To tear down all the resources and the EKS cluster, run the following commands:
```shell
cd ../hub
./destroy.sh
```
ApplicationSet for cluster addons (file path not shown in the diff):
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-addons
  namespace: argocd
spec:
  syncPolicy:
    preserveResourcesOnDeletion: true
  generators:
    - clusters:
        selector:
          matchExpressions:
            - key: akuity.io/argo-cd-cluster-name
              operator: NotIn
              values: [in-cluster]
  template:
    metadata:
      name: cluster-addons
    spec:
      project: default
      source:
        repoURL: '{{metadata.annotations.addons_repo_url}}'
        path: '{{metadata.annotations.addons_repo_basepath}}{{metadata.annotations.addons_repo_path}}'
        targetRevision: '{{metadata.annotations.addons_repo_revision}}'
        directory:
          recurse: true
          exclude: exclude/*
      destination:
        namespace: 'argocd'
        name: '{{name}}'
      syncPolicy:
        automated: {}
```
ApplicationSet for workloads (file path not shown in the diff):
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: workloads
  namespace: argocd
spec:
  syncPolicy:
    preserveResourcesOnDeletion: false
  generators:
    - clusters:
        selector:
          matchExpressions:
            - key: akuity.io/argo-cd-cluster-name
              operator: NotIn
              values: [in-cluster]
            - key: environment
              operator: NotIn
              values: [control-plane]
  template:
    metadata:
      name: 'workload-{{metadata.labels.environment}}'
    spec:
      project: default
      source:
        repoURL: '{{metadata.annotations.workload_repo_url}}'
        path: '{{metadata.annotations.workload_repo_basepath}}{{metadata.annotations.workload_repo_path}}'
        targetRevision: '{{metadata.annotations.workload_repo_revision}}'
      destination:
        namespace: 'workload'
        name: '{{name}}'
      syncPolicy:
        automated: {}
        syncOptions:
          - CreateNamespace=true
```
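For the cluster generator above to select a spoke, the spoke's ArgoCD cluster secret on the hub must carry the `environment` label and the repo annotations the template references. A minimal illustrative secret is sketched below; the name, repo URL, API endpoint, and account ID are placeholders, not values from this pattern's Terraform:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: hub-spoke-dev
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster   # marks this Secret as an ArgoCD cluster
    environment: dev                          # matched by the generator's label selector
  annotations:
    workload_repo_url: https://github.com/example-org/example-repo   # placeholder
stringData:
  name: hub-spoke-dev
  server: https://EXAMPLE.gr7.us-west-2.eks.amazonaws.com            # placeholder endpoint
  config: |
    {
      "awsAuthConfig": {
        "clusterName": "hub-spoke-dev",
        "roleARN": "arn:aws:iam::0123456789:role/hub-spoke-dev-argocd-spoke"
      }
    }
```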
25 changes: 25 additions & 0 deletions patterns/gitops/multi-cluster-hub-spoke-argocd/hub/destroy.sh
```shell
#!/bin/bash

set -uo pipefail

SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
ROOTDIR="$(cd ${SCRIPTDIR}/../..; pwd )"
[[ -n "${DEBUG:-}" ]] && set -x

# Delete the Ingress/SVC before removing the addons
TMPFILE=$(mktemp)
terraform -chdir=$SCRIPTDIR output -raw configure_kubectl > "$TMPFILE"
# Check if TMPFILE contains the string "No outputs found"
if [[ ! $(cat $TMPFILE) == *"No outputs found"* ]]; then
  source "$TMPFILE"
  kubectl delete -n argocd applicationset workloads
  kubectl delete -n argocd applicationset cluster-addons
  kubectl delete -n argocd applicationset addons-argocd
  kubectl delete -n argocd svc argo-cd-argocd-server
fi

terraform destroy -target="module.gitops_bridge_bootstrap" -auto-approve
terraform destroy -target="module.eks_blueprints_addons" -auto-approve
terraform destroy -target="module.eks" -auto-approve
terraform destroy -target="module.vpc" -auto-approve
terraform destroy -auto-approve
```