feat: Update Wireguard example to new layout #1761

Merged: 3 commits, Oct 5, 2023
22 changes: 11 additions & 11 deletions .github/scripts/plan-examples.py
@@ -3,28 +3,28 @@
import re


def get_examples():
def get_patterns():
"""
Get all Terraform example root directories using their respective `versions.tf`;
Get all pattern root directories using their respective `main.tf`;
returning a JSON-formatted array of the pattern directories, minus those that are excluded
"""
exclude = {
'examples/appmesh-mtls', # excluded until Route53 is set up
'examples/blue-green-upgrade/environment',
'examples/blue-green-upgrade/modules/eks_cluster',
'examples/istio-multi-cluster/1.cluster1', # relies on remote state
'examples/istio-multi-cluster/2.cluster2', # relies on remote state
'examples/privatelink-access',
'patterns/appmesh-mtls', # excluded until Route53 is set up
'patterns/blue-green-upgrade/environment',
'patterns/blue-green-upgrade/modules/eks_cluster',
'patterns/istio-multi-cluster/1.cluster1', # relies on remote state
'patterns/istio-multi-cluster/2.cluster2', # relies on remote state
'patterns/privatelink-access',
}

projects = {
x.replace('/versions.tf', '')
for x in glob.glob('patterns/**/versions.tf', recursive=True)
x.replace('/main.tf', '')
for x in glob.glob('patterns/**/main.tf', recursive=True)
if not re.match(r'^.+/_', x)
}

print(json.dumps(list(projects.difference(exclude))))


if __name__ == '__main__':
get_examples()
get_patterns()
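For reference, the selection logic produced by the hunks above can be sketched as a pure function. This is an illustrative rewrite, not the script itself: the filesystem glob is replaced by an explicit path list (the paths below are hypothetical) so the filtering behavior is visible in isolation:

```python
import json
import re

# Mirrors the script's exclude set (abbreviated here for illustration)
EXCLUDE = {
    'patterns/appmesh-mtls',
    'patterns/privatelink-access',
}


def select_patterns(main_tf_paths, exclude=EXCLUDE):
    """Strip '/main.tf' from each path, drop directories whose name starts
    with an underscore, remove excluded patterns, return a JSON array string."""
    projects = {
        p.replace('/main.tf', '')
        for p in main_tf_paths
        if not re.match(r'^.+/_', p)
    }
    return json.dumps(sorted(projects - exclude))


paths = [
    'patterns/wireguard-with-cilium/main.tf',
    'patterns/appmesh-mtls/main.tf',   # in the exclude set
    'patterns/_template/main.tf',      # underscore prefix: skipped
]
print(select_patterns(paths))  # -> ["patterns/wireguard-with-cilium"]
```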
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -10,7 +10,7 @@ repos:
- id: detect-aws-credentials
args: ['--allow-missing-credentials']
- repo: https://github.com/antonbabenko/pre-commit-terraform
rev: v1.83.2
rev: v1.83.4
hooks:
- id: terraform_fmt
- id: terraform_docs
156 changes: 137 additions & 19 deletions patterns/wireguard-with-cilium/README.md
@@ -1,43 +1,73 @@
# Transparent Encryption with Cilium and Wireguard

This pattern demonstrates Cilium configured in CNI chaining mode with VPC CNI and with Wireguard transparent encryption enabled on an Amazon EKS cluster.
This pattern demonstrates Cilium configured in CNI chaining mode with the VPC CNI and with Wireguard transparent encryption enabled on an Amazon EKS cluster.

- [Cilium CNI Chaining Documentation](https://docs.cilium.io/en/v1.12/gettingstarted/cni-chaining-aws-cni/)
- [Cilium Wireguard Encryption Documentation](https://docs.cilium.io/en/v1.12/gettingstarted/encryption-wireguard/)
- [Cilium CNI Chaining Documentation](https://docs.cilium.io/en/stable/installation/cni-chaining-aws-cni/)
- [Cilium Wireguard Encryption Documentation](https://docs.cilium.io/en/stable/security/network/encryption-wireguard/)

## Focal Points

- `eks.tf` contains the cluster configuration and the deployment of Cilium.
- There are no specific requirements from an EKS perspective, other than that the Linux kernel version used by the OS must be 5.10 or newer.
- `example.yaml` provides a sample application used to demonstrate the encrypted connectivity; deploying it is optional.

## Deploy

See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.

## Validate

1. List the daemonsets
1. Deploy the example pods:

```sh
kubectl get ds -n kube-system
kubectl apply -f example.yaml
```

```text
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
aws-node 2 2 2 2 2 <none> 156m
cilium 2 2 2 2 2 kubernetes.io/os=linux 152m
kube-proxy 2 2 2 2 2 <none> 156m
pod/server created
service/server created
pod/client created
```

2. Open a shell inside the cilium container
2. Get the Cilium status from one of the Cilium pods.

```sh
kubectl -n kube-system exec -ti ds/cilium -- bash
```

3. Verify Encryption is enabled
Under the `Encryption` field, it should state `Wireguard` with a PubKey.
`NodeEncryption: Disabled` is expected since `NodeEncryption` was not enabled
via the Helm values provided.

```sh
cilium status | grep Encryption
kubectl -n kube-system exec -ti ds/cilium -- cilium status
```

```text
Encryption: Wireguard [cilium_wg0 (Pubkey: b2krgbHgaCsVWALMnFLiS/RekhhcE36PXEjQ7T8+mW0=, Port: 51871, Peers: 1)]
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
KVStore: Ok Disabled
Kubernetes: Ok 1.28+ (v1.28.1-eks-43840fb) [linux/amd64]
Kubernetes APIs: ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: False [eth0 10.0.21.109 (Direct Routing), eth1 10.0.27.0]
Host firewall: Disabled
CNI Chaining: aws-cni
Cilium: Ok 1.14.2 (v1.14.2-a6748946)
NodeMonitor: Listening for events on 2 CPUs with 64x4096 of shared memory
Cilium health daemon: Ok
IPAM: IPv4: 1/254 allocated from 10.0.0.0/24,
IPv4 BIG TCP: Disabled
IPv6 BIG TCP: Disabled
BandwidthManager: Disabled
Host Routing: Legacy
Masquerading: Disabled
Controller Status: 24/24 healthy
Proxy Status: No managed proxy redirect
Global Identity Range: min 256, max 65535
Hubble: Ok Current/Max Flows: 410/4095 (10.01%), Flows/s: 1.59 Metrics: Disabled
Encryption: Wireguard [NodeEncryption: Disabled, cilium_wg0 (Pubkey: /yuqsZyG91AzVIkZ3AIq8qjQ0gGKQd6GWcRYh4LYpko=, Port: 51871, Peers: 1)]
Cluster health: Probe disabled
```
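For automation, the `Encryption` line from the status output above can be checked programmatically. This is an illustrative sketch, not part of the pattern; the function name and sample text are ours, with the sample taken from the output shown above:

```python
def wireguard_enabled(status_text: str) -> bool:
    """Return True if a `cilium status` dump reports Wireguard transparent
    encryption with a published public key."""
    for line in status_text.splitlines():
        if line.strip().startswith('Encryption:'):
            return 'Wireguard' in line and 'Pubkey:' in line
    return False


sample = (
    "Encryption: Wireguard [NodeEncryption: Disabled, "
    "cilium_wg0 (Pubkey: /yuqsZyG91AzVIkZ3AIq8qjQ0gGKQd6GWcRYh4LYpko=, "
    "Port: 51871, Peers: 1)]"
)
print(wireguard_enabled(sample))  # -> True
```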

3. Open a shell inside the cilium container

```sh
kubectl -n kube-system exec -ti ds/cilium -- bash
```

4. Install [`tcpdump`](https://www.tcpdump.org/)
@@ -54,15 +84,103 @@ See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started
```

```text
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on cilium_wg0, link-type RAW (Raw IP), snapshot length 262144 bytes
<title>Welcome to nginx!</title>
<h1>Welcome to nginx!</h1>
...
40 packets captured
40 packets received by filter
0 packets dropped by kernel
```
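The capture above ends with tcpdump's closing counters. If you script this check, those counters can be pulled out of the output; a small illustrative parser (not part of the pattern, names are ours):

```python
import re


def tcpdump_summary(output: str) -> dict:
    """Extract the captured/received/dropped counters from tcpdump's
    closing summary lines."""
    counts = {}
    for key in ('captured', 'received by filter', 'dropped by kernel'):
        m = re.search(r'(\d+) packets %s' % re.escape(key), output)
        if m:
            counts[key] = int(m.group(1))
    return counts


sample = """40 packets captured
40 packets received by filter
0 packets dropped by kernel"""
print(tcpdump_summary(sample))
# -> {'captured': 40, 'received by filter': 40, 'dropped by kernel': 0}
```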

!!! info "Exit"

    Exit the container shell by typing `exit` before continuing to the next step.

6. Deploy the Cilium connectivity resources to check and evaluate connectivity:

```sh
kubectl create ns cilium-test
kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.14.1/examples/kubernetes/connectivity-check/connectivity-check.yaml
```

```text
deployment.apps/echo-a created
deployment.apps/echo-b created
deployment.apps/echo-b-host created
deployment.apps/pod-to-a created
deployment.apps/pod-to-external-1111 created
deployment.apps/pod-to-a-denied-cnp created
deployment.apps/pod-to-a-allowed-cnp created
deployment.apps/pod-to-external-fqdn-allow-google-cnp created
deployment.apps/pod-to-b-multi-node-clusterip created
deployment.apps/pod-to-b-multi-node-headless created
deployment.apps/host-to-b-multi-node-clusterip created
deployment.apps/host-to-b-multi-node-headless created
deployment.apps/pod-to-b-multi-node-nodeport created
deployment.apps/pod-to-b-intra-node-nodeport created
service/echo-a created
service/echo-b created
service/echo-b-headless created
service/echo-b-host-headless created
ciliumnetworkpolicy.cilium.io/pod-to-a-denied-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-external-fqdn-allow-google-cnp created
```

7. View the logs of any of the connectivity tests to see the results:

```sh
kubectl logs <cilium test pod> -n cilium-test
```

```text
\{^_^}/ hi!
Loading /default.json
Done
Resources
http://:8080/private
http://:8080/public
Home
http://:8080
Type s + enter at any time to create a snapshot of the database
Watching...
GET /public 200 7.063 ms - 57
GET /public 200 3.126 ms - 57
GET /public 200 3.039 ms - 57
GET /public 200 2.776 ms - 57
GET /public 200 3.087 ms - 57
GET /public 200 2.781 ms - 57
GET /public 200 2.977 ms - 57
GET /public 200 2.596 ms - 57
GET /public 200 2.991 ms - 57
GET /public 200 2.708 ms - 57
GET /public 200 3.066 ms - 57
GET /public 200 2.616 ms - 57
GET /public 200 2.875 ms - 57
GET /public 200 2.689 ms - 57
GET /public 200 2.800 ms - 57
GET /public 200 2.556 ms - 57
GET /public 200 3.238 ms - 57
GET /public 200 2.538 ms - 57
GET /public 200 2.890 ms - 57
GET /public 200 2.666 ms - 57
GET /public 200 2.729 ms - 57
GET /public 200 2.580 ms - 57
GET /public 200 2.919 ms - 57
GET /public 200 2.630 ms - 57
GET /public 200 2.857 ms - 57
GET /public 200 2.716 ms - 57
GET /public 200 1.693 ms - 57
GET /public 200 2.715 ms - 57
GET /public 200 2.729 ms - 57
GET /public 200 2.655 ms - 57
```
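The echo-server log above is a stream of `GET /public 200 <ms> ms - <bytes>` lines; steady 200s indicate the connectivity checks are passing over the encrypted path. A quick illustrative summary of such a log (function name and sample are ours, sample lines taken from the output above):

```python
import re


def summarize_requests(log: str):
    """Count successful 'GET /public 200' requests in a json-server log
    and compute their mean latency in milliseconds."""
    times = [
        float(m.group(1))
        for m in re.finditer(r'GET /public 200 (\d+\.\d+) ms', log)
    ]
    return len(times), (sum(times) / len(times) if times else 0.0)


sample = """GET /public 200 7.063 ms - 57
GET /public 200 3.126 ms - 57
GET /public 200 2.776 ms - 57"""
count, mean_ms = summarize_requests(sample)
print(count)  # -> 3
```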

## Destroy

{%
98 changes: 98 additions & 0 deletions patterns/wireguard-with-cilium/eks.tf
@@ -0,0 +1,98 @@
################################################################################
# Cluster
################################################################################

module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 19.16"

cluster_name = local.name
cluster_version = "1.28"
cluster_endpoint_public_access = true

# EKS Addons
cluster_addons = {
coredns = {}
kube-proxy = {}
vpc-cni = {}
}

vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets

eks_managed_node_groups = {
initial = {
instance_types = ["m5.large"]
min_size = 1
max_size = 3
desired_size = 2
}
}

# Extend node-to-node security group rules
node_security_group_additional_rules = {
# Cilium Wireguard Port https://github.com/cilium/cilium/blob/main/Documentation/security/network/encryption-wireguard.rst
ingress_cilium_wireguard = {
description = "Allow Cilium Wireguard node to node"
protocol = "udp"
from_port = 51871
to_port = 51871
type = "ingress"
self = true
}
}

tags = local.tags
}

################################################################################
# Kubectl Output
################################################################################

output "configure_kubectl" {
description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
value = "aws eks --region ${local.region} update-kubeconfig --name ${module.eks.cluster_name}"
}

################################################################################
# EKS Blueprints Addons
################################################################################

module "eks_blueprints_addons" {
source = "aws-ia/eks-blueprints-addons/aws"
version = "~> 1.7"

cluster_name = module.eks.cluster_name
cluster_endpoint = module.eks.cluster_endpoint
cluster_version = module.eks.cluster_version
oidc_provider_arn = module.eks.oidc_provider_arn

helm_releases = {
cilium = {
name = "cilium"
chart = "cilium"
version = "1.14.1"
repository = "https://helm.cilium.io/"
description = "Cilium Add-on"
namespace = "kube-system"
create_namespace = false

values = [
<<-EOT
cni:
chainingMode: aws-cni
enableIPv4Masquerade: false
tunnel: disabled
endpointRoutes:
enabled: true
l7Proxy: false
encryption:
enabled: true
type: wireguard
EOT
]
}
}

tags = local.tags
}
54 changes: 54 additions & 0 deletions patterns/wireguard-with-cilium/example.yaml
@@ -0,0 +1,54 @@
---
apiVersion: v1
kind: Pod
metadata:
name: server
labels:
blog: wireguard
name: server
spec:
containers:
- name: server
image: nginx
topologySpreadConstraints:
- maxSkew: 1
topologyKey: "kubernetes.io/hostname"
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
blog: wireguard
---
apiVersion: v1
kind: Service
metadata:
name: server
spec:
selector:
name: server
ports:
- port: 80
sessionAffinity: ClientIP
sessionAffinityConfig:
clientIP:
timeoutSeconds: 600
---
apiVersion: v1
kind: Pod
metadata:
name: client
labels:
blog: wireguard
name: client
spec:
containers:
- name: client
image: busybox
command: ["watch", "wget", "server"]
topologySpreadConstraints:
- maxSkew: 1
topologyKey: "kubernetes.io/hostname"
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
blog: wireguard
---