
Commit

blueprint for VPC lattice with kubernetes api controllers
rubanracker committed Nov 16, 2023
1 parent d4ca8c4 commit 8f02aa0
Showing 19 changed files with 15,976 additions and 0 deletions.
82 changes: 82 additions & 0 deletions patterns/blueprint-vpc-lattice/README.md
@@ -0,0 +1,82 @@
# Application Networking with Amazon VPC Lattice and Amazon EKS

This pattern demonstrates how a service in one Amazon EKS cluster communicates with a service in another cluster and VPC using Amazon VPC Lattice. It also shows how service discovery works, including support for custom domain names for services, and how VPC Lattice enables services in EKS clusters with overlapping CIDRs to communicate with each other without additional networking constructs such as private NAT gateways or transit gateways.

- [Documentation](https://aws.amazon.com/vpc/lattice/)
- [Launch Blog](https://aws.amazon.com/blogs/containers/amazon-vpc-cni-now-supports-kubernetes-network-policies/)

## Scenario

The solution architecture used to demonstrate cross-cluster connectivity with VPC Lattice is shown in the diagram below. The relevant aspects of this architecture are:

1. Two VPCs are set up in the same AWS Region, both using the same RFC 1918 address range (192.168.48.0/20).
2. An EKS cluster is provisioned in each of the VPCs.
3. An HTTP web service is deployed to the EKS cluster in Cluster1-vpc, exposing a set of REST API endpoints. Another REST API service is deployed to the EKS cluster in Cluster2-vpc and communicates with an Aurora PostgreSQL database in the same VPC.
4. The AWS Gateway API controller is used in both clusters to manage Kubernetes Gateway API resources such as Gateway and HTTPRoute. These custom resources orchestrate AWS VPC Lattice resources such as service networks, services, and target groups that enable communication between the Kubernetes services deployed to the clusters. Refer to this post for a detailed discussion of how the AWS Gateway API controller extends the custom resources defined by the Gateway API, allowing you to create VPC Lattice resources using Kubernetes APIs. A quick way to inspect the resulting Lattice resources with the AWS CLI is shown below the diagram.


![img.png](img/img.png)
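
Once the pattern is deployed, the VPC Lattice resources that the Gateway API controller creates from the Gateway and HTTPRoute objects can be inspected with the AWS CLI. A minimal sketch, assuming AWS CLI v2 with the `vpc-lattice` commands and credentials for the account hosting the clusters:

```shell
# Service networks, services, and target groups created by the Gateway API controller
aws vpc-lattice list-service-networks --output table
aws vpc-lattice list-services --output table
aws vpc-lattice list-target-groups --output table
```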

## Deploy

See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.

1. Set up the first cluster with its own VPC, then the second (a quick status check follows the commands):

```shell
# Provision cluster1 and its VPC
cd cluster1
terraform init
terraform apply

# Provision cluster2 and its VPC
cd ../cluster2
terraform init
terraform apply
```
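
As a sanity check (a minimal sketch, assuming the AWS CLI is configured for the same account and region as the Terraform runs), both clusters should report `ACTIVE` before you continue:

```shell
# Both control planes should be ACTIVE before workloads are deployed
aws eks describe-cluster --name cluster1 --query 'cluster.status' --output text
aws eks describe-cluster --name cluster2 --query 'cluster.status' --output text
```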

2. Initialize the Aurora PostgreSQL database for the cluster2 VPC; see the setup instructions [here](./cluster2/postgres-setup/README.md).
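
Optionally, confirm that the Aurora PostgreSQL cluster is available before wiring up the application. This is a minimal sketch, assuming the AWS CLI is configured for the region hosting cluster2; the exact cluster identifier depends on the Terraform in `cluster2`:

```shell
# The Aurora PostgreSQL cluster backing the datastore should report status "available"
aws rds describe-db-clusters \
  --query 'DBClusters[].{id:DBClusterIdentifier,engine:Engine,status:Status}' \
  --output table
```
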
3. Initialize the Kubernetes secrets for cluster2:

```shell
cd cluster2
chmod +x secrets.sh && ./secrets.sh
```
4. Deploy the Kubernetes artifacts for cluster2 (a verification step follows the commands):

```shell
export CLUSTER_2=cluster2
export AWS_DEFAULT_REGION=$(aws configure get region)
export AWS_ACCOUNT_NUMBER=$(aws sts get-caller-identity --query "Account" --output text)

aws eks update-kubeconfig --name $CLUSTER_2 --region $AWS_DEFAULT_REGION

export CTX_CLUSTER_2=arn:aws:eks:$AWS_DEFAULT_REGION:${AWS_ACCOUNT_NUMBER}:cluster/$CLUSTER_2


kubectl apply --context="${CTX_CLUSTER_2}" -f ./$CLUSTER_2/gateway-lattice.yaml # GatewayClass and Gateway
kubectl apply --context="${CTX_CLUSTER_2}" -f ./$CLUSTER_2/route-datastore-canary.yaml # HTTPRoute and ClusterIP Services
kubectl apply --context="${CTX_CLUSTER_2}" -f ./$CLUSTER_2/datastore.yaml # Deployment
```
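
Before moving on to cluster1, it is worth checking that the Gateway API objects were accepted and that the controller registered the corresponding Lattice service. A minimal sketch; the exact output depends on the controller version:

```shell
# GatewayClass, Gateway, and HTTPRoute should all be accepted/reconciled
kubectl --context="${CTX_CLUSTER_2}" get gatewayclass,gateway,httproute -A

# The controller should have created a Lattice service for the datastore route
aws vpc-lattice list-services --output table
```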

5. Deploy the VPC Lattice gateway and the frontend service on cluster1 (an end-to-end check follows the commands):

```shell
export CLUSTER_1=cluster1
export AWS_DEFAULT_REGION=$(aws configure get region)
export AWS_ACCOUNT_NUMBER=$(aws sts get-caller-identity --query "Account" --output text)

aws eks update-kubeconfig --name $CLUSTER_1 --region $AWS_DEFAULT_REGION

export CTX_CLUSTER_1=arn:aws:eks:$AWS_DEFAULT_REGION:${AWS_ACCOUNT_NUMBER}:cluster/$CLUSTER_1


kubectl apply --context="${CTX_CLUSTER_1}" -f ./$CLUSTER_1/gateway-lattice.yml # GatewayClass and Gateway
kubectl apply --context="${CTX_CLUSTER_1}" -f ./$CLUSTER_1/frontend.yml
```
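
To verify end-to-end connectivity through VPC Lattice, check that the frontend pod is healthy and, assuming `curl` is available in the frontend image (not guaranteed by this pattern), call the datastore through its custom domain name from inside the pod; the request path below is a placeholder:

```shell
# The frontend should be Running and Ready in the apps namespace
kubectl --context="${CTX_CLUSTER_1}" -n apps get pods -l app=frontend

# Placeholder smoke test: the frontend reaches the datastore via its custom domain name
kubectl --context="${CTX_CLUSTER_1}" -n apps exec deploy/frontend -- \
  curl -s http://datastore.sarathy.io/
```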


## Destroy

```shell
chmod +x ./destroy.sh && ./destroy.sh
```
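
`destroy.sh` is the supported teardown path. As an alternative sketch (assuming the kubeconfig contexts from the deploy steps are still set and the manifest names match those used above), delete the Gateway API objects first so the controller removes the Lattice resources it created, then destroy each cluster:

```shell
# Remove the Gateway API objects so the controller cleans up the Lattice resources
kubectl --context="${CTX_CLUSTER_1}" delete -f ./cluster1/frontend.yml -f ./cluster1/gateway-lattice.yml
kubectl --context="${CTX_CLUSTER_2}" delete -f ./cluster2/route-datastore-canary.yaml -f ./cluster2/datastore.yaml -f ./cluster2/gateway-lattice.yaml

# Destroy the clusters
cd cluster2 && terraform destroy
cd ../cluster1 && terraform destroy
```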
61 changes: 61 additions & 0 deletions patterns/blueprint-vpc-lattice/cluster1/frontend.yml
@@ -0,0 +1,61 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: go
          image: public.ecr.aws/awsvijisarathy/k8s-frontend:v1
          imagePullPolicy: Always
          env:
            - name: DATASTORE_SERVICE_URL
              value: datastore.sarathy.io
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          resources:
            requests:
              cpu: "50m"
              memory: "128Mi"
          livenessProbe:
            httpGet:
              path: /live
              port: 3000
            initialDelaySeconds: 10
            timeoutSeconds: 1
            periodSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 10
            timeoutSeconds: 1
            periodSeconds: 10
            failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
  namespace: apps
spec:
  type: ClusterIP
  ports:
    - port: 80
      protocol: TCP
      targetPort: 3000
  selector:
    app: frontend
22 changes: 22 additions & 0 deletions patterns/blueprint-vpc-lattice/cluster1/gateway-lattice.yml
@@ -0,0 +1,22 @@
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: amazon-vpc-lattice
spec:
  controllerName: application-networking.k8s.aws/gateway-api-controller

---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: eks-lattice-network
spec:
  gatewayClassName: amazon-vpc-lattice
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All
217 changes: 217 additions & 0 deletions patterns/blueprint-vpc-lattice/cluster1/main.tf
@@ -0,0 +1,217 @@
provider "aws" {
region = local.region
}

provider "kubernetes" {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
}
}

provider "helm" {
kubernetes {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
}
}
}

data "aws_availability_zones" "available" {}
data "aws_ecrpublic_authorization_token" "token" {}
data "aws_caller_identity" "identity" {}
data "aws_region" "current" {}

locals {
name = basename(path.cwd)
region = "us-west-2"

vpc_cidr = "192.168.48.0/20"
azs = slice(data.aws_availability_zones.available.names, 0, 3)

tags = {
Blueprint = local.name
GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
}
}

################################################################################
# Cluster
################################################################################

module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 19.16"

cluster_name = local.name
cluster_version = "1.27" # Must be 1.25 or higher
cluster_endpoint_public_access = true

vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets

eks_managed_node_groups = {
initial = {
instance_types = ["m5.large"]

min_size = 1
max_size = 2
desired_size = 1
}
}

tags = local.tags
}

################################################################################
# Supporting Resources
################################################################################

module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.0"

name = local.name
cidr = local.vpc_cidr

azs = local.azs
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]

enable_nat_gateway = true
single_nat_gateway = true

public_subnet_tags = {
"kubernetes.io/role/elb" = 1
}

private_subnet_tags = {
"kubernetes.io/role/internal-elb" = 1
}

tags = local.tags
}

################################################################################
# EKS Addons (demo application)
################################################################################

module "addons" {
source = "aws-ia/eks-blueprints-addons/aws"
version = "~> 1.0"

cluster_name = module.eks.cluster_name
cluster_endpoint = module.eks.cluster_endpoint
cluster_version = module.eks.cluster_version
oidc_provider_arn = module.eks.oidc_provider_arn

# EKS Addons
eks_addons = {
coredns = {}
kube-proxy = {}
vpc-cni = {
preserve = true
most_recent = true # Must be 1.14.0 or higher

timeouts = {
create = "25m"
delete = "10m"
}

}
}
enable_aws_gateway_api_controller = true
aws_gateway_api_controller = {
repository_username = data.aws_ecrpublic_authorization_token.token.user_name
repository_password = data.aws_ecrpublic_authorization_token.token.password
# awsRegion, clusterVpcId, clusterName, awsAccountId are required for case where IMDS is NOT AVAILABLE, e.g Fargate, self-managed clusters with IMDS access blocked
set = [{
name = "clusterVpcId"
value = module.vpc.vpc_id
},
{
name = "clusterName"
value = module.eks.cluster_name
},
{
name = "awsAccountId"
value = local.region
},
{
name = "awsAccountId"
value = data.aws_caller_identity.identity.account_id
},
{
name = "awsRegion"
value = local.region
}
]

}
tags = local.tags
}

data "aws_ec2_managed_prefix_list" "ipv4" {
name = "com.amazonaws.${data.aws_region.current.name}.vpc-lattice"
}

data "aws_ec2_managed_prefix_list" "ipv6" {
name = "com.amazonaws.${data.aws_region.current.name}.ipv6.vpc-lattice"
}


# configure security group to receive traffic from the VPC Lattice network. You must set up security groups so that they allow all Pods communicating with VPC Lattice to allow traffic from the VPC Lattice managed prefix lists. Lattice has both an IPv4 and IPv6 prefix lists available
resource "aws_security_group_rule" "vpc_lattice_ipv4_ingress" {
description = "VPC lattice ipv4 ingress"
type = "ingress"
security_group_id = module.eks.cluster_security_group_id
from_port = 0
to_port = 0
protocol = "-1"
prefix_list_ids = [data.aws_ec2_managed_prefix_list.ipv4.id]
}

resource "aws_security_group_rule" "vpc_lattice_ipv6_ingress" {
description = "VPC lattice ivp6 ingress"
type = "ingress"
security_group_id = module.eks.cluster_security_group_id
from_port = 0
to_port = 0
protocol = "-1"
prefix_list_ids = [data.aws_ec2_managed_prefix_list.ipv6.id]
}


