diff --git a/patterns/blueprint-vpc-lattice/README.md b/patterns/blueprint-vpc-lattice/README.md
new file mode 100644
index 0000000000..e730629f4b
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/README.md
@@ -0,0 +1,218 @@
+# Application Networking with Amazon VPC Lattice and Amazon EKS
+
+This pattern showcases service-to-service communication within an EKS cluster and across clusters and VPCs using VPC Lattice. It illustrates service discovery and highlights how VPC Lattice enables communication between services in EKS clusters with overlapping CIDRs, eliminating the need for networking constructs such as private NAT Gateways and Transit Gateways.
+
+- [Documentation](https://aws.amazon.com/vpc/lattice/)
+- [Launch Blog](https://aws.amazon.com/blogs/containers/introducing-aws-gateway-api-controller-for-amazon-vpc-lattice-an-implementation-of-kubernetes-gateway-api/)
+
+The solution architecture used to demonstrate single- and cross-cluster connectivity with VPC Lattice is shown in the following diagram. The relevant aspects of this architecture are:
+
+1. Two VPCs are set up in the same AWS Region, both using the same RFC 1918 address range `192.168.48.0/20`.
+2. An EKS cluster is provisioned in each of the VPCs.
+3. The first part of this section sets up service-to-service communication within a single cluster. The second part extends that example by creating another inventory service on a second cluster in a different VPC, and splitting traffic to that service across the two clusters and VPCs.
+
+![img.png](img/img_1.png)
+
+## Set up service-to-service communications
+
+See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
+
+1. Set up the first cluster with its own VPC:
+
+```shell
+# set up cluster1
+cd cluster1
+terraform init
+terraform apply
+```
+
+2. Create the Kubernetes Gateway `my-hotel`:
+
+```shell
+aws eks update-kubeconfig --name cluster1   # or run the command from the configure_kubectl Terraform output
+kubectl apply -f my-gateway-hotel.yml # GatewayClass and Gateway
+```
+
+Verify that the `my-hotel` Gateway is created with `PROGRAMMED` status equal to `True`:
+
+```shell
+kubectl get gateway
+
+NAME       CLASS                ADDRESS   PROGRAMMED   AGE
+my-hotel   amazon-vpc-lattice             True         7d12h
+```
+
+3. Create the Kubernetes `HTTPRoute` `rates`, which uses path matches to route requests to the `parking` and `review` services (this can take a few minutes):
+
+```shell
+kubectl apply -f parking.yaml
+kubectl apply -f review.yaml
+kubectl apply -f rate-route-path.yaml
+```
+
+4. Create another Kubernetes `HTTPRoute`, `inventory` (this can take a few minutes):
+
+```shell
+kubectl apply -f inventory-ver1.yaml
+kubectl apply -f inventory-route.yaml
+```
+
+Find the HTTPRoutes' DNS names from their status:
+
+```shell
+kubectl get httproute
+
+NAME        HOSTNAMES   AGE
+inventory               51s
+rates                   6m11s
+```
+
+Check the VPC Lattice generated DNS addresses for the HTTPRoutes `inventory` and `rates`:
+
+```shell
+kubectl get httproute inventory -o yaml
+
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  annotations:
+    application-networking.k8s.aws/lattice-assigned-domain-name: inventory-default-02fb06f1acdeb5b55.7d67968.vpc-lattice-svcs.us-west-2.on.aws
+...
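+
+# (optional) a jsonpath-only sketch that prints just the assigned domain name; it is equivalent to
+# the jq extraction used in the next step
+kubectl get httproute inventory -o jsonpath='{.metadata.annotations.application-networking\.k8s\.aws/lattice-assigned-domain-name}'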
+```
+
+```shell
+kubectl get httproute rates -o yaml
+
+apiVersion: v1
+items:
+- apiVersion: gateway.networking.k8s.io/v1beta1
+  kind: HTTPRoute
+  metadata:
+    annotations:
+      application-networking.k8s.aws/lattice-assigned-domain-name: rates-default-0d38139624f20d213.7d67968.vpc-lattice-svcs.us-west-2.on.aws
+...
+```
+
+If the previous step returns the expected response, store the VPC Lattice assigned DNS names in variables:
+
+```shell
+ratesFQDN=$(kubectl get httproute rates -o json | jq -r '.metadata.annotations."application-networking.k8s.aws/lattice-assigned-domain-name"')
+inventoryFQDN=$(kubectl get httproute inventory -o json | jq -r '.metadata.annotations."application-networking.k8s.aws/lattice-assigned-domain-name"')
+```
+
+Confirm that the URLs are stored correctly:
+
+```shell
+echo $ratesFQDN $inventoryFQDN
+rates-default-034e0056410499722.7d67968.vpc-lattice-svcs.us-west-2.on.aws inventory-default-0c54a5e5a426f92c2.7d67968.vpc-lattice-svcs.us-west-2.on.aws
+```
+
+### Verify service-to-service communications
+
+1. Check connectivity from the `inventory-ver1` service to the `parking` and `review` services:
+
+```shell
+kubectl exec deploy/inventory-ver1 -- curl $ratesFQDN/parking $ratesFQDN/review
+
+Requsting to Pod(parking-8548d7f98d-57whb): parking handler pod
+Requsting to Pod(review-6df847686d-dhzwc): review handler pod
+```
+
+2. Check connectivity from the `parking` service to the `inventory-ver1` service:
+
+```shell
+kubectl exec deploy/parking -- curl $inventoryFQDN
+
+Requsting to Pod(inventory-ver1-99d48958c-whr2q): Inventory-ver1 handler pod
+```
+
+This confirms that service-to-service communication within a single cluster works as expected.
+
+## Set up multi-cluster/multi-VPC service-to-service communications
+
+![img.png](img/img_2.png)
+
+1. Set up the second cluster with its own VPC:
+
+```shell
+# set up cluster2
+cd ../cluster2
+terraform init
+terraform apply
+```
+
+2. Create the Kubernetes `inventory-ver2` service in the second cluster:
+
+```shell
+aws eks update-kubeconfig --name cluster2   # or run the command from the configure_kubectl Terraform output
+kubectl apply -f inventory-ver2.yaml
+```
+
+3. Export the Kubernetes `inventory-ver2` service from the second cluster so that it can be referenced by the HTTPRoute in the first cluster:
+
+```shell
+kubectl apply -f inventory-ver2-export.yaml
+```
+
+## Switch back to the first cluster
+
+1. Switch context back to the first cluster:
+
+```shell
+cd ../cluster1/
+kubectl config use-context <cluster1-context>
+```
+
+2. Create the Kubernetes ServiceImport `inventory-ver2` in the first cluster:
+
+```shell
+kubectl apply -f inventory-ver2-import.yaml
+```
+
+3. Update the HTTPRoute `inventory` rules to route 10% of the traffic to the first cluster and 90% to the second cluster:
+
+```shell
+kubectl apply -f inventory-route-bluegreen.yaml
+```
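+
+(Optional) Before checking connectivity, you can confirm that the weighted rule was accepted. This is a minimal sanity check, assuming the 10/90 split configured in `inventory-route-bluegreen.yaml`:
+
+```shell
+# print the weights of the two backends of the inventory HTTPRoute; expected output: 10 90
+kubectl get httproute inventory -o jsonpath='{.spec.rules[0].backendRefs[*].weight}'
+```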
+4. Check the service-to-service connectivity from `parking` (in cluster1) to `inventory-ver1` (in cluster1) and `inventory-ver2` (in cluster2):
+
+```shell
+kubectl exec deploy/parking -- sh -c 'for ((i=1; i<=30; i++)); do curl "$0"; done' "$inventoryFQDN"
+
+Requsting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod <----> in 2nd cluster
+Requsting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod
+Requsting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod
+Requsting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod
+Requsting to Pod(inventory-ver1-74fc59977-wg8br): Inventory-ver1 handler pod <----> in 1st cluster
+Requsting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod
+Requsting to Pod(inventory-ver2-6dc74b45d8-95rsr): Inventory-ver2 handler pod
+Requsting to Pod(inventory-ver2-6dc74b45d8-95rsr): Inventory-ver2 handler pod
+Requsting to Pod(inventory-ver1-74fc59977-wg8br): Inventory-ver1 handler pod
+...
+```
+
+You can see that the traffic is distributed between `inventory-ver1` and `inventory-ver2` according to the configured weights.
+
+## Destroy
+
+Before tearing down resources with Terraform, make sure to delete the custom resources created for the deployments. This removes the VPC Lattice resources they created, such as services and target groups.
+
+```shell
+# delete the resources created in cluster2
+cd ../cluster2
+aws eks update-kubeconfig --name cluster2
+kubectl delete -f inventory-ver2.yaml
+kubectl delete -f inventory-ver2-export.yaml
+
+# delete the resources created in cluster1
+cd ../cluster1
+aws eks update-kubeconfig --name cluster1
+kubectl delete -f inventory-route-bluegreen.yaml
+kubectl delete -f inventory-ver2-import.yaml
+kubectl delete -f inventory-ver1.yaml
+kubectl delete -f inventory-route.yaml
+kubectl delete -f parking.yaml
+kubectl delete -f review.yaml
+kubectl delete -f rate-route-path.yaml
+```
+
+You also have to disassociate the VPCs from the service network and delete the service network itself, since destroying the Terraform-managed Helm chart add-on does not do this for you.
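+
+If you no longer have the identifiers handy, you can look them up first. This is a minimal sketch; `<service-network-id>` is a placeholder, and the service network created by the controller in this pattern is named `my-hotel`:
+
+```shell
+# list service networks to find the identifier of the my-hotel service network
+aws vpc-lattice list-service-networks
+
+# list the VPC associations of that service network to get the association identifiers
+aws vpc-lattice list-service-network-vpc-associations --service-network-identifier <service-network-id>
+```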
+
+```shell
+aws vpc-lattice delete-service-network-vpc-association --service-network-vpc-association-identifier <cluster1-association-id>
+aws vpc-lattice delete-service-network-vpc-association --service-network-vpc-association-identifier <cluster2-association-id>
+
+# delete the service network created by the helm chart
+aws vpc-lattice delete-service-network --service-network-identifier <service-network-id>
+```
+
+To tear down and remove the resources created in this example:
+
+```shell
+# from the cluster1 directory
+terraform apply -destroy -auto-approve
+cd ../cluster2
+terraform apply -destroy -auto-approve
+```
diff --git a/patterns/blueprint-vpc-lattice/cluster1/gatewayclass.yaml b/patterns/blueprint-vpc-lattice/cluster1/gatewayclass.yaml
new file mode 100644
index 0000000000..23f16a9ef0
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster1/gatewayclass.yaml
@@ -0,0 +1,6 @@
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: GatewayClass
+metadata:
+  name: amazon-vpc-lattice
+spec:
+  controllerName: application-networking.k8s.aws/gateway-api-controller
\ No newline at end of file
diff --git a/patterns/blueprint-vpc-lattice/cluster1/inventory-route-bluegreen.yaml b/patterns/blueprint-vpc-lattice/cluster1/inventory-route-bluegreen.yaml
new file mode 100644
index 0000000000..51f9de3b57
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster1/inventory-route-bluegreen.yaml
@@ -0,0 +1,17 @@
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  name: inventory
+spec:
+  parentRefs:
+  - name: my-hotel
+    sectionName: http
+  rules:
+  - backendRefs:
+    - name: inventory-ver1
+      kind: Service
+      port: 80
+      weight: 10
+    - name: inventory-ver2
+      kind: ServiceImport
+      weight: 90
diff --git a/patterns/blueprint-vpc-lattice/cluster1/inventory-route.yaml b/patterns/blueprint-vpc-lattice/cluster1/inventory-route.yaml
new file mode 100644
index 0000000000..3363109e19
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster1/inventory-route.yaml
@@ -0,0 +1,14 @@
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  name: inventory
+spec:
+  parentRefs:
+  - name: my-hotel
+    sectionName: http
+  rules:
+  - backendRefs:
+    - name: inventory-ver1
+      kind: Service
+      port: 80
+      weight: 10
diff --git a/patterns/blueprint-vpc-lattice/cluster1/inventory-ver1.yaml b/patterns/blueprint-vpc-lattice/cluster1/inventory-ver1.yaml
new file mode 100644
index 0000000000..b9778fc7c0
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster1/inventory-ver1.yaml
@@ -0,0 +1,36 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: inventory-ver1
+  labels:
+    app: inventory-ver1
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      app: inventory-ver1
+  template:
+    metadata:
+      labels:
+        app: inventory-ver1
+    spec:
+      containers:
+      - name: inventory-ver1
+        image: public.ecr.aws/x2j8p8w7/http-server:latest
+        env:
+        - name: PodName
+          value: "Inventory-ver1 handler pod"
+
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: inventory-ver1
+spec:
+  selector:
+    app: inventory-ver1
+  ports:
+    - protocol: TCP
+      port: 80
+      targetPort: 8090
diff --git a/patterns/blueprint-vpc-lattice/cluster1/inventory-ver2-import.yaml b/patterns/blueprint-vpc-lattice/cluster1/inventory-ver2-import.yaml
new file mode 100644
index 0000000000..8217c69a96
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster1/inventory-ver2-import.yaml
@@ -0,0 +1,12 @@
+apiVersion: application-networking.k8s.aws/v1alpha1
+kind: ServiceImport
+metadata:
+  name: inventory-ver2
+  annotations:
+    application-networking.k8s.aws/aws-vpc: "your-vpc-id"
+    application-networking.k8s.aws/aws-eks-cluster-name: "cluster2"
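+    # NOTE: "your-vpc-id" above is a placeholder; it presumably needs to be replaced with the VPC ID
+    # of the cluster that exports inventory-ver2 (cluster2 in this pattern) before applying this manifest.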
"lattice-eks-test-2" +spec: + type: ClusterSetIP + ports: + - port: 80 + protocol: TCP diff --git a/patterns/blueprint-vpc-lattice/cluster1/main.tf b/patterns/blueprint-vpc-lattice/cluster1/main.tf new file mode 100755 index 0000000000..51a05a1f5a --- /dev/null +++ b/patterns/blueprint-vpc-lattice/cluster1/main.tf @@ -0,0 +1,178 @@ +provider "aws" { + region = local.region +} + +provider "aws" { + region = "us-east-1" + alias = "virginia" +} + +provider "kubernetes" { + host = module.eks.cluster_endpoint + cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data) + + exec { + api_version = "client.authentication.k8s.io/v1beta1" + command = "aws" + # This requires the awscli to be installed locally where Terraform is executed + args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name] + } +} + +provider "helm" { + kubernetes { + host = module.eks.cluster_endpoint + cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data) + + exec { + api_version = "client.authentication.k8s.io/v1beta1" + command = "aws" + # This requires the awscli to be installed locally where Terraform is executed + args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name] + } + } +} + +data "aws_availability_zones" "available" {} +data "aws_ecrpublic_authorization_token" "token" { + provider = aws.virginia +} +data "aws_caller_identity" "identity" {} + +locals { + name = basename(path.cwd) + region = var.region + + vpc_cidr = "192.168.48.0/20" + azs = slice(data.aws_availability_zones.available.names, 0, 3) + + tags = { + Blueprint = local.name + GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints" + } +} + +################################################################################ +# Cluster +################################################################################ + +module "eks" { + source = "terraform-aws-modules/eks/aws" + version = "~> 19.16" + + cluster_name = local.name + cluster_version = "1.27" # Must be 1.25 or higher + cluster_endpoint_public_access = true + + vpc_id = module.vpc.vpc_id + subnet_ids = module.vpc.private_subnets + + eks_managed_node_groups = { + initial = { + instance_types = ["m5.large"] + min_size = 1 + max_size = 2 + desired_size = 1 + } + } + + tags = local.tags +} + +################################################################################ +# Supporting Resources +################################################################################ + +module "vpc" { + source = "terraform-aws-modules/vpc/aws" + version = "~> 5.0" + + name = local.name + cidr = local.vpc_cidr + + azs = local.azs + private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)] + public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)] + + enable_nat_gateway = true + single_nat_gateway = true + + public_subnet_tags = { + "kubernetes.io/role/elb" = 1 + } + + private_subnet_tags = { + "kubernetes.io/role/internal-elb" = 1 + } + + tags = local.tags +} + +################################################################################ +# EKS Addons (demo application) +################################################################################ + +module "addons" { + source = "aws-ia/eks-blueprints-addons/aws" + version = "~> 1.12.0" + + + cluster_name = module.eks.cluster_name + cluster_endpoint = module.eks.cluster_endpoint + cluster_version = module.eks.cluster_version + oidc_provider_arn = module.eks.oidc_provider_arn + + # EKS Addons + eks_addons = { + 
+    coredns    = {}
+    kube-proxy = {}
+    vpc-cni = {
+      preserve    = true
+      most_recent = true # Must be 1.14.0 or higher
+
+      timeouts = {
+        create = "25m"
+        delete = "10m"
+      }
+    }
+  }
+
+  enable_aws_gateway_api_controller = true
+  aws_gateway_api_controller = {
+    repository_username = data.aws_ecrpublic_authorization_token.token.user_name
+    repository_password = data.aws_ecrpublic_authorization_token.token.password
+    chart_version       = "v1.0.1"
+    # awsRegion, clusterVpcId, clusterName, awsAccountId are required when IMDS is NOT available, e.g. Fargate or self-managed clusters with IMDS access blocked
+    set = [{
+      name  = "clusterVpcId"
+      value = module.vpc.vpc_id
+      },
+      {
+        name  = "clusterName"
+        value = module.eks.cluster_name
+      },
+      {
+        name  = "defaultServiceNetwork"
+        value = "my-hotel"
+      }
+    ]
+  }
+
+  tags = local.tags
+}
+
+data "aws_ec2_managed_prefix_list" "ipv4" {
+  name = "com.amazonaws.${local.region}.vpc-lattice"
+}
+
+# Configure the cluster security group to receive traffic from the VPC Lattice network. Security groups for
+# all Pods that communicate with VPC Lattice must allow ingress from the VPC Lattice managed prefix lists.
+# Lattice has both IPv4 and IPv6 prefix lists available.
+resource "aws_security_group_rule" "vpc_lattice_ipv4_ingress" {
+  description       = "VPC lattice ipv4 ingress"
+  type              = "ingress"
+  security_group_id = module.eks.cluster_security_group_id
+  from_port         = 0
+  to_port           = 0
+  protocol          = "-1"
+  prefix_list_ids   = [data.aws_ec2_managed_prefix_list.ipv4.id]
+}
\ No newline at end of file
diff --git a/patterns/blueprint-vpc-lattice/cluster1/my-gateway-hotel.yml b/patterns/blueprint-vpc-lattice/cluster1/my-gateway-hotel.yml
new file mode 100644
index 0000000000..764337ca08
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster1/my-gateway-hotel.yml
@@ -0,0 +1,22 @@
+---
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: GatewayClass
+metadata:
+  name: amazon-vpc-lattice
+spec:
+  controllerName: application-networking.k8s.aws/gateway-api-controller
+
+---
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+  name: my-hotel
+spec:
+  gatewayClassName: amazon-vpc-lattice
+  listeners:
+  - name: http
+    protocol: HTTP
+    port: 80
+    allowedRoutes:
+      namespaces:
+        from: All
\ No newline at end of file
diff --git a/patterns/blueprint-vpc-lattice/cluster1/outputs.tf b/patterns/blueprint-vpc-lattice/cluster1/outputs.tf
new file mode 100644
index 0000000000..42ce6f201d
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster1/outputs.tf
@@ -0,0 +1,4 @@
+output "configure_kubectl" {
+  description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
+  value       = "aws eks --region ${local.region} update-kubeconfig --name ${module.eks.cluster_name}"
+}
\ No newline at end of file
diff --git a/patterns/blueprint-vpc-lattice/cluster1/parking.yaml b/patterns/blueprint-vpc-lattice/cluster1/parking.yaml
new file mode 100644
index 0000000000..6b11bd6c6f
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster1/parking.yaml
@@ -0,0 +1,36 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: parking
+  labels:
+    app: parking
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      app: parking
+  template:
+    metadata:
+      labels:
+        app: parking
+    spec:
+      containers:
+      - name: parking
+        image: public.ecr.aws/x2j8p8w7/http-server:latest
+        env:
+        - name: PodName
+          value: "parking handler pod"
+
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: parking
+spec:
+  selector:
+    app: parking
+  ports:
+    - protocol: TCP
+      port: 80
+      targetPort: 8090
diff --git a/patterns/blueprint-vpc-lattice/cluster1/rate-route-path.yaml b/patterns/blueprint-vpc-lattice/cluster1/rate-route-path.yaml
new file mode 100644
index 0000000000..facba543cd
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster1/rate-route-path.yaml
@@ -0,0 +1,25 @@
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  name: rates
+spec:
+  parentRefs:
+  - name: my-hotel
+    sectionName: http
+  rules:
+  - backendRefs:
+    - name: parking
+      kind: Service
+      port: 80
+    matches:
+    - path:
+        type: PathPrefix
+        value: /parking
+  - backendRefs:
+    - name: review
+      kind: Service
+      port: 80
+    matches:
+    - path:
+        type: PathPrefix
+        value: /review
diff --git a/patterns/blueprint-vpc-lattice/cluster1/review.yaml b/patterns/blueprint-vpc-lattice/cluster1/review.yaml
new file mode 100644
index 0000000000..bd0fd461ab
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster1/review.yaml
@@ -0,0 +1,36 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: review
+  labels:
+    app: review
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      app: review
+  template:
+    metadata:
+      labels:
+        app: review
+    spec:
+      containers:
+      - name: aug24-review
+        image: public.ecr.aws/x2j8p8w7/http-server:latest
+        env:
+        - name: PodName
+          value: "review handler pod"
+
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: review
+spec:
+  selector:
+    app: review
+  ports:
+    - protocol: TCP
+      port: 80
+      targetPort: 8090
diff --git a/patterns/blueprint-vpc-lattice/cluster1/variable.tf b/patterns/blueprint-vpc-lattice/cluster1/variable.tf
new file mode 100644
index 0000000000..592b156013
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster1/variable.tf
@@ -0,0 +1,5 @@
+variable "region" {
+  description = "aws region in which the resources will be deployed"
+  type        = string
+  default     = "ap-southeast-2"
+}
\ No newline at end of file
diff --git a/patterns/blueprint-vpc-lattice/cluster1/versions.tf b/patterns/blueprint-vpc-lattice/cluster1/versions.tf
new file mode 100644
index 0000000000..ac27b0119e
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster1/versions.tf
@@ -0,0 +1,18 @@
+terraform {
+  required_version = ">= 1.0"
+
+  required_providers {
+    aws = {
+      source  = "hashicorp/aws"
+      version = ">= 4.47"
+    }
+    helm = {
+      source  = "hashicorp/helm"
+      version = ">= 2.9"
+    }
+    kubernetes = {
+      source  = "hashicorp/kubernetes"
+      version = ">= 2.20"
+    }
+  }
+}
\ No newline at end of file
diff --git a/patterns/blueprint-vpc-lattice/cluster2/inventory-ver2-export.yaml b/patterns/blueprint-vpc-lattice/cluster2/inventory-ver2-export.yaml
new file mode 100644
index 0000000000..443e01c323
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster2/inventory-ver2-export.yaml
@@ -0,0 +1,6 @@
+apiVersion: application-networking.k8s.aws/v1alpha1
+kind: ServiceExport
+metadata:
+  name: inventory-ver2
+  annotations:
+    application-networking.k8s.aws/federation: "amazon-vpc-lattice"
diff --git a/patterns/blueprint-vpc-lattice/cluster2/inventory-ver2.yaml b/patterns/blueprint-vpc-lattice/cluster2/inventory-ver2.yaml
new file mode 100644
index 0000000000..1e721a423c
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster2/inventory-ver2.yaml
@@ -0,0 +1,36 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: inventory-ver2
+  labels:
+    app: inventory-ver2
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      app: inventory-ver2
+  template:
+    metadata:
+      labels:
+        app: inventory-ver2
+    spec:
+      containers:
+      - name: inventory-ver2
+        image: public.ecr.aws/x2j8p8w7/http-server:latest
+        env:
+        - name: PodName
+          value: "Inventory-ver2 handler pod"
+
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: inventory-ver2
+spec:
+  selector:
+    app: inventory-ver2
+  ports:
+    - protocol: TCP
+      port: 80
+      targetPort: 8090
diff --git a/patterns/blueprint-vpc-lattice/cluster2/main.tf b/patterns/blueprint-vpc-lattice/cluster2/main.tf
new file mode 100644
index 0000000000..372636be01
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster2/main.tf
@@ -0,0 +1,180 @@
+provider "aws" {
+  region = local.region
+}
+
+provider "aws" {
+  region = "us-east-1"
+  alias  = "virginia"
+}
+
+provider "kubernetes" {
+  host                   = module.eks.cluster_endpoint
+  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
+
+  exec {
+    api_version = "client.authentication.k8s.io/v1beta1"
+    command     = "aws"
+    # This requires the awscli to be installed locally where Terraform is executed
+    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
+  }
+}
+
+provider "helm" {
+  kubernetes {
+    host                   = module.eks.cluster_endpoint
+    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
+
+    exec {
+      api_version = "client.authentication.k8s.io/v1beta1"
+      command     = "aws"
+      # This requires the awscli to be installed locally where Terraform is executed
+      args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
+    }
+  }
+}
+
+data "aws_availability_zones" "available" {}
+data "aws_ecrpublic_authorization_token" "token" {
+  provider = aws.virginia
+}
+
+locals {
+  name   = basename(path.cwd)
+  region = var.region
+
+  vpc_cidr = "192.168.48.0/20"
+  azs      = slice(data.aws_availability_zones.available.names, 0, 3)
+
+  tags = {
+    Blueprint  = local.name
+    GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
+  }
+}
+
+################################################################################
+# Cluster
+################################################################################
+
+module "eks" {
+  source  = "terraform-aws-modules/eks/aws"
+  version = "~> 19.16"
+
+  cluster_name                   = local.name
+  cluster_version                = "1.27" # Must be 1.25 or higher
+  cluster_endpoint_public_access = true
+
+  vpc_id     = module.vpc.vpc_id
+  subnet_ids = module.vpc.private_subnets
+
+  eks_managed_node_groups = {
+    initial = {
+      instance_types = ["m5.large"]
+
+      min_size     = 1
+      max_size     = 2
+      desired_size = 1
+    }
+  }
+
+  tags = local.tags
+}
+
+################################################################################
+# Supporting Resources
+################################################################################
+
+module "vpc" {
+  source  = "terraform-aws-modules/vpc/aws"
+  version = "~> 5.0"
+
+  name = local.name
+  cidr = local.vpc_cidr
+
+  azs             = local.azs
+  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
+  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]
+
+  enable_nat_gateway = true
+  single_nat_gateway = true
+
+  public_subnet_tags = {
+    "kubernetes.io/role/elb" = 1
+  }
+
+  private_subnet_tags = {
+    "kubernetes.io/role/internal-elb" = 1
+  }
+
+  tags = local.tags
+}
+
+################################################################################
+# EKS Addons (demo application)
+################################################################################
+
+module "addons" {
+  source  = "aws-ia/eks-blueprints-addons/aws"
+  version = "~> 1.12.0"
+
+  cluster_name      = module.eks.cluster_name
+  cluster_endpoint  = module.eks.cluster_endpoint
+  cluster_version   = module.eks.cluster_version
+  oidc_provider_arn = module.eks.oidc_provider_arn
+
+  # EKS Addons
+  eks_addons = {
+    coredns    = {}
+    kube-proxy = {}
+    vpc-cni = {
+      preserve    = true
+      most_recent = true # Must be 1.14.0 or higher
+
+      timeouts = {
+        create = "25m"
+        delete = "10m"
+      }
+    }
+  }
+
+  enable_aws_gateway_api_controller = true
+  aws_gateway_api_controller = {
+    chart_version = "v1.0.1"
+    repository    = "oci://public.ecr.aws/aws-application-networking-k8s"
+
+    repository_username = data.aws_ecrpublic_authorization_token.token.user_name
+    repository_password = data.aws_ecrpublic_authorization_token.token.password
+    # awsRegion, clusterVpcId, clusterName, awsAccountId are required when IMDS is NOT available, e.g. Fargate or self-managed clusters with IMDS access blocked
+    set = [{
+      name  = "clusterVpcId"
+      value = module.vpc.vpc_id
+      },
+      {
+        name  = "clusterName"
+        value = module.eks.cluster_name
+      },
+      {
+        name  = "defaultServiceNetwork"
+        value = "my-hotel"
+      }
+    ]
+  }
+
+  tags = local.tags
+}
+
+data "aws_ec2_managed_prefix_list" "ipv4" {
+  name = "com.amazonaws.${local.region}.vpc-lattice"
+}
+
+# Configure the cluster security group to receive traffic from the VPC Lattice network. Security groups for
+# all Pods that communicate with VPC Lattice must allow ingress from the VPC Lattice managed prefix lists.
+# Lattice has both IPv4 and IPv6 prefix lists available.
+resource "aws_security_group_rule" "vpc_lattice_ipv4_ingress" {
+  description       = "VPC lattice ipv4 ingress"
+  type              = "ingress"
+  security_group_id = module.eks.cluster_security_group_id
+  from_port         = 0
+  to_port           = 0
+  protocol          = "-1"
+  prefix_list_ids   = [data.aws_ec2_managed_prefix_list.ipv4.id]
+}
diff --git a/patterns/blueprint-vpc-lattice/cluster2/outputs.tf b/patterns/blueprint-vpc-lattice/cluster2/outputs.tf
new file mode 100644
index 0000000000..42ce6f201d
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster2/outputs.tf
@@ -0,0 +1,4 @@
+output "configure_kubectl" {
+  description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
+  value       = "aws eks --region ${local.region} update-kubeconfig --name ${module.eks.cluster_name}"
+}
\ No newline at end of file
diff --git a/patterns/blueprint-vpc-lattice/cluster2/variable.tf b/patterns/blueprint-vpc-lattice/cluster2/variable.tf
new file mode 100644
index 0000000000..592b156013
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster2/variable.tf
@@ -0,0 +1,5 @@
+variable "region" {
+  description = "aws region in which the resources will be deployed"
+  type        = string
+  default     = "ap-southeast-2"
+}
\ No newline at end of file
diff --git a/patterns/blueprint-vpc-lattice/cluster2/versions.tf b/patterns/blueprint-vpc-lattice/cluster2/versions.tf
new file mode 100644
index 0000000000..ac27b0119e
--- /dev/null
+++ b/patterns/blueprint-vpc-lattice/cluster2/versions.tf
@@ -0,0 +1,18 @@
+terraform {
+  required_version = ">= 1.0"
+
+  required_providers {
+    aws = {
+      source  = "hashicorp/aws"
+      version = ">= 4.47"
+    }
+    helm = {
+      source  = "hashicorp/helm"
+      version = ">= 2.9"
+    }
+    kubernetes = {
+      source  = "hashicorp/kubernetes"
+      version = ">= 2.20"
+    }
+  }
+}
\ No newline at end of file
diff --git a/patterns/blueprint-vpc-lattice/img/img_1.png b/patterns/blueprint-vpc-lattice/img/img_1.png
new file mode 100644
index 0000000000..320ea70f71
Binary files /dev/null and b/patterns/blueprint-vpc-lattice/img/img_1.png differ
diff --git a/patterns/blueprint-vpc-lattice/img/img_2.png b/patterns/blueprint-vpc-lattice/img/img_2.png
new file mode 100644
index 0000000000..7cbfd49d70
Binary files /dev/null and b/patterns/blueprint-vpc-lattice/img/img_2.png differ
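
Note: `cluster1/main.tf` and `cluster2/main.tf` above only open the cluster security group to the VPC Lattice IPv4 managed prefix list, even though the accompanying comment mentions that both IPv4 and IPv6 prefix lists exist. A sketch of an IPv6 counterpart is shown below; the prefix list name is an assumption based on the IPv4 naming convention, so verify it in your Region (for example with `aws ec2 describe-managed-prefix-lists`) before using it.

```hcl
# Sketch (assumption): IPv6 equivalent of the vpc_lattice_ipv4_ingress rule defined in main.tf.
# The prefix list name below is assumed, not taken from this pattern.
data "aws_ec2_managed_prefix_list" "ipv6" {
  name = "com.amazonaws.${local.region}.ipv6.vpc-lattice"
}

resource "aws_security_group_rule" "vpc_lattice_ipv6_ingress" {
  description       = "VPC lattice ipv6 ingress"
  type              = "ingress"
  security_group_id = module.eks.cluster_security_group_id
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  prefix_list_ids   = [data.aws_ec2_managed_prefix_list.ipv6.id]
}
```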