diff --git a/examples/karpenter-fargate/README.md b/examples/karpenter-fargate/README.md
new file mode 100644
index 0000000000..413bbcb04d
--- /dev/null
+++ b/examples/karpenter-fargate/README.md
@@ -0,0 +1,232 @@
+# EKS Cluster with Karpenter running on Fargate
+
+Karpenter is an open-source node provisioning project built for Kubernetes. Karpenter automatically launches just the right compute resources to handle your cluster's applications. It is designed to let you take full advantage of the cloud with fast and simple compute provisioning for Kubernetes clusters.
+
+This example shows how to deploy and leverage Karpenter for autoscaling and automatic node updates. Karpenter and the other add-ons are deployed on Fargate so that no node groups are required. This example deploys the following resources:
+
+- VPC, 3 Private Subnets and 3 Public Subnets.
+- Internet Gateway for Public Subnets and NAT Gateway for Private Subnets.
+- AWS EKS Cluster (control plane).
+- AWS EKS Fargate Profiles for the `kube-system` and `karpenter` namespaces, which host the `coredns`, `karpenter`, and `aws-load-balancer-controller` add-ons; additional profiles can be added as needed.
+- AWS EKS managed add-ons `vpc-cni` and `kube-proxy`.
+- Karpenter Helm Chart.
+- AWS SQS Queue to enable interruption handling, so Spot nodes are gracefully cordoned and drained when they are interrupted. Pods that require checkpointing or other forms of graceful draining within the two-minute Spot interruption notice depend on this.
+- A default Karpenter Provisioner that uses the Bottlerocket AMI and refreshes nodes every 24 hours.
+- Self-managed CoreDNS add-on deployed through a Helm chart. The default CoreDNS deployment provided by AWS EKS is removed and replaced with a self-managed CoreDNS deployment, and the `kube-dns` service is updated to allow Helm to assume control.
+- AWS Load Balancer Controller add-on deployed through a Helm chart. The default AWS Load Balancer Controller add-on configuration is overridden so that it can be deployed on Fargate compute.
+- The [game-2048](examples/karpenter-fargate/provisioners/sample_deployment.yaml) application is provided to demonstrate how Karpenter scales nodes based on workload constraints such as nodeSelector, topologySpreadConstraints, and podAntiAffinity.
+
+⚠️ The management of CoreDNS as demonstrated in this example is intended to be used on new clusters. Existing clusters with existing workloads will see downtime if the CoreDNS deployment is modified as shown here.
+
+## How to Deploy
+
+### Prerequisites
+
+Ensure that the following tools are installed on your workstation before working with this module and running Terraform plan and apply:
+
+1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
+2. [aws-iam-authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html)
+3. [kubectl](https://kubernetes.io/docs/tasks/tools/)
+4. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
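+
+Optionally, you can confirm the tools are available on your `PATH` before moving on. These are the standard version commands for each tool (output will vary by installed version):
+
+```shell
+aws --version
+aws-iam-authenticator version
+kubectl version --client
+terraform version
+```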
+
+### Deployment Steps
+
+#### Step 1: Clone the repo using the command below
+
+```sh
+git clone https://github.com/aws-ia/terraform-aws-eks-blueprints.git
+```
+
+#### Step 2: Run Terraform INIT
+
+To initialize a working directory with configuration files:
+
+```sh
+cd examples/karpenter-fargate/
+terraform init
+```
+
+#### Step 3: Run Terraform PLAN for the SQS Queue
+
+To verify the resources created by this execution. The queue is targeted first because its ARN is passed to the Karpenter add-on configuration.
+
+```sh
+terraform plan -target aws_sqs_queue.karpenter_interruption_queue
+```
+
+#### Step 4: Run Terraform APPLY for the SQS Queue
+
+```shell
+terraform apply -target aws_sqs_queue.karpenter_interruption_queue
+```
+
+Enter `yes` to apply.
+
+#### Step 5: Run Terraform PLAN for everything
+
+To verify the resources created by this execution:
+
+```sh
+terraform plan
+```
+
+#### Step 6: Finally, Terraform APPLY for everything
+
+```shell
+terraform apply
+```
+
+Enter `yes` to apply.
+
+### Configure kubectl and test cluster
+
+The EKS cluster name can be taken from the Terraform output or from the AWS Console. The following command updates the `kubeconfig` on your local machine so that you can run `kubectl` commands against your EKS cluster.
+
+#### Step 7: Run the update-kubeconfig command
+
+The `~/.kube/config` file gets updated with the cluster details and certificate from the command below:
+
+```shell
+aws eks --region us-west-2 update-kubeconfig --name karpenter-fargate
+```
+
+#### Step 8: List all the worker nodes by running the command below
+
+You should see multiple Fargate nodes up and running; Karpenter only provisions EC2 nodes once workloads that need them are deployed.
+
+```shell
+kubectl get nodes
+
+# Output should look like below
+NAME                                                STATUS   ROLES   AGE    VERSION
+fargate-ip-10-0-10-59.us-west-2.compute.internal    Ready            106s   v1.23.12-eks-1558457
+fargate-ip-10-0-12-102.us-west-2.compute.internal   Ready            114s   v1.23.12-eks-1558457
+fargate-ip-10-0-12-138.us-west-2.compute.internal   Ready            2m5s   v1.23.12-eks-1558457
+fargate-ip-10-0-12-148.us-west-2.compute.internal   Ready            53s    v1.23.12-eks-1558457
+fargate-ip-10-0-12-187.us-west-2.compute.internal   Ready            109s   v1.23.12-eks-1558457
+fargate-ip-10-0-12-188.us-west-2.compute.internal   Ready            15s    v1.23.12-eks-1558457
+fargate-ip-10-0-12-54.us-west-2.compute.internal    Ready            113s   v1.23.12-eks-1558457
+```
+
+#### Step 9: List all the pods running in the karpenter namespace
+
+```shell
+kubectl get pods -n karpenter
+
+# Output should look like below
+NAME                        READY   STATUS    RESTARTS   AGE
+karpenter-cc495bbd6-kclbd   2/2     Running   0          1m
+karpenter-cc495bbd6-x6t5m   2/2     Running   0          1m
+
+# Get the SQS interruption queue name from the karpenter-global-settings configmap
+kubectl get configmap karpenter-global-settings \
+  -o=jsonpath="{.data.aws\.interruptionQueueName}" \
+  -n karpenter
+```
+
+#### Step 10: List the Karpenter provisioners deployed
+
+```shell
+kubectl get provisioners
+
+# Output should look like below
+NAME      AGE
+default   1m
+```
+
+#### Step 11: Deploy a workload on the Karpenter provisioner
+
+Terraform has configured one `default` provisioner, and a sample deployment is provided to exercise it.
+
+Deploy the sample workload on the `default` provisioner:
+
+```shell
+kubectl apply -f provisioners/sample_deployment.yaml
+```
+
+> **Warning**
+> Because of known limitations with topology spread, the pods might not spread evenly across availability zones.
+> https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#known-limitations
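+
+For reference, these are the scheduling constraints in `provisioners/sample_deployment.yaml` (shown in full later in this change) that Karpenter has to satisfy — Spot capacity, `c`-family instances, zone spread, and one pod per node (excerpt from the pod template):
+
+```yaml
+nodeSelector:
+  karpenter.sh/capacity-type: spot
+  karpenter.k8s.aws/instance-category: c
+topologySpreadConstraints:
+  - maxSkew: 1
+    topologyKey: topology.kubernetes.io/zone
+    whenUnsatisfiable: DoNotSchedule
+affinity:
+  podAntiAffinity:
+    requiredDuringSchedulingIgnoredDuringExecution:
+      - labelSelector:
+          matchExpressions:
+            - key: app.kubernetes.io/name
+              operator: In
+              values:
+                - app-2048
+        topologyKey: kubernetes.io/hostname
+```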
+
+You can run this command to view the Karpenter controller logs while the nodes are being provisioned:
+
+```shell
+kubectl logs --selector app.kubernetes.io/name=karpenter -n karpenter
+```
+
+After a couple of minutes, you should see new nodes being added by Karpenter to accommodate the game-2048 application's EC2 instance family, capacity type, availability zone placement, and pod anti-affinity requirements.
+
+```shell
+kubectl get node \
+  --selector=type=karpenter \
+  -L karpenter.sh/provisioner-name \
+  -L topology.kubernetes.io/zone \
+  -L karpenter.sh/capacity-type \
+  -L karpenter.k8s.aws/instance-family
+
+# Output should look like below
+NAME                                        STATUS   ROLES   AGE   VERSION                PROVISIONER-NAME   ZONE         CAPACITY-TYPE   INSTANCE-FAMILY
+ip-10-0-10-47.us-west-2.compute.internal    Ready            73s   v1.23.13-eks-6022eca   default            us-west-2a   spot            c5d
+ip-10-0-11-132.us-west-2.compute.internal   Ready            72s   v1.23.13-eks-6022eca   default            us-west-2b   spot            c5
+ip-10-0-11-161.us-west-2.compute.internal   Ready            72s   v1.23.13-eks-6022eca   default            us-west-2b   spot            c6id
+ip-10-0-11-163.us-west-2.compute.internal   Ready            72s   v1.23.13-eks-6022eca   default            us-west-2b   spot            c6in
+ip-10-0-12-12.us-west-2.compute.internal    Ready            73s   v1.23.13-eks-6022eca   default            us-west-2c   spot            c5d
+```
+
+Test by listing the game-2048 pods. You should see that all the pods are running on different nodes because of the pod anti-affinity rule.
+
+```shell
+kubectl get pods -o wide
+
+# Output should look like below
+NAME                               READY   STATUS    RESTARTS   AGE     IP            NODE                                        NOMINATED NODE   READINESS GATES
+deployment-2048-758d5bfc75-gm97g   1/1     Running   0          2m16s   10.0.11.221   ip-10-0-11-161.us-west-2.compute.internal
+deployment-2048-758d5bfc75-p9k4m   1/1     Running   0          2m16s   10.0.11.32    ip-10-0-11-132.us-west-2.compute.internal
+deployment-2048-758d5bfc75-r48vx   1/1     Running   0          2m16s   10.0.12.144   ip-10-0-12-12.us-west-2.compute.internal
+deployment-2048-758d5bfc75-vjxg6   1/1     Running   0          2m16s   10.0.11.11    ip-10-0-11-163.us-west-2.compute.internal
+deployment-2048-758d5bfc75-vkpfc   1/1     Running   0          2m16s   10.0.10.111   ip-10-0-10-47.us-west-2.compute.internal
+```
+
+Test that the sample application is now available.
+
+```shell
+kubectl get ingress/ingress-2048
+
+# Output should look like this
+NAME           CLASS   HOSTS   ADDRESS                                                                  PORTS   AGE
+ingress-2048   alb     *       k8s-default-ingress2-97b28f4dd2-1471347110.us-west-2.elb.amazonaws.com   80      2m53s
+```
+
+Open your browser and access the application via the ALB address http://k8s-default-ingress2-97b28f4dd2-1471347110.us-west-2.elb.amazonaws.com/
+
+⚠️ You might need to wait a few minutes, and then refresh your browser.
+
+We now have:
+
+- 7 Fargate instances
+- 5 instances from the default Karpenter provisioner
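+
+Optionally, before tearing the environment down, you can scale the sample deployment to watch Karpenter add more Spot nodes and then reclaim them. This quick experiment is not part of the original walkthrough; it assumes the sample workload from the previous step is still deployed in the `default` namespace:
+
+```shell
+# Each additional replica needs its own node because of the pod anti-affinity rule
+kubectl scale deployment deployment-2048 --replicas=8
+
+# Watch Karpenter-provisioned nodes appear
+kubectl get nodes --selector=type=karpenter --watch
+
+# Scale back down; empty nodes are reclaimed after ttlSecondsAfterEmpty (120s in the default provisioner)
+kubectl scale deployment deployment-2048 --replicas=5
+```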
+
+## How to Destroy
+
+NOTE: Make sure you delete all deployments first so that Karpenter can clean up the nodes it created. Ensure no Karpenter-provisioned nodes are still running before running `terraform destroy`; otherwise the EKS cluster will be destroyed but some nodes may be left running in EC2.
+
+To clean up your environment, delete the sample workload and then destroy the Terraform modules in reverse order.
+
+Delete the sample workload on the `default` provisioner:
+
+```shell
+kubectl delete -f provisioners/sample_deployment.yaml
+```
+
+Destroy the Karpenter provisioner, the Kubernetes add-ons, the EKS cluster, the VPC, the Karpenter IAM role, and the SQS queue:
+
+```shell
+terraform destroy -target="kubectl_manifest.karpenter_provisioner" -auto-approve
+# Wait for 1-2 minutes to allow Karpenter to delete the empty nodes
+terraform destroy -target="module.eks_blueprints_kubernetes_addons" -auto-approve
+terraform destroy -target="module.eks_blueprints" -auto-approve
+terraform destroy -target="module.vpc" -auto-approve
+terraform destroy -target="aws_iam_role.karpenter" -auto-approve
+terraform destroy -target="aws_sqs_queue.karpenter_interruption_queue" -auto-approve
+```
diff --git a/examples/karpenter-fargate/main.tf b/examples/karpenter-fargate/main.tf
new file mode 100644
index 0000000000..9d252f12d4
--- /dev/null
+++ b/examples/karpenter-fargate/main.tf
@@ -0,0 +1,315 @@
+provider "aws" {
+  region = local.region
+}
+
+provider "kubernetes" {
+  host                   = module.eks_blueprints.eks_cluster_endpoint
+  cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
+  token                  = data.aws_eks_cluster_auth.this.token
+}
+
+provider "helm" {
+  kubernetes {
+    host                   = module.eks_blueprints.eks_cluster_endpoint
+    cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
+    token                  = data.aws_eks_cluster_auth.this.token
+  }
+}
+
+provider "kubectl" {
+  apply_retry_count      = 10
+  host                   = module.eks_blueprints.eks_cluster_endpoint
+  cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
+  load_config_file       = false
+  token                  = data.aws_eks_cluster_auth.this.token
+}
+
+data "aws_eks_cluster_auth" "this" {
+  name = module.eks_blueprints.eks_cluster_id
+}
+
+data "aws_availability_zones" "available" {}
+
+locals {
+  name   = basename(path.cwd)
+  region = "us-west-2"
+
+  vpc_cidr = "10.0.0.0/16"
+  azs      = slice(data.aws_availability_zones.available.names, 0, 3)
+
+  tags = {
+    Blueprint  = local.name
+    GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
+  }
+}
+
+#tfsec:ignore:aws-sqs-enable-queue-encryption
+resource "aws_sqs_queue" "karpenter_interruption_queue" {
+  name_prefix               = "karpenter"
+  message_retention_seconds = "300"
+  sqs_managed_sse_enabled   = true
+  tags                      = local.tags
+}
+
+#---------------------------------------------------------------
+# EKS Blueprints
+#---------------------------------------------------------------
+
+module "eks_blueprints" {
+  source = "../.."
+
+  cluster_name    = local.name
+  cluster_version = "1.23"
+
+  vpc_id             = module.vpc.vpc_id
+  private_subnet_ids = module.vpc.private_subnets
+
+  #----------------------------------------------------------------------------------------------------------#
+  # The security groups used in this module are created by the upstream module terraform-aws-eks (https://github.com/terraform-aws-modules/terraform-aws-eks).
+  # The upstream module implements security groups based on the best practices doc https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html,
+  # so the security groups are restrictive by default.
+  # Users need to enable rules for the specific ports required by their applications or add-ons.
+  # See the notes below for each rule used in this example.
+  #----------------------------------------------------------------------------------------------------------#
+  node_security_group_additional_rules = {
+    # Extend node-to-node security group rules. Recommended and required for the add-ons
+    ingress_self_all = {
+      description = "Node to node all ports/protocols"
+      protocol    = "-1"
+      from_port   = 0
+      to_port     = 0
+      type        = "ingress"
+      self        = true
+    }
+    # Recommended outbound traffic for Node groups
+    egress_all = {
+      description      = "Node all egress"
+      protocol         = "-1"
+      from_port        = 0
+      to_port          = 0
+      type             = "egress"
+      cidr_blocks      = ["0.0.0.0/0"]
+      ipv6_cidr_blocks = ["::/0"]
+    }
+
+    # Allows Control Plane Nodes to talk to Worker nodes on Karpenter ports.
+    # This can be extended to additional ports as required by other add-ons, e.g., metrics-server 4443, spark-operator 8080, etc.
+    # Change this according to your security requirements if needed
+    ingress_nodes_karpenter_port = {
+      description                   = "Cluster API to Nodegroup for Karpenter"
+      protocol                      = "tcp"
+      from_port                     = 8443
+      to_port                       = 8443
+      type                          = "ingress"
+      source_cluster_security_group = true
+    }
+  }
+
+  # Add the karpenter.sh/discovery tag so that it can be used as the securityGroupSelector in the Karpenter provisioner
+  node_security_group_tags = {
+    "karpenter.sh/discovery/${local.name}" = local.name
+  }
+
+  # Add the Karpenter IAM role to the aws-auth config map to allow the nodes it launches to register with the cluster
+  map_roles = [
+    {
+      rolearn  = aws_iam_role.karpenter.arn
+      username = "system:node:{{EC2PrivateDNSName}}"
+      groups = [
+        "system:bootstrappers",
+        "system:nodes"
+      ]
+    }
+  ]
+
+  # EKS FARGATE PROFILES
+  # We recommend using Fargate profiles to place your critical workloads and add-ons,
+  # and then relying on Karpenter to scale the rest of your workloads.
+  # The kube-system pods are filtered with labels since not all add-ons can run on Fargate (e.g.
aws-node-termination-handler) + fargate_profiles = { + # Providing compute for the kube-system namespace where addons that can run on Fargate reside + kube_system = { + fargate_profile_name = "kube-system" + fargate_profile_namespaces = [{ + namespace = "kube-system" + }] + subnet_ids = module.vpc.private_subnets + }, + # Providing compute for the karpenter namespace + karpenter = { + fargate_profile_name = "karpenter" + fargate_profile_namespaces = [{ + namespace = "karpenter" + }] + subnet_ids = module.vpc.private_subnets + } + } + + tags = local.tags +} + +module "eks_blueprints_kubernetes_addons" { + depends_on = [module.eks_blueprints.fargate_profiles] + + source = "../../modules/kubernetes-addons" + + eks_cluster_id = module.eks_blueprints.eks_cluster_id + eks_cluster_endpoint = module.eks_blueprints.eks_cluster_endpoint + eks_oidc_provider = module.eks_blueprints.oidc_provider + eks_cluster_version = module.eks_blueprints.eks_cluster_version + + enable_amazon_eks_vpc_cni = true + amazon_eks_vpc_cni_config = { + most_recent = true + } + + enable_amazon_eks_kube_proxy = true + amazon_eks_kube_proxy_config = { + most_recent = true + } + + remove_default_coredns_deployment = true + enable_self_managed_coredns = true + self_managed_coredns_helm_config = { + # Sets the correct annotations to ensure the Fargate provisioner is used and not the Karpenter provisioner + compute_type = "fargate" + kubernetes_version = module.eks_blueprints.eks_cluster_version + } + enable_coredns_cluster_proportional_autoscaler = true + + karpenter_sqs_queue_arn = aws_sqs_queue.karpenter_interruption_queue.arn + enable_karpenter = true + + enable_aws_load_balancer_controller = true + aws_load_balancer_controller_helm_config = { + set_values = [ + { + name = "vpcId" + value = module.vpc.vpc_id + }, + { + name = "podDisruptionBudget.maxUnavailable" + value = 1 + } + ] + } + + tags = local.tags +} + +# Add the Karpenter Provisioners IAM Role +# https://karpenter.sh/v0.19.0/getting-started/getting-started-with-terraform/#create-the-karpentercontroller-iam-role +resource "aws_iam_role" "karpenter" { + name = "${local.name}-karpenter-role" + + assume_role_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Sid = "" + Principal = { + Service = "ec2.amazonaws.com" + } + }, + ] + }) +} + +data "aws_iam_policy" "eks_cni" { + arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy" +} + +data "aws_iam_policy" "eks_worker_node" { + arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy" +} + +data "aws_iam_policy" "ecr_read_only" { + arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly" +} + +data "aws_iam_policy" "instance_core" { + arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore" +} + +resource "aws_iam_role_policy_attachment" "karpenter_eks_cni" { + role = aws_iam_role.karpenter.name + policy_arn = data.aws_iam_policy.eks_cni.arn +} + +resource "aws_iam_role_policy_attachment" "karpenter_eks_worker_node" { + role = aws_iam_role.karpenter.name + policy_arn = data.aws_iam_policy.eks_worker_node.arn +} + +resource "aws_iam_role_policy_attachment" "karpenter_ecr_read_only" { + role = aws_iam_role.karpenter.name + policy_arn = data.aws_iam_policy.ecr_read_only.arn +} + +resource "aws_iam_role_policy_attachment" "karpenter_instance_core" { + role = aws_iam_role.karpenter.name + policy_arn = data.aws_iam_policy.instance_core.arn +} + +resource "aws_iam_instance_profile" "karpenter" { + name = "${local.name}-karpenter-instance-profile" + role 
= aws_iam_role.karpenter.name +} + +# Add the default provisioner for Karpenter autoscaler +data "kubectl_path_documents" "karpenter_provisioners" { + pattern = "${path.module}/provisioners/default_provisioner*.yaml" + vars = { + azs = join(",", local.azs) + iam-instance-profile-id = "${local.name}-karpenter-instance-profile" + eks-cluster-id = local.name + eks-vpc_name = local.name + } +} + +resource "kubectl_manifest" "karpenter_provisioner" { + depends_on = [module.eks_blueprints_kubernetes_addons] + for_each = toset(data.kubectl_path_documents.karpenter_provisioners.documents) + yaml_body = each.value +} + +#--------------------------------------------------------------- +# Supporting Resources +#--------------------------------------------------------------- + +module "vpc" { + source = "terraform-aws-modules/vpc/aws" + version = "~> 3.0" + + name = local.name + cidr = local.vpc_cidr + + azs = local.azs + public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k)] + private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 10)] + + enable_nat_gateway = true + single_nat_gateway = true + enable_dns_hostnames = true + + # Manage so we can name + manage_default_network_acl = true + default_network_acl_tags = { Name = "${local.name}-default" } + manage_default_route_table = true + default_route_table_tags = { Name = "${local.name}-default" } + manage_default_security_group = true + default_security_group_tags = { Name = "${local.name}-default" } + + public_subnet_tags = { + "kubernetes.io/cluster/${local.name}" = "shared" + "kubernetes.io/role/elb" = 1 + } + + private_subnet_tags = { + "kubernetes.io/cluster/${local.name}" = "shared" + "kubernetes.io/role/internal-elb" = 1 + } + + tags = local.tags +} diff --git a/examples/karpenter-fargate/outputs.tf b/examples/karpenter-fargate/outputs.tf new file mode 100644 index 0000000000..55552d3138 --- /dev/null +++ b/examples/karpenter-fargate/outputs.tf @@ -0,0 +1,4 @@ +output "configure_kubectl" { + description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig" + value = module.eks_blueprints.configure_kubectl +} diff --git a/examples/karpenter-fargate/provisioners/default_provisioner.yaml b/examples/karpenter-fargate/provisioners/default_provisioner.yaml new file mode 100644 index 0000000000..deeedd7d51 --- /dev/null +++ b/examples/karpenter-fargate/provisioners/default_provisioner.yaml @@ -0,0 +1,27 @@ +apiVersion: karpenter.sh/v1alpha5 +kind: Provisioner +metadata: + name: default +spec: + requirements: + - key: "topology.kubernetes.io/zone" + operator: In + values: [${azs}] + - key: "karpenter.sh/capacity-type" + operator: In + values: ["spot", "on-demand"] + limits: + resources: + cpu: 1000 + provider: + amiFamily: Bottlerocket + instanceProfile: ${iam-instance-profile-id} + subnetSelector: + Name: "${eks-vpc_name}-private*" + securityGroupSelector: + karpenter.sh/discovery/${eks-cluster-id}: '${eks-cluster-id}' + labels: + type: karpenter + provisioner: default + ttlSecondsAfterEmpty: 120 + ttlSecondsUntilExpired: 86400 # 1 day = 86400 ; 30 days = 2592000 diff --git a/examples/karpenter-fargate/provisioners/sample_deployment.yaml b/examples/karpenter-fargate/provisioners/sample_deployment.yaml new file mode 100644 index 0000000000..0008318743 --- /dev/null +++ b/examples/karpenter-fargate/provisioners/sample_deployment.yaml @@ -0,0 +1,71 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + namespace: default + name: 
deployment-2048 +spec: + selector: + matchLabels: + app.kubernetes.io/name: app-2048 + replicas: 5 + template: + metadata: + labels: + app.kubernetes.io/name: app-2048 + spec: + containers: + - image: public.ecr.aws/l6m2t8p7/docker-2048:latest + imagePullPolicy: Always + name: app-2048 + ports: + - containerPort: 80 + nodeSelector: + karpenter.sh/capacity-type: spot + karpenter.k8s.aws/instance-category: c + topologySpreadConstraints: + - maxSkew: 1 + topologyKey: topology.kubernetes.io/zone + whenUnsatisfiable: DoNotSchedule + affinity: + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app.kubernetes.io/name + operator: In + values: + - app-2048 + topologyKey: kubernetes.io/hostname +--- +apiVersion: v1 +kind: Service +metadata: + namespace: default + name: service-2048 +spec: + ports: + - port: 80 + targetPort: 80 + selector: + app.kubernetes.io/name: app-2048 +--- +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + namespace: default + name: ingress-2048 + annotations: + alb.ingress.kubernetes.io/scheme: internet-facing + alb.ingress.kubernetes.io/target-type: ip +spec: + ingressClassName: alb + rules: + - http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: service-2048 + port: + number: 80 diff --git a/examples/karpenter-fargate/variables.tf b/examples/karpenter-fargate/variables.tf new file mode 100644 index 0000000000..e69de29bb2 diff --git a/examples/karpenter-fargate/versions.tf b/examples/karpenter-fargate/versions.tf new file mode 100644 index 0000000000..1fcdcd4cd0 --- /dev/null +++ b/examples/karpenter-fargate/versions.tf @@ -0,0 +1,29 @@ +terraform { + required_version = ">= 1.0.0" + + required_providers { + aws = { + source = "hashicorp/aws" + version = ">= 3.72" + } + kubernetes = { + source = "hashicorp/kubernetes" + version = ">= 2.10" + } + helm = { + source = "hashicorp/helm" + version = ">= 2.4.1" + } + kubectl = { + source = "gavinbunney/kubectl" + version = ">= 1.14" + } + } + + # ## Used for end-to-end testing on project; update to suit your needs + # backend "s3" { + # bucket = "terraform-ssp-github-actions-state" + # region = "us-west-2" + # key = "e2e/karpenter/terraform.tfstate" + # } +}