diff --git a/website/content/en/preview/getting-started-with-terraform/_index.md b/website/content/en/preview/getting-started-with-terraform/_index.md
index 77d3cf73c7e5..ca93f9adf24b 100644
--- a/website/content/en/preview/getting-started-with-terraform/_index.md
+++ b/website/content/en/preview/getting-started-with-terraform/_index.md
@@ -49,8 +49,8 @@ export CLUSTER_NAME=$USER-karpenter-demo
 export AWS_DEFAULT_REGION=us-west-2
 ```

-The first thing we need to do is create our `main.tf` file and place the 
-following in it. This will let us pass in a cluster name that will be used 
+The first thing we need to do is create our `main.tf` file and place the
+following in it. This will let us pass in a cluster name that will be used
 throughout the remainder of our config.

 ```hcl
@@ -65,7 +65,7 @@ variable "cluster_name" {

 We're going to use two different Terraform modules to create our cluster - one
 to create the VPC and another for the cluster itself. The key part of this is
-that we need to tag the VPC subnets that we want to use for the worker nodes. 
+that we need to tag the VPC subnets that we want to use for the worker nodes.

 Place the following Terraform config into your `main.tf` file.

@@ -111,10 +111,9 @@ module "eks" {
     "karpenter.sh/discovery" = var.cluster_name
   }
 }
-```

 At this point, go ahead and apply what we've done to create the VPC and
-cluster. This may take some time.
+EKS cluster. This may take some time.

 ```bash
 terraform init
@@ -138,11 +137,11 @@ Everything should apply successfully now!

 ### Configure the KarpenterNode IAM Role

-The EKS module creates an IAM role for worker nodes. We'll use that for 
+The EKS module creates an IAM role for worker nodes. We'll use that for
 Karpenter (so we don't have to reconfigure the aws-auth ConfigMap), but we need
 to add one more policy and create an instance profile.

-Place the following into your `main.tf` to add the policy and create an 
+Place the following into your `main.tf` to add the policy and create an
 instance profile.

 ```hcl
@@ -167,14 +166,14 @@ Go ahead and apply the changes.
 terraform apply -var cluster_name=$CLUSTER_NAME
 ```

-Now, Karpenter can use this instance profile to launch new EC2 instances and 
+Now, Karpenter can use this instance profile to launch new EC2 instances and
 those instances will be able to connect to your cluster.

 ### Create the KarpenterController IAM Role

 Karpenter requires permissions like launching instances, which means it needs
-an IAM role that grants it access. The config below will create an AWS IAM 
-Role, attach a policy, and authorize the Service Account to assume the role 
+an IAM role that grants it access. The config below will create an AWS IAM
+Role, attach a policy, and authorize the Service Account to assume the role
 using [IRSA](https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/setting-up-enable-IAM.html).
 We will create the ServiceAccount and connect it to this role during the Helm
 chart install.
@@ -221,7 +220,7 @@ resource "aws_iam_role_policy" "karpenter_controller" {
 }
 ```

-Since we've added a new module, you'll need to run `terraform init` again. 
+Since we've added a new module, you'll need to run `terraform init` again.
 Then, apply the changes.

 ```bash
@@ -231,7 +230,7 @@ terraform apply -var cluster_name=$CLUSTER_NAME

 ### Install Karpenter Helm Chart

-Use helm to deploy Karpenter to the cluster. We are going to use the 
+Use helm to deploy Karpenter to the cluster. We are going to use the
 `helm_release` Terraform resource to do the deploy and pass in the cluster
 details and IAM role Karpenter needs to assume.

@@ -384,13 +383,19 @@ kubectl delete node $NODE_NAME

 ## Cleanup

-To avoid additional charges, remove the demo infrastructure from your AWS 
+To avoid additional charges, remove the demo infrastructure from your AWS
 account. Since Karpenter is managing nodes outside of Terraform's view, we need
-to remove the pods and node first (if you haven't already). Once the node is
-removed, you can remove the rest of the infrastructure.
+to remove the pods and node first (if you haven't already). Once the node is
+removed, you can remove the rest of the infrastructure and clean up the
+Karpenter-created LaunchTemplates.

 ```bash
 kubectl delete deployment inflate
 kubectl delete node -l karpenter.sh/provisioner-name=default
+helm uninstall karpenter --namespace karpenter
 terraform destroy -var cluster_name=$CLUSTER_NAME
+aws ec2 describe-launch-templates \
+    | jq -r ".LaunchTemplates[].LaunchTemplateName" \
+    | grep -i Karpenter-${CLUSTER_NAME} \
+    | xargs -I{} aws ec2 delete-launch-template --launch-template-name {}
 ```
diff --git a/website/content/en/preview/getting-started/_index.md b/website/content/en/preview/getting-started/_index.md
index 2afecf18d0f1..83be09c42608 100644
--- a/website/content/en/preview/getting-started/_index.md
+++ b/website/content/en/preview/getting-started/_index.md
@@ -304,7 +304,3 @@ aws ec2 describe-launch-templates \
     | jq -r ".LaunchTemplates[].LaunchTemplateName" \
     | grep -i Karpenter-${CLUSTER_NAME} \
     | xargs -I{} aws ec2 delete-launch-template --launch-template-name {}
 eksctl delete cluster --name ${CLUSTER_NAME}
 ```
-
----
-
-## Next Steps:
diff --git a/website/content/en/preview/provisioner.md b/website/content/en/preview/provisioner.md
index c8ecdc194c29..b015bd96c65a 100644
--- a/website/content/en/preview/provisioner.md
+++ b/website/content/en/preview/provisioner.md
@@ -35,13 +35,13 @@ spec:
   # These requirements are combined with pod.spec.affinity.nodeAffinity rules.
   # Operators { In, NotIn } are supported to enable including or excluding values
   requirements:
-    - key: "node.kubernetes.io/instance-type" 
+    - key: "node.kubernetes.io/instance-type"
       operator: In
       values: ["m5.large", "m5.2xlarge"]
-    - key: "topology.kubernetes.io/zone" 
+    - key: "topology.kubernetes.io/zone"
       operator: In
       values: ["us-west-2a", "us-west-2b"]
-    - key: "kubernetes.io/arch" 
+    - key: "kubernetes.io/arch"
       operator: In
       values: ["arm64", "amd64"]
     - key: "karpenter.sh/capacity-type" # If not included, the webhook for the AWS cloud provider will default to on-demand
@@ -67,7 +67,7 @@ These well known labels may be specified at the provisioner level, or in a workl

 For example, an instance type may be specified using a nodeSelector in a pod spec. If the instance type requested is not included in the provisioner list and the provisioner has instance type requirements, Karpenter will not create a node or schedule the pod.

-📝 None of these values are required. 
+📝 None of these values are required.

 ### Instance Types

@@ -140,14 +140,14 @@ Karpenter supports `amd64` nodes, and `arm64` nodes.
 - values
   - `spot` (default)
-  - `on-demand` 
+  - `on-demand`

 Karpenter supports specifying capacity type, which is analogous to [EC2 purchase options](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-options.html).

 ## spec.kubeletConfiguration

-Karpenter provides the ability to specify a few additional Kubelet args. These are all optional and provide support for 
+Karpenter provides the ability to specify a few additional Kubelet args. These are all optional and provide support for
 additional customization and use cases. Adjust these only if you know you need to do so.

 ```yaml
diff --git a/website/content/en/v0.5.3/getting-started-with-terraform/_index.md b/website/content/en/v0.5.3/getting-started-with-terraform/_index.md
index c2c69e791aa4..409c6096f467 100644
--- a/website/content/en/v0.5.3/getting-started-with-terraform/_index.md
+++ b/website/content/en/v0.5.3/getting-started-with-terraform/_index.md
@@ -49,8 +49,8 @@ export CLUSTER_NAME=$USER-karpenter-demo
 export AWS_DEFAULT_REGION=us-west-2
 ```

-The first thing we need to do is create our `main.tf` file and place the 
-following in it. This will let us pass in a cluster name that will be used 
+The first thing we need to do is create our `main.tf` file and place the
+following in it. This will let us pass in a cluster name that will be used
 throughout the remainder of our config.

 ```hcl
@@ -65,7 +65,7 @@ variable "cluster_name" {

 We're going to use two different Terraform modules to create our cluster - one
 to create the VPC and another for the cluster itself. The key part of this is
-that we need to tag the VPC subnets that we want to use for the worker nodes. 
+that we need to tag the VPC subnets that we want to use for the worker nodes.

 Place the following Terraform config into your `main.tf` file.

@@ -107,10 +107,9 @@ module "eks" {
     }
   ]
 }
-```

 At this point, go ahead and apply what we've done to create the VPC and
-cluster. This may take some time.
+EKS cluster. This may take some time.

 ```bash
 terraform init
@@ -134,11 +133,11 @@ Everything should apply successfully now!

 ### Configure the KarpenterNode IAM Role

-The EKS module creates an IAM role for worker nodes. We'll use that for 
+The EKS module creates an IAM role for worker nodes. We'll use that for
 Karpenter (so we don't have to reconfigure the aws-auth ConfigMap), but we need
 to add one more policy and create an instance profile.

-Place the following into your `main.tf` to add the policy and create an 
+Place the following into your `main.tf` to add the policy and create an
 instance profile.

 ```hcl
@@ -163,14 +162,14 @@ Go ahead and apply the changes.
 terraform apply -var cluster_name=$CLUSTER_NAME
 ```

-Now, Karpenter can use this instance profile to launch new EC2 instances and 
+Now, Karpenter can use this instance profile to launch new EC2 instances and
 those instances will be able to connect to your cluster.

 ### Create the KarpenterController IAM Role

 Karpenter requires permissions like launching instances, which means it needs
-an IAM role that grants it access. The config below will create an AWS IAM 
-Role, attach a policy, and authorize the Service Account to assume the role 
+an IAM role that grants it access. The config below will create an AWS IAM
+Role, attach a policy, and authorize the Service Account to assume the role
 using [IRSA](https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/setting-up-enable-IAM.html).
 We will create the ServiceAccount and connect it to this role during the Helm
 chart install.
@@ -217,7 +216,7 @@ resource "aws_iam_role_policy" "karpenter_controller" {
 }
 ```

-Since we've added a new module, you'll need to run `terraform init` again. 
+Since we've added a new module, you'll need to run `terraform init` again.
 Then, apply the changes.

 ```bash
@@ -227,7 +226,7 @@ terraform apply -var cluster_name=$CLUSTER_NAME

 ### Install Karpenter Helm Chart

-Use helm to deploy Karpenter to the cluster. We are going to use the 
+Use helm to deploy Karpenter to the cluster. We are going to use the
 `helm_release` Terraform resource to do the deploy and pass in the cluster
 details and IAM role Karpenter needs to assume.

@@ -372,13 +371,19 @@ kubectl delete node $NODE_NAME

 ## Cleanup

-To avoid additional charges, remove the demo infrastructure from your AWS 
+To avoid additional charges, remove the demo infrastructure from your AWS
 account. Since Karpenter is managing nodes outside of Terraform's view, we need
-to remove the pods and node first (if you haven't already). Once the node is
-removed, you can remove the rest of the infrastructure.
+to remove the pods and node first (if you haven't already). Once the node is
+removed, you can remove the rest of the infrastructure and clean up the
+Karpenter-created LaunchTemplates.

 ```bash
 kubectl delete deployment inflate
 kubectl delete node -l karpenter.sh/provisioner-name=default
+helm uninstall karpenter --namespace karpenter
 terraform destroy -var cluster_name=$CLUSTER_NAME
+aws ec2 describe-launch-templates \
+    | jq -r ".LaunchTemplates[].LaunchTemplateName" \
+    | grep -i Karpenter-${CLUSTER_NAME} \
+    | xargs -I{} aws ec2 delete-launch-template --launch-template-name {}
 ```
diff --git a/website/content/en/v0.5.5/getting-started-with-terraform/_index.md b/website/content/en/v0.5.5/getting-started-with-terraform/_index.md
index e3a2f9e7ea8f..ef178e1857f3 100644
--- a/website/content/en/v0.5.5/getting-started-with-terraform/_index.md
+++ b/website/content/en/v0.5.5/getting-started-with-terraform/_index.md
@@ -49,8 +49,8 @@ export CLUSTER_NAME=$USER-karpenter-demo
 export AWS_DEFAULT_REGION=us-west-2
 ```

-The first thing we need to do is create our `main.tf` file and place the 
-following in it. This will let us pass in a cluster name that will be used 
+The first thing we need to do is create our `main.tf` file and place the
+following in it. This will let us pass in a cluster name that will be used
 throughout the remainder of our config.

 ```hcl
@@ -65,7 +65,7 @@ variable "cluster_name" {

 We're going to use two different Terraform modules to create our cluster - one
 to create the VPC and another for the cluster itself. The key part of this is
-that we need to tag the VPC subnets that we want to use for the worker nodes. 
+that we need to tag the VPC subnets that we want to use for the worker nodes.

 Place the following Terraform config into your `main.tf` file.

@@ -111,10 +111,9 @@ module "eks" {
     "karpenter.sh/discovery" = var.cluster_name
   }
 }
-```

 At this point, go ahead and apply what we've done to create the VPC and
-cluster. This may take some time.
+EKS cluster. This may take some time.

 ```bash
 terraform init
@@ -138,11 +137,11 @@ Everything should apply successfully now!

 ### Configure the KarpenterNode IAM Role

-The EKS module creates an IAM role for worker nodes. We'll use that for 
+The EKS module creates an IAM role for worker nodes. We'll use that for
 Karpenter (so we don't have to reconfigure the aws-auth ConfigMap), but we need
 to add one more policy and create an instance profile.

-Place the following into your `main.tf` to add the policy and create an 
+Place the following into your `main.tf` to add the policy and create an
 instance profile.

 ```hcl
@@ -167,14 +166,14 @@ Go ahead and apply the changes.
 terraform apply -var cluster_name=$CLUSTER_NAME
 ```

-Now, Karpenter can use this instance profile to launch new EC2 instances and 
+Now, Karpenter can use this instance profile to launch new EC2 instances and
 those instances will be able to connect to your cluster.

 ### Create the KarpenterController IAM Role

 Karpenter requires permissions like launching instances, which means it needs
-an IAM role that grants it access. The config below will create an AWS IAM 
-Role, attach a policy, and authorize the Service Account to assume the role 
+an IAM role that grants it access. The config below will create an AWS IAM
+Role, attach a policy, and authorize the Service Account to assume the role
 using [IRSA](https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/setting-up-enable-IAM.html).
 We will create the ServiceAccount and connect it to this role during the Helm
 chart install.
@@ -221,7 +220,7 @@ resource "aws_iam_role_policy" "karpenter_controller" {
 }
 ```

-Since we've added a new module, you'll need to run `terraform init` again. 
+Since we've added a new module, you'll need to run `terraform init` again.
 Then, apply the changes.

 ```bash
@@ -231,7 +230,7 @@ terraform apply -var cluster_name=$CLUSTER_NAME

 ### Install Karpenter Helm Chart

-Use helm to deploy Karpenter to the cluster. We are going to use the 
+Use helm to deploy Karpenter to the cluster. We are going to use the
 `helm_release` Terraform resource to do the deploy and pass in the cluster
 details and IAM role Karpenter needs to assume.

@@ -380,13 +379,19 @@ kubectl delete node $NODE_NAME

 ## Cleanup

-To avoid additional charges, remove the demo infrastructure from your AWS 
+To avoid additional charges, remove the demo infrastructure from your AWS
 account. Since Karpenter is managing nodes outside of Terraform's view, we need
-to remove the pods and node first (if you haven't already). Once the node is
-removed, you can remove the rest of the infrastructure.
+to remove the pods and node first (if you haven't already). Once the node is
+removed, you can remove the rest of the infrastructure and clean up the
+Karpenter-created LaunchTemplates.

 ```bash
 kubectl delete deployment inflate
 kubectl delete node -l karpenter.sh/provisioner-name=default
+helm uninstall karpenter --namespace karpenter
 terraform destroy -var cluster_name=$CLUSTER_NAME
+aws ec2 describe-launch-templates \
+    | jq -r ".LaunchTemplates[].LaunchTemplateName" \
+    | grep -i Karpenter-${CLUSTER_NAME} \
+    | xargs -I{} aws ec2 delete-launch-template --launch-template-name {}
 ```
diff --git a/website/content/en/v0.5.5/getting-started/_index.md b/website/content/en/v0.5.5/getting-started/_index.md
index 564e9258684f..18fccd2c438e 100644
--- a/website/content/en/v0.5.5/getting-started/_index.md
+++ b/website/content/en/v0.5.5/getting-started/_index.md
@@ -304,7 +304,3 @@ aws ec2 describe-launch-templates \
     | jq -r ".LaunchTemplates[].LaunchTemplateName" \
     | grep -i Karpenter-${CLUSTER_NAME} \
     | xargs -I{} aws ec2 delete-launch-template --launch-template-name {}
 eksctl delete cluster --name ${CLUSTER_NAME}
 ```
-
----
-
-## Next Steps:
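The cleanup steps added above remove launch templates purely by a case-insensitive name match, immediately after `terraform destroy`. Below is a minimal sketch, not part of the patch itself, assuming the same `CLUSTER_NAME` environment variable and AWS credentials already exported in these guides, of checking the two destructive preconditions first: that no Karpenter-managed nodes remain, and which launch templates the pipeline would actually delete.

```bash
# Confirm no Karpenter-provisioned nodes are left; the cleanup assumes these
# were deleted before `terraform destroy` runs.
kubectl get nodes -l karpenter.sh/provisioner-name=default

# Preview the launch templates the cleanup pipeline would remove. This is the
# same filter as in the guides with the final `xargs ... delete-launch-template`
# stage dropped, so nothing is deleted yet.
aws ec2 describe-launch-templates \
    | jq -r ".LaunchTemplates[].LaunchTemplateName" \
    | grep -i Karpenter-${CLUSTER_NAME}
```

If the node list is empty and the launch template names look right, the full cleanup block from the guides can be run as written.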