
Apply changes to v0.5.5
mbevc1 committed Jan 21, 2022
1 parent c49b05d commit 3931c8e
Showing 1 changed file with 20 additions and 23 deletions: `website/content/en/v0.5.5/getting-started-with-terraform/_index.md`
```bash
export CLUSTER_NAME=$USER-karpenter-demo
export AWS_DEFAULT_REGION=us-west-2
```

The first thing we need to do is create our `main.tf` file and place the
following in it. This will let us pass in a cluster name that will be used
throughout the remainder of our config.

```hcl
variable "cluster_name" {
  description = "The name of the cluster."
  type        = string
}
```

We're going to use two different Terraform modules to create our cluster - one
to create the VPC and another for the cluster itself. The key part of this is
that we need to tag the VPC subnets that we want to use for the worker nodes.

Place the following Terraform config into your `main.tf` file.

```hcl
module "eks" {
  # ... (module source and cluster settings collapsed in the diff view)
  worker_groups = [
    {
      # ...
      asg_max_size = 1
    }
  ]
  tags = {
    "karpenter.sh/discovery" = var.cluster_name
  }
}
```
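The VPC half of the config is collapsed in the diff view. A minimal sketch of the subnet tagging the text refers to, assuming the `terraform-aws-modules/vpc/aws` module (the module arguments, AZs, and CIDRs below are illustrative assumptions, not taken from the diff):

```hcl
# Illustrative sketch only; the guide's actual values may differ.
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name            = var.cluster_name
  cidr            = "10.0.0.0/16"
  azs             = ["us-west-2a", "us-west-2b", "us-west-2c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  # Tag the worker-node subnets so Karpenter's subnetSelector can discover them.
  private_subnet_tags = {
    "karpenter.sh/discovery" = var.cluster_name
  }
}
```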
At this point, go ahead and apply what we've done to create the VPC and
EKS cluster. This may take some time.
```bash
terraform init
terraform apply -var cluster_name=$CLUSTER_NAME
```

Everything should apply successfully now!

### Configure the KarpenterNode IAM Role

The EKS module creates an IAM role for worker nodes. We'll use that for
Karpenter (so we don't have to reconfigure the aws-auth ConfigMap), but we need
to add one more policy and create an instance profile.

Place the following into your `main.tf` to add the policy and create an
instance profile.

```hcl
# ... (policy attachment and instance profile resources collapsed in the diff view)
```

Go ahead and apply the changes.

```bash
terraform apply -var cluster_name=$CLUSTER_NAME
```
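The resources themselves are collapsed in the diff view. A minimal sketch of what this step describes, assuming the EKS module exposes a `worker_iam_role_name` output (that output name, the resource labels, and the choice of the SSM managed policy are assumptions, not confirmed by the diff):

```hcl
# Hypothetical sketch: attach a managed policy to the worker-node role and
# wrap that role in an instance profile for Karpenter-launched nodes.
resource "aws_iam_role_policy_attachment" "karpenter_ssm_policy" {
  role       = module.eks.worker_iam_role_name # assumed module output
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_instance_profile" "karpenter" {
  # Name matches the instanceProfile the Provisioner references later on.
  name = "KarpenterNodeInstanceProfile-${var.cluster_name}"
  role = module.eks.worker_iam_role_name
}
```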

Now, Karpenter can use this instance profile to launch new EC2 instances and
those instances will be able to connect to your cluster.

### Create the KarpenterController IAM Role

Karpenter requires permissions like launching instances, which means it needs
an IAM role that grants it access. The config below will create an AWS IAM
Role, attach a policy, and authorize the Service Account to assume the role
using [IRSA](https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/setting-up-enable-IAM.html).
We will create the ServiceAccount and connect it to this role during the Helm
chart install.
```hcl
resource "aws_iam_role_policy" "karpenter_controller" {
  # ... (policy body collapsed in the diff view)
}
```
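The IRSA role and policy body are collapsed in the diff view. A sketch of what this section describes, assuming the `iam-assumable-role-with-oidc` submodule of `terraform-aws-modules/iam/aws` (the module source, role/policy names, and the action list are assumptions for illustration):

```hcl
# Hypothetical sketch: an IRSA role the Karpenter service account can assume,
# plus an inline policy granting the EC2/IAM actions Karpenter needs.
module "iam_assumable_role_karpenter" {
  source                        = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
  create_role                   = true
  role_name                     = "karpenter-controller-${var.cluster_name}"
  provider_url                  = module.eks.cluster_oidc_issuer_url # assumed output
  oidc_fully_qualified_subjects = ["system:serviceaccount:karpenter:karpenter"]
}

resource "aws_iam_role_policy" "karpenter_controller" {
  name = "karpenter-policy-${var.cluster_name}"
  role = module.iam_assumable_role_karpenter.iam_role_name
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      # Action list is an assumption; consult the Karpenter docs for the full set.
      Action   = ["ec2:CreateLaunchTemplate", "ec2:CreateFleet", "ec2:RunInstances",
                  "ec2:TerminateInstances", "ec2:Describe*", "iam:PassRole", "ssm:GetParameter"]
      Resource = "*"
    }]
  })
}
```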

Since we've added a new module, you'll need to run `terraform init` again.
Then, apply the changes.

```bash
terraform init
terraform apply -var cluster_name=$CLUSTER_NAME
```

### Install Karpenter Helm Chart

Use Helm to deploy Karpenter to the cluster. We are going to use the
`helm_release` Terraform resource to do the deploy and pass in the cluster
details and IAM role Karpenter needs to assume.
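The `helm_release` block itself is collapsed in the diff view. A sketch of what it might look like, assuming the Helm provider, the `https://charts.karpenter.sh` repository, and an IRSA role module named `iam_assumable_role_karpenter` elsewhere in the config (the value keys and names here are assumptions; check the chart's values for the real keys):

```hcl
# Hypothetical sketch of the Karpenter chart install for this guide's version.
resource "helm_release" "karpenter" {
  namespace        = "karpenter"
  create_namespace = true

  name       = "karpenter"
  repository = "https://charts.karpenter.sh"
  chart      = "karpenter"
  version    = "v0.5.5"

  # Annotate the service account with the IRSA role to assume.
  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = module.iam_assumable_role_karpenter.iam_role_arn
  }

  set {
    name  = "controller.clusterName"
    value = var.cluster_name
  }

  set {
    name  = "controller.clusterEndpoint"
    value = module.eks.cluster_endpoint
  }
}
```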

Create a default Provisioner that uses the instance profile created earlier:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  limits:
    resources:
      cpu: 1000
  provider:
    instanceProfile: KarpenterNodeInstanceProfile-${CLUSTER_NAME}
    subnetSelector:
      karpenter.sh/discovery: ${CLUSTER_NAME}
    securityGroupSelector:
      karpenter.sh/discovery: ${CLUSTER_NAME}
  ttlSecondsAfterEmpty: 30
EOF
```
```bash
kubectl delete node $NODE_NAME
```

## Cleanup

To avoid additional charges, remove the demo infrastructure from your AWS
account. Since Karpenter is managing nodes outside of Terraform's view, we need
to remove the pods and node first (if you haven't already). Once the node is
removed, you can remove the rest of the infrastructure and clean up the
Karpenter-created LaunchTemplates.

```bash
kubectl delete deployment inflate
kubectl delete node -l karpenter.sh/provisioner-name=default
helm uninstall karpenter --namespace karpenter
terraform destroy -var cluster_name=$CLUSTER_NAME
aws ec2 describe-launch-templates \
  | jq -r ".LaunchTemplates[].LaunchTemplateName" \
  | grep -i Karpenter-${CLUSTER_NAME} \
  | xargs -I{} aws ec2 delete-launch-template --launch-template-name {}
```
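The `grep` stage above filters launch-template names case-insensitively. You can sanity-check that matching locally without calling AWS (the template names below are made up for illustration):

```shell
# Simulate the name-filtering stage of the cleanup pipeline with fake
# launch-template names; only the Karpenter-owned ones should survive.
CLUSTER_NAME=demo
printf '%s\n' \
  "Karpenter-demo-123" \
  "karpenter-demo-456" \
  "unrelated-template" \
  | grep -i "Karpenter-${CLUSTER_NAME}"
```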
