Docs: Update Getting started and docs clean-up (#1116)
* Docs: Update Getting started and clean-up

* Include feedback and remove Fargate from changes

* Apply changes to v0.5.5

* Remove file to match main branch

* Align with the latest release docs

* Align with the latest release docs - missing tags
mbevc1 authored Jan 24, 2022
1 parent d487f1c commit 5d7f689
Showing 6 changed files with 66 additions and 59 deletions.
35 changes: 20 additions & 15 deletions website/content/en/preview/getting-started-with-terraform/_index.md
@@ -49,8 +49,8 @@ export CLUSTER_NAME=$USER-karpenter-demo
export AWS_DEFAULT_REGION=us-west-2
```

The first thing we need to do is create our `main.tf` file and place the
following in it. This will let us pass in a cluster name that will be used
throughout the remainder of our config.
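
The `variable` block itself is collapsed in this diff; a minimal sketch of what it typically looks like (the description text is an assumption, not the guide's exact wording):

```hcl
variable "cluster_name" {
  description = "Name of the EKS cluster, used to name and tag everything we create"
  type        = string
}
```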

```hcl
@@ -65,7 +65,7 @@ variable "cluster_name" {

We're going to use two different Terraform modules to create our cluster - one
to create the VPC and another for the cluster itself. The key part of this is
that we need to tag the VPC subnets that we want to use for the worker nodes.

Place the following Terraform config into your `main.tf` file.
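
As a rough sketch of the tagging idea (the module source, CIDRs, and the exact tag key are assumptions here; the guide's full config is collapsed in this diff), the private subnets get a discovery tag so Karpenter knows where it may launch worker nodes:

```hcl
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name            = var.cluster_name
  cidr            = "10.0.0.0/16"
  azs             = ["us-west-2a", "us-west-2b", "us-west-2c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]

  # Discovery tag so Karpenter can find these subnets (the exact key depends on the Karpenter version).
  private_subnet_tags = {
    "karpenter.sh/discovery" = var.cluster_name
  }
}
```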

@@ -111,10 +111,9 @@ module "eks" {
"karpenter.sh/discovery" = var.cluster_name
}
}
```
At this point, go ahead and apply what we've done to create the VPC and
cluster. This may take some time.
EKS cluster. This may take some time.
```bash
terraform init
@@ -138,11 +137,11 @@ Everything should apply successfully now!

### Configure the KarpenterNode IAM Role

The EKS module creates an IAM role for worker nodes. We'll use that for
Karpenter (so we don't have to reconfigure the aws-auth ConfigMap), but we need
to add one more policy and create an instance profile.

Place the following into your `main.tf` to add the policy and create an
instance profile.
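
A minimal sketch of the shape of that change (the SSM policy choice and the `worker_iam_role_name` output name are assumptions about the EKS module in use; the guide's actual snippet is collapsed in this diff):

```hcl
# Attach an extra managed policy to the worker node role created by the EKS module
# (assumed here to be SSM, so nodes can be managed without SSH).
resource "aws_iam_role_policy_attachment" "karpenter_ssm_policy" {
  role       = module.eks.worker_iam_role_name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

# Instance profile Karpenter launches nodes with; reusing the same role means new
# nodes are already authorized by the aws-auth ConfigMap.
resource "aws_iam_instance_profile" "karpenter" {
  name = "KarpenterNodeInstanceProfile-${var.cluster_name}"
  role = module.eks.worker_iam_role_name
}
```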

```hcl
@@ -167,14 +166,14 @@ Go ahead and apply the changes.
terraform apply -var cluster_name=$CLUSTER_NAME
```

Now, Karpenter can use this instance profile to launch new EC2 instances and
those instances will be able to connect to your cluster.

### Create the KarpenterController IAM Role

Karpenter requires permissions like launching instances, which means it needs
an IAM role that grants it access. The config below will create an AWS IAM
Role, attach a policy, and authorize the Service Account to assume the role
using [IRSA](https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/setting-up-enable-IAM.html).
We will create the ServiceAccount and connect it to this role during the Helm
chart install.
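
One way to sketch that wiring with the `iam-assumable-role-with-oidc` submodule (the module version, role name, and abbreviated EC2 action list are all assumptions, not the guide's exact policy):

```hcl
module "iam_assumable_role_karpenter" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
  version = "~> 4.0"

  create_role  = true
  role_name    = "karpenter-controller-${var.cluster_name}"
  provider_url = replace(module.eks.cluster_oidc_issuer_url, "https://", "")

  # Only the karpenter ServiceAccount in the karpenter namespace may assume this role.
  oidc_fully_qualified_subjects = ["system:serviceaccount:karpenter:karpenter"]
}

resource "aws_iam_role_policy" "karpenter_controller" {
  name = "karpenter-policy-${var.cluster_name}"
  role = module.iam_assumable_role_karpenter.iam_role_name

  # Abbreviated for illustration; the real policy grants more EC2 and IAM actions.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["ec2:RunInstances", "ec2:CreateFleet", "ec2:CreateTags", "ec2:TerminateInstances"]
      Resource = "*"
    }]
  })
}
```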
@@ -221,7 +220,7 @@ resource "aws_iam_role_policy" "karpenter_controller" {
}
```

Since we've added a new module, you'll need to run `terraform init` again.
Then, apply the changes.

```bash
@@ -231,7 +230,7 @@ terraform apply -var cluster_name=$CLUSTER_NAME

### Install Karpenter Helm Chart

Use helm to deploy Karpenter to the cluster. We are going to use the
`helm_release` Terraform resource to do the deploy and pass in the cluster
details and IAM role Karpenter needs to assume.
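
A hedged sketch of that `helm_release` (the chart repository, namespace, and value names are assumptions and differ between Karpenter releases):

```hcl
resource "helm_release" "karpenter" {
  namespace        = "karpenter"
  create_namespace = true

  name       = "karpenter"
  repository = "https://charts.karpenter.sh"
  chart      = "karpenter"

  # Point the chart's ServiceAccount at the IRSA role created above.
  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = module.iam_assumable_role_karpenter.iam_role_arn
  }

  set {
    name  = "controller.clusterName"
    value = var.cluster_name
  }

  set {
    name  = "controller.clusterEndpoint"
    value = module.eks.cluster_endpoint
  }
}
```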

@@ -384,13 +383,19 @@ kubectl delete node $NODE_NAME

## Cleanup

To avoid additional charges, remove the demo infrastructure from your AWS
account. Since Karpenter is managing nodes outside of Terraform's view, we need
to remove the pods and node first (if you haven't already). Once the node is
removed, you can remove the rest of the infrastructure.
to remove the pods and node first (if you haven't already). Once the node is
removed, you can remove the rest of the infrastructure and clean up Karpenter
created LaunchTemplates.

```bash
kubectl delete deployment inflate
kubectl delete node -l karpenter.sh/provisioner-name=default
helm uninstall karpenter --namespace karpenter
terraform destroy -var cluster_name=$CLUSTER_NAME
aws ec2 describe-launch-templates \
| jq -r ".LaunchTemplates[].LaunchTemplateName" \
| grep -i Karpenter-${CLUSTER_NAME} \
| xargs -I{} aws ec2 delete-launch-template --launch-template-name {}
```
4 changes: 0 additions & 4 deletions website/content/en/preview/getting-started/_index.md
@@ -304,7 +304,3 @@ aws ec2 describe-launch-templates \
| xargs -I{} aws ec2 delete-launch-template --launch-template-name {}
eksctl delete cluster --name ${CLUSTER_NAME}
```

---

## Next Steps:
12 changes: 6 additions & 6 deletions website/content/en/preview/provisioner.md
@@ -35,13 +35,13 @@ spec:
# These requirements are combined with pod.spec.affinity.nodeAffinity rules.
# Operators { In, NotIn } are supported to enable including or excluding values
requirements:
- key: "node.kubernetes.io/instance-type"
- key: "node.kubernetes.io/instance-type"
operator: In
values: ["m5.large", "m5.2xlarge"]
- key: "topology.kubernetes.io/zone"
- key: "topology.kubernetes.io/zone"
operator: In
values: ["us-west-2a", "us-west-2b"]
- key: "kubernetes.io/arch"
- key: "kubernetes.io/arch"
operator: In
values: ["arm64", "amd64"]
- key: "karpenter.sh/capacity-type" # If not included, the webhook for the AWS cloud provider will default to on-demand
@@ -67,7 +67,7 @@ These well known labels may be specified at the provisioner level, or in a workl
For example, an instance type may be specified using a nodeSelector in a pod spec. If the instance type requested is not included in the provisioner list and the provisioner has instance type requirements, Karpenter will not create a node or schedule the pod.
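
For instance, a pod can pin itself to one of the allowed instance types with a nodeSelector; a sketch using the Terraform kubernetes provider, to stay consistent with the rest of this guide (a plain YAML manifest expresses the same thing, and the pod name and image here are illustrative):

```hcl
resource "kubernetes_pod" "inflate_large" {
  metadata {
    name = "inflate-large"
  }

  spec {
    # Must match one of the instance types allowed by the provisioner's requirements,
    # otherwise Karpenter will not provision a node and the pod stays pending.
    node_selector = {
      "node.kubernetes.io/instance-type" = "m5.2xlarge"
    }

    container {
      name  = "inflate"
      image = "public.ecr.aws/eks-distro/kubernetes/pause:3.2"
    }
  }
}
```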
📝 None of these values are required.
### Instance Types
@@ -140,14 +140,14 @@ Karpenter supports `amd64` nodes, and `arm64` nodes.

- values
- `spot` (default)
- `on-demand`

Karpenter supports specifying capacity type, which is analogous to [EC2 purchase options](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-options.html).


## spec.kubeletConfiguration

Karpenter provides the ability to specify a few additional Kubelet args. These are all optional and provide support for
additional customization and use cases. Adjust these only if you know you need to do so.

```yaml
35 changes: 20 additions & 15 deletions website/content/en/v0.5.3/getting-started-with-terraform/_index.md
@@ -49,8 +49,8 @@ export CLUSTER_NAME=$USER-karpenter-demo
export AWS_DEFAULT_REGION=us-west-2
```

The first thing we need to do is create our `main.tf` file and place the
following in it. This will let us pass in a cluster name that will be used
throughout the remainder of our config.

```hcl
@@ -65,7 +65,7 @@ variable "cluster_name" {

We're going to use two different Terraform modules to create our cluster - one
to create the VPC and another for the cluster itself. The key part of this is
that we need to tag the VPC subnets that we want to use for the worker nodes.

Place the following Terraform config into your `main.tf` file.

@@ -107,10 +107,9 @@ module "eks" {
}
]
}
```
At this point, go ahead and apply what we've done to create the VPC and
cluster. This may take some time.
EKS cluster. This may take some time.
```bash
terraform init
@@ -134,11 +133,11 @@ Everything should apply successfully now!

### Configure the KarpenterNode IAM Role

The EKS module creates an IAM role for worker nodes. We'll use that for
Karpenter (so we don't have to reconfigure the aws-auth ConfigMap), but we need
to add one more policy and create an instance profile.

Place the following into your `main.tf` to add the policy and create an
instance profile.

```hcl
@@ -163,14 +162,14 @@ Go ahead and apply the changes.
terraform apply -var cluster_name=$CLUSTER_NAME
```

Now, Karpenter can use this instance profile to launch new EC2 instances and
those instances will be able to connect to your cluster.

### Create the KarpenterController IAM Role

Karpenter requires permissions like launching instances, which means it needs
an IAM role that grants it access. The config below will create an AWS IAM
Role, attach a policy, and authorize the Service Account to assume the role
using [IRSA](https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/setting-up-enable-IAM.html).
We will create the ServiceAccount and connect it to this role during the Helm
chart install.
@@ -217,7 +216,7 @@ resource "aws_iam_role_policy" "karpenter_controller" {
}
```

Since we've added a new module, you'll need to run `terraform init` again.
Then, apply the changes.

```bash
@@ -227,7 +226,7 @@ terraform apply -var cluster_name=$CLUSTER_NAME

### Install Karpenter Helm Chart

Use helm to deploy Karpenter to the cluster. We are going to use the
`helm_release` Terraform resource to do the deploy and pass in the cluster
details and IAM role Karpenter needs to assume.

@@ -372,13 +371,19 @@ kubectl delete node $NODE_NAME

## Cleanup

To avoid additional charges, remove the demo infrastructure from your AWS
account. Since Karpenter is managing nodes outside of Terraform's view, we need
to remove the pods and node first (if you haven't already). Once the node is
removed, you can remove the rest of the infrastructure.
to remove the pods and node first (if you haven't already). Once the node is
removed, you can remove the rest of the infrastructure and clean up Karpenter
created LaunchTemplates.

```bash
kubectl delete deployment inflate
kubectl delete node -l karpenter.sh/provisioner-name=default
helm uninstall karpenter --namespace karpenter
terraform destroy -var cluster_name=$CLUSTER_NAME
aws ec2 describe-launch-templates \
| jq -r ".LaunchTemplates[].LaunchTemplateName" \
| grep -i Karpenter-${CLUSTER_NAME} \
| xargs -I{} aws ec2 delete-launch-template --launch-template-name {}
```
35 changes: 20 additions & 15 deletions website/content/en/v0.5.5/getting-started-with-terraform/_index.md
@@ -49,8 +49,8 @@ export CLUSTER_NAME=$USER-karpenter-demo
export AWS_DEFAULT_REGION=us-west-2
```

The first thing we need to do is create our `main.tf` file and place the
following in it. This will let us pass in a cluster name that will be used
throughout the remainder of our config.

```hcl
@@ -65,7 +65,7 @@ variable "cluster_name" {

We're going to use two different Terraform modules to create our cluster - one
to create the VPC and another for the cluster itself. The key part of this is
that we need to tag the VPC subnets that we want to use for the worker nodes.

Place the following Terraform config into your `main.tf` file.

@@ -111,10 +111,9 @@ module "eks" {
"karpenter.sh/discovery" = var.cluster_name
}
}
```
At this point, go ahead and apply what we've done to create the VPC and
cluster. This may take some time.
EKS cluster. This may take some time.
```bash
terraform init
@@ -138,11 +137,11 @@ Everything should apply successfully now!

### Configure the KarpenterNode IAM Role

The EKS module creates an IAM role for worker nodes. We'll use that for
Karpenter (so we don't have to reconfigure the aws-auth ConfigMap), but we need
to add one more policy and create an instance profile.

Place the following into your `main.tf` to add the policy and create an
instance profile.

```hcl
@@ -167,14 +166,14 @@ Go ahead and apply the changes.
terraform apply -var cluster_name=$CLUSTER_NAME
```

Now, Karpenter can use this instance profile to launch new EC2 instances and
those instances will be able to connect to your cluster.

### Create the KarpenterController IAM Role

Karpenter requires permissions like launching instances, which means it needs
an IAM role that grants it access. The config below will create an AWS IAM
Role, attach a policy, and authorize the Service Account to assume the role
using [IRSA](https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/setting-up-enable-IAM.html).
We will create the ServiceAccount and connect it to this role during the Helm
chart install.
@@ -221,7 +220,7 @@ resource "aws_iam_role_policy" "karpenter_controller" {
}
```

Since we've added a new module, you'll need to run `terraform init` again.
Then, apply the changes.

```bash
@@ -231,7 +230,7 @@ terraform apply -var cluster_name=$CLUSTER_NAME

### Install Karpenter Helm Chart

Use helm to deploy Karpenter to the cluster. We are going to use the
`helm_release` Terraform resource to do the deploy and pass in the cluster
details and IAM role Karpenter needs to assume.

@@ -380,13 +379,19 @@ kubectl delete node $NODE_NAME

## Cleanup

To avoid additional charges, remove the demo infrastructure from your AWS
account. Since Karpenter is managing nodes outside of Terraform's view, we need
to remove the pods and node first (if you haven't already). Once the node is
removed, you can remove the rest of the infrastructure.
to remove the pods and node first (if you haven't already). Once the node is
removed, you can remove the rest of the infrastructure and clean up Karpenter
created LaunchTemplates.

```bash
kubectl delete deployment inflate
kubectl delete node -l karpenter.sh/provisioner-name=default
helm uninstall karpenter --namespace karpenter
terraform destroy -var cluster_name=$CLUSTER_NAME
aws ec2 describe-launch-templates \
| jq -r ".LaunchTemplates[].LaunchTemplateName" \
| grep -i Karpenter-${CLUSTER_NAME} \
| xargs -I{} aws ec2 delete-launch-template --launch-template-name {}
```
4 changes: 0 additions & 4 deletions website/content/en/v0.5.5/getting-started/_index.md
@@ -304,7 +304,3 @@ aws ec2 describe-launch-templates \
| xargs -I{} aws ec2 delete-launch-template --launch-template-name {}
eksctl delete cluster --name ${CLUSTER_NAME}
```

---

## Next Steps:
