Add getting started based on fargate #488

Merged (4 commits, Jun 30, 2021)
99 changes: 56 additions & 43 deletions docs/aws/README.md
@@ -3,31 +3,32 @@
This guide will provide a complete Karpenter installation for AWS.
These steps are opinionated and may need to be adapted for your use case.

> This guide should take less than 1 hour to complete and cost less than $0.25.

## Environment
```bash
CLOUD_PROVIDER=aws
export CLUSTER_NAME=$USER-karpenter-demo
export AWS_DEFAULT_REGION=us-west-2
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
CLUSTER_NAME=$USER-karpenter-demo
AWS_DEFAULT_REGION=us-west-2
KARPENTER_VERSION=$(curl -fsSL \
  https://api.github.com/repos/awslabs/karpenter/releases/latest \
  | jq -r '.tag_name')
```
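Before moving on, it can help to confirm the variables resolved and the required tools are installed. This is a minimal, hypothetical pre-flight check and not part of the original guide; the tool list is inferred from the commands used later on:

```bash
# Hypothetical pre-flight check; the tool list is inferred from the commands used later in this guide.
for cmd in aws eksctl kubectl helm jq curl envsubst; do
  command -v "$cmd" >/dev/null || echo "missing: $cmd"
done
echo "account=${AWS_ACCOUNT_ID} cluster=${CLUSTER_NAME} region=${AWS_DEFAULT_REGION} karpenter=${KARPENTER_VERSION}"
```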

### Create a Cluster

Create an EKS cluster
Karpenter can run anywhere, including on self-managed node groups, [managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html), or [AWS Fargate](https://aws.amazon.com/fargate/).
Contributor: We need another line here, "including other cloud providers".

Contributor (author): Should that be in the AWS guide? I would assume when we get support for other providers we'll have different guides for those environments.

Contributor: I think @rothgar has the right idea here. I think this will become effectively a non-issue when #484 lands.

This demo runs Karpenter itself on Fargate, so the cluster starts with no EC2 node groups; any EC2 instances that join the cluster are launched and managed by Karpenter.

```bash
eksctl create cluster \
--name ${CLUSTER_NAME} \
--node-type m5.large \
--nodes 1 \
--nodes-min 1 \
--nodes-max 10 \
--managed \
--with-oidc
curl -fsSL https://raw.githubusercontent.com/awslabs/karpenter/"${KARPENTER_VERSION}"/docs/aws/eks-config.yaml \
| envsubst \
| eksctl create cluster -f -
```

Tag the cluster subnets with the required tags for Karpenter auto discovery.

Note: If you have a cluster with version 1.18 or below you can skip this step.
> If you are using a cluster with version 1.18 or below, you can skip this step.
More [details here](https://github.com/awslabs/karpenter/issues/404#issuecomment-845283904).

```bash
@@ -37,54 +38,47 @@ SUBNET_IDS=$(aws cloudformation describe-stacks \
--output text)

aws ec2 create-tags \
--resources $(echo $SUBNET_IDS | tr ',' '\n') \
--resources $(echo ${SUBNET_IDS//,/ }) \
--tags Key="kubernetes.io/cluster/${CLUSTER_NAME}",Value=
```
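To confirm the discovery tag landed on the expected subnets, a check along these lines can be run (an optional, illustrative step, not part of the original guide):

```bash
# Illustrative: list the subnets that now carry the cluster discovery tag.
aws ec2 describe-subnets \
  --filters "Name=tag-key,Values=kubernetes.io/cluster/${CLUSTER_NAME}" \
  --query 'Subnets[].SubnetId' \
  --output text
```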

### Setup IRSA, Karpenter Controller Role, and Karpenter Node Role
We recommend using [CloudFormation](https://aws.amazon.com/cloudformation/) and [IAM Roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) (IRSA) to manage these permissions.
For production use, please review and restrict these permissions for your use case.

Note: For IRSA to work your [cluster needs an OIDC provider](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html)
> For IRSA to work, your cluster needs an [OIDC provider](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html)

```bash
OIDC_PROVIDER=$(aws eks describe-cluster \
--name ${CLUSTER_NAME} \
--query 'cluster.identity.oidc.issuer' \
--output text)

# Creates IAM resources used by Karpenter
LATEST_KARPENTER_VERSION=$(curl \
https://api.github.com/repos/awslabs/karpenter/releases/latest | jq -r '.tag_name')
TEMPOUT=$(mktemp)
curl -fsSL https://raw.githubusercontent.com/awslabs/karpenter/"${LATEST_KARPENTER_VERSION}"/docs/aws/karpenter.cloudformation.yaml > $TEMPOUT \
curl -fsSL https://raw.githubusercontent.com/awslabs/karpenter/"${KARPENTER_VERSION}"/docs/aws/karpenter.cloudformation.yaml > $TEMPOUT \
&& aws cloudformation deploy \
--stack-name Karpenter-${CLUSTER_NAME} \
--template-file ${TEMPOUT} \
--capabilities CAPABILITY_NAMED_IAM \
--parameter-overrides ClusterName=${CLUSTER_NAME} OpenIDConnectIdentityProvider=${OIDC_PROVIDER/https:\/\//}

# Adds the karpenter node role to your aws-auth configmap, allowing nodes with this role to connect to the cluster.
kubectl patch configmap aws-auth -n kube-system --patch "$(cat <<-EOM
data:
  mapRoles: |
    - rolearn: arn:aws:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
      - system:bootstrappers
      - system:nodes
$(kubectl get configmap -n kube-system aws-auth -ojsonpath='{.data.mapRoles}' | sed 's/^/ /')
EOM
)"
--parameter-overrides ClusterName=${CLUSTER_NAME}

# Add the karpenter node role to your aws-auth configmap, allowing nodes with this role to connect to the cluster.
eksctl create iamidentitymapping \
--username system:node:{{EC2PrivateDNSName}} \
--cluster ${CLUSTER_NAME} \
--arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME} \
--group system:bootstrappers \
--group system:nodes
```
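If you want to double-check the mapping before installing Karpenter, eksctl can list the identity mappings it manages; this is an optional verification, not part of the guide itself:

```bash
# Optional: list the aws-auth identity mappings and confirm the Karpenter node role is present.
eksctl get iamidentitymapping --cluster ${CLUSTER_NAME}
```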

### Install Karpenter

Use [`helm`](https://helm.sh/) to deploy Karpenter to the cluster.
For additional values, see [the helm chart values](https://github.com/awslabs/karpenter/blob/main/charts/karpenter/values.yaml).

> We created a Kubernetes service account with our cluster, so we don't need the helm chart to do that.

```bash
helm repo add karpenter https://awslabs.github.io/karpenter/charts
helm repo update
# For additional values, see https://github.com/awslabs/karpenter/blob/main/charts/karpenter/values.yaml
helm upgrade --install karpenter karpenter/karpenter --create-namespace --namespace karpenter \
--set serviceAccount.annotations.'eks\.amazonaws\.com/role-arn'=arn:aws:iam::${AWS_ACCOUNT_ID}:role/KarpenterControllerRole-${CLUSTER_NAME}
helm upgrade --install karpenter karpenter/karpenter \
--namespace karpenter --set serviceAccount.create=false
```
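A quick way to confirm the controller came up on Fargate and picked up the pre-created service account (an optional check, not from the guide; Fargate node names typically start with "fargate-"):

```bash
# Optional: the controller pods should be scheduled onto Fargate nodes.
kubectl get pods -n karpenter -o wide
# Optional: the service account should carry the IRSA role-arn annotation created by eksctl.
kubectl get serviceaccount karpenter -n karpenter -o yaml
```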

### (Optional) Enable Verbose Logging
@@ -103,17 +97,18 @@ kind: Provisioner
metadata:
  name: default
spec:
  ttlSeconds: 30
Contributor (author): I added this just to show how people can configure it and for the cleanup section later to help people avoid costs.
  cluster:
    name: ${CLUSTER_NAME}
    caBundle: $(aws eks describe-cluster --name ${CLUSTER_NAME} --query "cluster.certificateAuthority.data" --output json)
    endpoint: $(aws eks describe-cluster --name ${CLUSTER_NAME} --query "cluster.endpoint" --output json)
EOF
kubectl get provisioner default -oyaml
kubectl get provisioner default -o yaml
```
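If the provisioner doesn't behave as expected, describing it surfaces its status and any events; this is an optional troubleshooting step, not part of the original guide:

```bash
# Optional: inspect the default provisioner's status and events.
kubectl describe provisioner default
```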

### Create some pods
Create some dummy pods and observe logs.
> Note: this will cause EC2 Instances to launch, which will be billed to your AWS Account.

```bash
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
@@ -141,13 +136,31 @@ kubectl scale deployment inflate --replicas 5
kubectl logs -f -n karpenter $(kubectl get pods -n karpenter -l karpenter=controller -o name)
```

You can see which EC2 instance type Karpenter added to your cluster with:
```bash
kubectl get no -L "node.kubernetes.io/instance-type"
```
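The same listing can be extended with other well-known node labels; the zone label below is an illustrative addition, not part of the original guide:

```bash
# Illustrative: show instance type and availability zone for every node.
kubectl get nodes -L node.kubernetes.io/instance-type -L topology.kubernetes.io/zone
```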

If you scale the deployment replicas back down, the instance will be terminated after 30 seconds (ttlSeconds).
```bash
kubectl scale deployment inflate --replicas 0
```

Or you can manually delete the node:

> Karpenter automatically adds a node finalizer to properly cordon and drain nodes before they are terminated.
```bash
kubectl delete node $NODE_NAME
```
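To see the finalizer mentioned above before deleting, something like this works (illustrative only; `$NODE_NAME` is the same placeholder used in the command above):

```bash
# Illustrative: print the finalizers set on the node; the Karpenter termination finalizer should be listed.
kubectl get node $NODE_NAME -o jsonpath='{.metadata.finalizers}'
```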

### Cleanup
> To avoid additional costs, make sure you delete all EC2 instances before deleting the other cluster resources.
```bash
helm delete karpenter -n karpenter
aws cloudformation delete-stack --stack-name Karpenter-${CLUSTER_NAME}
aws ec2 describe-launch-templates \
| jq -r ".LaunchTemplates[].LaunchTemplateName" \
| grep -i karpenter \
| grep -i Karpenter-${CLUSTER_NAME} \
| xargs -I{} aws ec2 delete-launch-template --launch-template-name {}
```
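Before tearing down the rest of the stack, it may be worth confirming no Karpenter-launched instances are still running. The filter below assumes those instances carry the same cluster tag used for subnet discovery earlier; adjust it if your instances are tagged differently:

```bash
# Illustrative: list any running instances that still carry the cluster tag.
aws ec2 describe-instances \
  --filters "Name=tag-key,Values=kubernetes.io/cluster/${CLUSTER_NAME}" \
            "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].InstanceId' \
  --output text
```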

43 changes: 43 additions & 0 deletions docs/aws/eks-config.yaml
@@ -0,0 +1,43 @@
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER_NAME}
  region: ${AWS_DEFAULT_REGION}

iam:
  withOIDC: true
  serviceAccounts:
  - metadata:
      name: karpenter
      namespace: karpenter
    attachPolicy:
      Version: "2012-10-17"
      Statement:
      - Effect: Allow
        Resource: "*"
        Action:
        # Write Operations
        - "ec2:CreateLaunchTemplate"
        - "ec2:CreateFleet"
        - "ec2:RunInstances"
        - "ec2:CreateTags"
        - "iam:PassRole"
        - "ec2:TerminateInstances"
        # Read Operations
        - "ec2:DescribeLaunchTemplates"
        - "ec2:DescribeInstances"
        - "ec2:DescribeSecurityGroups"
        - "ec2:DescribeSubnets"
        - "ec2:DescribeInstanceTypes"
        - "ec2:DescribeInstanceTypeOfferings"
        - "ec2:DescribeAvailabilityZones"
        - "ssm:GetParameter"

fargateProfiles:
  - name: karpenter
    selectors:
    - namespace: karpenter
  - name: kube-system
    selectors:
    - namespace: kube-system
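Since the cluster-creation command pipes this file through envsubst, one way to preview exactly what eksctl will receive is to render it locally first; this is an optional step, not part of the guide:

```bash
# Optional: render the config with the current environment variables and review it before creating the cluster.
curl -fsSL https://raw.githubusercontent.com/awslabs/karpenter/"${KARPENTER_VERSION}"/docs/aws/eks-config.yaml \
  | envsubst \
  | less
```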
54 changes: 0 additions & 54 deletions docs/aws/karpenter.cloudformation.yaml
@@ -1,64 +1,10 @@
AWSTemplateFormatVersion: "2010-09-09"
Description: Resources used by https://github.com/awslabs/karpenter
Parameters:
  OpenIDConnectIdentityProvider:
    Type: String
    Description: "Example oidc.eks.us-west-2.amazonaws.com/id/1234567890"
  ClusterName:
    Type: String
    Description: "EKS cluster name"
Resources:
  KarpenterControllerRole:
Contributor: Now users are forced to install karpenter using eksctl (to create this role). Thoughts on alternatives?

Contributor (author): For the example we're only showing eksctl (and how easy it is). We should absolutely document how to do this manually and what the requirements are (as well as how to customize it). That all seemed out of scope for the getting started guide.

Contributor: SGTM

    Type: "AWS::IAM::Role"
    Properties:
      RoleName: !Sub "KarpenterControllerRole-${ClusterName}"
      Path: /
      AssumeRolePolicyDocument: !Sub |
        {
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Principal": {
              "Federated": "arn:aws:iam::${AWS::AccountId}:oidc-provider/${OpenIDConnectIdentityProvider}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
              "StringEquals": {
                "${OpenIDConnectIdentityProvider}:aud": "sts.${AWS::URLSuffix}",
                "${OpenIDConnectIdentityProvider}:sub": "system:serviceaccount:karpenter:karpenter"
              }
            }
          }]
        }
  KarpenterControllerPolicy:
    Type: "AWS::IAM::Policy"
    Properties:
      PolicyName: !Sub "KarpenterControllerPolicy-${ClusterName}"
      Roles:
        -
          Ref: "KarpenterControllerRole"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Resource: "*"
            Action:
              # Write Operations
              - "ec2:CreateLaunchTemplate"
              - "ec2:CreateFleet"
              - "ec2:RunInstances"
              - "ec2:CreateTags"
              - "iam:PassRole"
              - "ec2:TerminateInstances"
              # Read Operations
              - "ec2:DescribeLaunchTemplates"
              - "ec2:DescribeInstances"
              - "ec2:DescribeSecurityGroups"
              - "ec2:DescribeSubnets"
              - "ec2:DescribeInstanceTypes"
              - "ec2:DescribeInstanceTypeOfferings"
              - "ec2:DescribeAvailabilityZones"
              - "ssm:GetParameter"
  KarpenterNodeInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties: