diff --git a/content/karpenter/010_prerequisites/attach_workspaceiam.md b/content/karpenter/010_prerequisites/attach_workspaceiam.md deleted file mode 100644 index a206d197..00000000 --- a/content/karpenter/010_prerequisites/attach_workspaceiam.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: "Attach the IAM role to your Workspace" -chapter: false -weight: 50 ---- - -{{% notice note %}} -**Select the tab** and follow the specific instructions depending on whether you are… -{{% /notice %}} - - -{{< tabs name="Region" >}} - {{< tab name="...ON YOUR OWN" include="on_your_own_updateiam.md" />}} - {{< tab name="...AT AN AWS EVENT" include="at_an_aws_updateiam.md" />}} -{{< /tabs >}} \ No newline at end of file diff --git a/content/karpenter/010_prerequisites/aws_event.md b/content/karpenter/010_prerequisites/aws_event.md index 94cdbca0..f342f489 100644 --- a/content/karpenter/010_prerequisites/aws_event.md +++ b/content/karpenter/010_prerequisites/aws_event.md @@ -24,36 +24,20 @@ If you are at an AWS event, an AWS account was created for you to use throughout You are now logged in to the AWS console in an account that was created for you, and will be available only throughout the workshop run time. {{% notice info %}} -In the interest of time for shorter events we sometimes deploy the resources required as a prerequisite for you. If you were told so, please review the cloudformation outputs of the stack that was deployed by **expanding the instructions below**. +In the interest of time we have deployed everything required to run Karpenter for this workshop. All the prerequisites and dependencies have been deployed. The resources deployed can be found in this CloudFormation Template (**[eks-spot-workshop-quickstarter-cnf.yml](https://raw.githubusercontent.com/awslabs/ec2-spot-workshops/master/content/using_ec2_spot_instances_with_eks/010_prerequisites/prerequisites.files/eks-spot-workshop-quickstart-cnf.yml)**). 
The template deploys resources such as (a) An [AWS Cloud9](https://console.aws.amazon.com/cloud9) workspace with all the dependencies and IAM privileges to run the workshop (b) An EKS Cluster with the name `eksworkshop-eksctl` and (c) an [EKS managed node group](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) with 2 on-demand instances. {{% /notice %}} -{{%expand "Click to reveal detailed instructions" %}} +#### Getting access to Cloud9 -#### What resources are already deployed {#resources_deployed} - -We have deployed the below resources required to get started with the workshop using a CloudFormation Template (**[eks-spot-workshop-quickstarter-cnf.yml](https://raw.githubusercontent.com/awslabs/ec2-spot-workshops/master/content/using_ec2_spot_instances_with_eks/010_prerequisites/prerequisites.files/eks-spot-workshop-quickstart-cnf.yml)**), Please reference the below resources created by the stack. - -+ An [AWS Cloud9](https://console.aws.amazon.com/cloud9) workspace with - - An IAM role created and attached to the workspace with Administrator access - - Kubernetes tools installed (kubectl, jq and envsubst) - - awscli upgraded to v2 - - Created and imported a key pair to Amazon EC2 - - [eksctl](https://eksctl.io/) installed, The official CLI for Amazon EKS - -+ An EKS cluster with the name `eksworkshop-eksctl` and a [EKS managed node group](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) with 2 on-demand instances. - - -#### Use your resources - -In this workshop, you'll need to reference the resources created by the CloudFormation stack that we setup for you. +In this workshop, you'll need to reference the resources created by the CloudFormation stack. 1. On the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation), select the stack name that starts with **mod-** in the list. -1. In the stack details pane, click the **Outputs** tab. +2. In the stack details pane, click the **Outputs** tab. 
![cnf_output](/images/karpenter/prerequisites/cnf_output.png) -It is recommended that you keep this window open so you can easily refer to the outputs and resources throughout the workshop. +It is recommended that you keep this tab / window open so you can easily refer to the outputs and resources throughout the workshop. {{% notice info %}} you will notice additional Cloudformation stacks were also deployed which is the result of the stack that starts with **mod-**. One to deploy the Cloud9 Workspace and two other to create the EKS cluster and managed nodegroup. @@ -78,9 +62,7 @@ aws sts get-caller-identity {{% insert-md-from-file file="karpenter/010_prerequisites/at_an_aws_validaterole.md" %}} -Since we have already setup the prerequisites, **you can head straight to [Test the Cluster]({{< relref "/karpenter/020_eksctl/test.md" >}})** - -{{% /expand%}} +You are now ready to **[Test the Cluster]({{< relref "/karpenter/test.md" >}})** diff --git a/content/karpenter/010_prerequisites/awscli.md b/content/karpenter/010_prerequisites/awscli.md deleted file mode 100644 index 3aeb060e..00000000 --- a/content/karpenter/010_prerequisites/awscli.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: "Update to the latest AWS CLI" -chapter: false -weight: 45 -comment: default install now includes aws-cli/1.15.83 ---- - -{{% notice tip %}} -For this workshop, please ignore warnings about the version of pip being used. -{{% /notice %}} - -1. Run the following command to view the current version of aws-cli: -``` -aws --version -``` - -1. Update to the latest version: -``` -curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" -unzip awscliv2.zip -sudo ./aws/install -. ~/.bash_profile -``` - -1. 
Confirm you have a newer version: -``` -aws --version -``` diff --git a/content/karpenter/010_prerequisites/k8stools.md b/content/karpenter/010_prerequisites/k8stools.md deleted file mode 100644 index 6e5bca8d..00000000 --- a/content/karpenter/010_prerequisites/k8stools.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -title: "Install Kubernetes Tools" -chapter: false -weight: 40 ---- - -Amazon EKS clusters require kubectl and kubelet binaries and the aws-cli or aws-iam-authenticator -binary to allow IAM authentication for your Kubernetes cluster. - -{{% notice tip %}} -In this workshop we will give you the commands to download the Linux -binaries. If you are running Mac OSX / Windows, please [see the official EKS docs -for the download links.](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) -{{% /notice %}} - -#### Install kubectl - -``` -export KUBECTL_VERSION=v1.23.7 -sudo curl --silent --location -o /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl -sudo chmod +x /usr/local/bin/kubectl -``` - -#### Enable Kubectl bash_completion - -``` -kubectl completion bash >> ~/.bash_completion -. /etc/profile.d/bash_completion.sh -. ~/.bash_completion -``` - -#### Set the AWS Load Balancer Controller version - -``` -echo 'export LBC_VERSION="v2.3.0"' >> ~/.bash_profile -. 
~/.bash_profile -``` - -#### Install JQ and envsubst -``` -sudo yum -y install jq gettext bash-completion moreutils -``` - -#### Installing YQ for Yaml processing - -``` -echo 'yq() { - docker run --rm -i -v "${PWD}":/workdir mikefarah/yq "$@" -}' | tee -a ~/.bashrc && source ~/.bashrc -``` - -#### Verify the binaries are in the path and executable -``` -for command in kubectl jq envsubst - do - which $command &>/dev/null && echo "$command in path" || echo "$command NOT FOUND" - done -``` \ No newline at end of file diff --git a/content/karpenter/010_prerequisites/self_paced.md b/content/karpenter/010_prerequisites/self_paced.md index e5d5ce68..bb1194b3 100644 --- a/content/karpenter/010_prerequisites/self_paced.md +++ b/content/karpenter/010_prerequisites/self_paced.md @@ -8,7 +8,10 @@ weight: 10 Only complete this section if you are running the workshop on your own. If you are at an AWS hosted event (such as re:Invent, Kubecon, Immersion Day, etc), go to [Start the workshop at an AWS event]({{< ref "/karpenter/010_prerequisites/aws_event.md" >}}). {{% /notice %}} -### Running the workshop on your own +## Running the workshop on your own + + +### Creating an account to run the workshop {{% notice warning %}} Your account must have the ability to create new IAM roles and scope other IAM permissions. @@ -33,5 +36,53 @@ as an IAM user with administrator access to the AWS account: 1. Take note of the login URL and save: ![Login URL](/images/karpenter/prerequisites/iam-4-save-url.png) +### Deploying CloudFormation + +In the interest of time and to focus just on Karpenter, we will install everything required to run this Karpenter workshop using CloudFormation. + +1. Download this CloudFormation template to a local file (**[eks-spot-workshop-quickstarter-cnf.yml](https://raw.githubusercontent.com/awslabs/ec2-spot-workshops/master/content/using_ec2_spot_instances_with_eks/010_prerequisites/prerequisites.files/eks-spot-workshop-quickstart-cnf.yml)**). + +1. 
Go to the CloudFormation console and start the creation of a new stack. Select **Template is ready**, then **Upload a template file**, choose the file that you downloaded to your computer, and click **Next** + +1. Fill in the **Stack Name** with 'karpenter-workshop', leave all the settings in the parameters section at their default values, and click **Next** + +1. On the Configure stack options page, just scroll to the bottom of the page and click **Next** + +1. Finally, on the **Review karpenter-workshop** page, go to the bottom, tick *I acknowledge that AWS CloudFormation might create IAM resources.* in the `Capabilities` section, and then click **Create stack** + +{{% notice warning %}} +The deployment of this stack may take up to 20 minutes. You should wait until all the resources in the CloudFormation stack have been created before you start the rest of the workshop. The template deploys resources such as (a) An [AWS Cloud9](https://console.aws.amazon.com/cloud9) workspace with all the dependencies and IAM privileges to run the workshop (b) An EKS Cluster with the name `eksworkshop-eksctl` and (c) an [EKS managed node group](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) with 2 on-demand instances. +{{% /notice %}} + +### Checking the completion of the stack deployment + +One way to check that your stack has been fully deployed is to confirm that all the CloudFormation dependencies are green and succeeded in the CloudFormation dashboard; this should look similar to the state below. + +![cnf_output](/images/karpenter/prerequisites/cfn_stak_completion.png) + +#### Getting access to Cloud9 + +In this workshop, you'll need to reference the resources created by the CloudFormation stack. + +1. On the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation), select the stack name that starts with **mod-** in the list. + +2. In the stack details pane, click the **Outputs** tab. 
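The console steps above can also be checked from a shell. Below is a minimal sketch of pulling one value out of a `describe-stacks`-style Outputs payload; the JSON here is sample data standing in for the real output of `aws cloudformation describe-stacks --stack-name <stack> --query 'Stacks[0].Outputs'`, and the `Cloud9IDE` key and URL are illustrative, not guaranteed workshop values:

```shell
# Sample payload standing in for:
#   aws cloudformation describe-stacks --stack-name <mod-...> \
#     --query 'Stacks[0].Outputs'
# The key/value below are illustrative placeholders.
outputs='[{"OutputKey":"Cloud9IDE","OutputValue":"https://console.aws.amazon.com/cloud9/ide/example"}]'

# Extract one OutputValue by its OutputKey using only python3.
cloud9_url=$(echo "$outputs" | python3 -c '
import json, sys
outs = json.load(sys.stdin)
print(next(o["OutputValue"] for o in outs if o["OutputKey"] == "Cloud9IDE"))
')
echo "$cloud9_url"
```

Swapping the sample `outputs` variable for a live `aws cloudformation describe-stacks` call gives you the same values the console shows on the **Outputs** tab.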
+ +![cnf_output](/images/karpenter/prerequisites/cnf_output.png) + +It is recommended that you keep this tab or window open so you can easily refer to the outputs and resources throughout the workshop. + +{{% notice info %}} +You will notice that additional CloudFormation stacks were also deployed as a result of the stack that starts with **mod-**: one to deploy the Cloud9 workspace and two others to create the EKS cluster and managed nodegroup. +{{% /notice %}} + +#### Launch your Cloud9 workspace + +- Click on the URL next to `Cloud9IDE` in the outputs + +{{% insert-md-from-file file="karpenter/010_prerequisites/workspace_at_launch.md" %}} + +{{% insert-md-from-file file="karpenter/010_prerequisites/update_workspace_settings.md" %}} + -Once you have completed the step above, **you can head straight to [Create a Workspace]({{< ref "/karpenter/010_prerequisites/workspace.md" >}})** \ No newline at end of file +You are now ready to **[Test the Cluster]({{< relref "/karpenter/test.md" >}})** \ No newline at end of file diff --git a/content/karpenter/010_prerequisites/sshkey.md b/content/karpenter/010_prerequisites/sshkey.md deleted file mode 100644 index 2878c344..00000000 --- a/content/karpenter/010_prerequisites/sshkey.md +++ /dev/null @@ -1,25 +0,0 @@ ---- -title: "Create an SSH key" -chapter: false -weight: 80 ---- - -{{% notice info %}} -Starting from here, when you see command to be entered such as below, you will enter these commands into Cloud9 IDE. You can use the **Copy to clipboard** feature (right hand upper corner) to simply copy and paste into Cloud9. In order to paste, you can use Ctrl + V for Windows or Command + V for Mac. -{{% /notice %}} - -Please run this command to generate SSH Key in Cloud9. This key will be used on the worker node instances to allow ssh access if necessary. 
- -``` -ssh-keygen -``` - -{{% notice tip %}} -Press `enter` 3 times to take the default choices -{{% /notice %}} - -Upload the public key to your EC2 region: - -``` -aws ec2 import-key-pair --key-name "eksworkshop" --public-key-material fileb://~/.ssh/id_rsa.pub -``` diff --git a/content/karpenter/010_prerequisites/update_workspaceiam.md b/content/karpenter/010_prerequisites/update_workspaceiam.md deleted file mode 100644 index 2ff11a27..00000000 --- a/content/karpenter/010_prerequisites/update_workspaceiam.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -title: "Update IAM settings for your Workspace" -chapter: false -weight: 60 ---- - -{{% notice info %}} -**Note**: Cloud9 normally manages IAM credentials dynamically. This isn't currently compatible with the EKS IAM authentication, so we will disable it and rely on the IAM role instead. -{{% /notice %}} - -- Return to your workspace and click the sprocket, or launch a new tab to open the Preferences tab -- Select **AWS SETTINGS** -- Turn off **AWS managed temporary credentials** -- Close the Preferences tab -![c9disableiam](/images/karpenter/prerequisites/c9disableiam.png) - -To ensure temporary credentials aren't already in place we will also remove -any existing credentials file: -``` -rm -vf ${HOME}/.aws/credentials -``` - -We should configure our aws cli with our current region as default: -``` -export ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account) -export AWS_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region') - -echo "export ACCOUNT_ID=${ACCOUNT_ID}" >> ~/.bash_profile -echo "export AWS_REGION=${AWS_REGION}" >> ~/.bash_profile -aws configure set default.region ${AWS_REGION} -aws configure get default.region -``` - -### Validate the IAM role {#validate_iam} - -Use the [GetCallerIdentity](https://docs.aws.amazon.com/cli/latest/reference/sts/get-caller-identity.html) CLI command to validate that the Cloud9 IDE is using the correct IAM role. 
- -``` -aws sts get-caller-identity - -``` - -{{% notice note %}} -**Select the tab** and validate the assumed role… -{{% /notice %}} - -{{< tabs name="Region" >}} - {{< tab name="...AT AN AWS EVENT" include="at_an_aws_validaterole.md" />}} - {{< tab name="...ON YOUR OWN" include="on_your_own_validaterole.md" />}} - -{{< /tabs >}} - diff --git a/content/karpenter/010_prerequisites/workspace.md b/content/karpenter/010_prerequisites/workspace.md deleted file mode 100644 index 8c64c21a..00000000 --- a/content/karpenter/010_prerequisites/workspace.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: "Create a Workspace" -chapter: false -weight: 30 ---- - -{{% notice warning %}} -If you are running the workshop on your own, the Cloud9 workspace should be built by an IAM user with Administrator privileges, not the root account user. Please ensure you are logged in as an IAM user, not the root -account user. -{{% /notice %}} - -{{% notice info %}} -If you are at an AWS hosted event (such as re:Invent, Kubecon, Immersion Day, or any other event hosted by -an AWS employee) follow the instructions on the region that should be used to launch resources -{{% /notice %}} - -{{% notice tip %}} -Ad blockers, javascript disablers, and tracking blockers should be disabled for -the cloud9 domain, or connecting to the workspace might be impacted. -Cloud9 requires third-party-cookies. You can whitelist the [specific domains]( https://docs.aws.amazon.com/cloud9/latest/user-guide/troubleshooting.html#troubleshooting-env-loading). -{{% /notice %}} - -### Launch Cloud9 in your closest region: - -{{< tabs name="Region" >}} - {{< tab name="N. 
Virginia" include="us-east-1.md" />}} - {{< tab name="Oregon" include="us-west-2.md" />}} - {{< tab name="Ireland" include="eu-west-1.md" />}} - {{< tab name="Ohio" include="us-east-2.md" />}} - {{< tab name="Singapore" include="ap-southeast-1.md" />}} -{{< /tabs >}} - -- Select **Create environment** -- Name it **eksworkshop**, and take all other defaults -- When it comes up, customize the environment by closing the **welcome tab** -and **lower work area**, and opening a new **terminal** tab in the main work area: -![c9before](/images/using_ec2_spot_instances_with_eks/prerequisites/c9before.png) - -- Your workspace should now look like this: -![c9after](/images/using_ec2_spot_instances_with_eks/prerequisites/c9after.png) - -- If you like this theme, you can choose it yourself by selecting **View / Themes / Solarized / Solarized Dark** -in the Cloud9 workspace menu. diff --git a/content/karpenter/020_eksctl/_index.md b/content/karpenter/020_eksctl/_index.md deleted file mode 100644 index 2b8cac9f..00000000 --- a/content/karpenter/020_eksctl/_index.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: "Launch using eksctl" -chapter: true -weight: 20 ---- - -# Launch using [eksctl](https://eksctl.io/) - -[eksctl](https://eksctl.io) is the official CLI for Amazon EKS. It is written in Go, and uses CloudFormation. Eksctl is a tool jointly developed by AWS and [Weaveworks](https://weave.works) that automates much of the experience of creating EKS clusters. - -In this module, we will use eksctl to launch and configure our EKS cluster and nodes. 
- -{{< youtube jGrdVSlIkNQ >}} diff --git a/content/karpenter/020_eksctl/create_eks_cluster_eksctl_command.md b/content/karpenter/020_eksctl/create_eks_cluster_eksctl_command.md deleted file mode 100644 index a3576d43..00000000 --- a/content/karpenter/020_eksctl/create_eks_cluster_eksctl_command.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -title: "Create EKS cluster Command" -chapter: false -disableToc: true -hidden: true ---- - -Create an eksctl deployment file (eksworkshop.yaml) to create an EKS cluster: - - -``` -cat << EOF > eksworkshop.yaml ---- -apiVersion: eksctl.io/v1alpha5 -kind: ClusterConfig - -metadata: - name: eksworkshop-eksctl - region: ${AWS_REGION} - version: "1.21" - tags: - karpenter.sh/discovery: ${CLUSTER_NAME} -iam: - withOIDC: true -managedNodeGroups: -- amiFamily: AmazonLinux2 - instanceType: m5.large - name: mng-od-m5large - desiredCapacity: 2 - maxSize: 3 - minSize: 0 - labels: - alpha.eksctl.io/cluster-name: ${CLUSTER_NAME} - alpha.eksctl.io/nodegroup-name: mng-od-m5large - intent: control-apps - tags: - alpha.eksctl.io/nodegroup-name: mng-od-m5large - alpha.eksctl.io/nodegroup-type: managed - k8s.io/cluster-autoscaler/node-template/label/intent: control-apps - iam: - withAddonPolicies: - autoScaler: true - cloudWatch: true - albIngress: true - privateNetworking: true - -EOF -``` - -Next, use the file you created as the input for the eksctl cluster creation. 
- -``` -eksctl create cluster -f eksworkshop.yaml -``` - -{{% notice note %}} -Launching EKS and all the dependencies will take approximately 15 minutes -{{% /notice %}} \ No newline at end of file diff --git a/content/karpenter/020_eksctl/launcheks.md b/content/karpenter/020_eksctl/launcheks.md deleted file mode 100644 index 3842e4c6..00000000 --- a/content/karpenter/020_eksctl/launcheks.md +++ /dev/null @@ -1,95 +0,0 @@ ---- -title: "Launch EKS" -date: 2018-08-07T13:34:24-07:00 -weight: 20 ---- - - -{{% notice warning %}} -**DO NOT PROCEED** with this step unless you have [validated the IAM role]({{< relref "../010_prerequisites/update_workspaceiam.md#validate_iam" >}}) in use by the Cloud9 IDE. You will not be able to run the necessary kubectl commands in the later modules unless the EKS cluster is built using the IAM role. -{{% /notice %}} - -#### Challenge: -**How do I check the IAM role on the workspace?** - -{{%expand "Expand here to see the solution" %}} - -### Validate the IAM role {#validate_iam} - -Use the [GetCallerIdentity](https://docs.aws.amazon.com/cli/latest/reference/sts/get-caller-identity.html) CLI command to validate that the Cloud9 IDE is using the correct IAM role. - -``` -aws sts get-caller-identity - -``` - -You can verify what the output an correct role shoulld be in the **[validate the IAM role section]({{< relref "../010_prerequisites/update_workspaceiam.md" >}})**. If you do see the correct role, proceed to next step to create an EKS cluster. 
-{{% /expand %}} - - -### Create an EKS cluster - -Create an eksctl deployment file (eksworkshop.yaml) to create an EKS cluster: - - -``` -cat << EOF > eksworkshop.yaml ---- -apiVersion: eksctl.io/v1alpha5 -kind: ClusterConfig - -metadata: - name: eksworkshop-eksctl - region: ${AWS_REGION} - version: "1.23" - tags: - karpenter.sh/discovery: eksworkshop-eksctl -iam: - withOIDC: true -managedNodeGroups: -- amiFamily: AmazonLinux2 - instanceType: m5.large - name: mng-od-m5large - desiredCapacity: 2 - maxSize: 3 - minSize: 0 - labels: - alpha.eksctl.io/cluster-name: eksworkshop-eksctl - alpha.eksctl.io/nodegroup-name: mng-od-m5large - intent: control-apps - tags: - alpha.eksctl.io/nodegroup-name: mng-od-m5large - alpha.eksctl.io/nodegroup-type: managed - k8s.io/cluster-autoscaler/node-template/label/intent: control-apps - iam: - withAddonPolicies: - autoScaler: true - cloudWatch: true - albIngress: true - privateNetworking: true - -EOF -``` - -Next, use the file you created as the input for the eksctl cluster creation. - -``` -eksctl create cluster -f eksworkshop.yaml -``` - -{{% notice info %}} -Launching EKS and all the dependencies will take approximately 15 minutes -{{% /notice %}} - -`eksctl create cluster` command allows you to create the cluster and managed nodegroups in sequence. There are a few things to note in the configuration that we just used to create the cluster and a managed nodegroup. - - * Resources created by `eksctl` have the tag `karpenter.sh/discovery` with the cluster name as the value. We'll need this later. - * Nodegroup configurations are set under the **managedNodeGroups** section, this indicates that the node group is managed by EKS. - * Nodegroup instance type is **m5.large** with **minSize** to 0, **maxSize** to 3 and **desiredCapacity** to 2. This nodegroup has capacity type set to On-Demand Instances by default. 
- - * Notice that the we add 3 node labels: - * **alpha.eksctl.io/cluster-name**, to indicate the nodes belong to **eksworkshop-eksctl** cluster. - * **alpha.eksctl.io/nodegroup-name**, to indicate the nodes belong to **mng-od-m5large** nodegroup. - * **intent**, to allow you to deploy control applications on nodes that have been labeled with value **control-apps** - - * Amazon EKS adds an additional Kubernetes label **eks.amazonaws.com/capacityType: ON_DEMAND**, to all On-Demand Instances in your managed node group. You can use this label to schedule stateful applications on On-Demand nodes. \ No newline at end of file diff --git a/content/karpenter/020_eksctl/prerequisites.md b/content/karpenter/020_eksctl/prerequisites.md deleted file mode 100644 index de5c47b4..00000000 --- a/content/karpenter/020_eksctl/prerequisites.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: "Prerequisites" -date: 2018-08-07T13:31:55-07:00 -weight: 10 ---- - -For this module, we need to download the [eksctl](https://eksctl.io/) binary: -``` -export EKSCTL_VERSION=v0.110.0 -curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/${EKSCTL_VERSION}/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp - -sudo mv -v /tmp/eksctl /usr/local/bin -``` - -Confirm the eksctl command works: -``` -eksctl version -``` diff --git a/content/karpenter/030_k8s_tools/_index.md b/content/karpenter/040_k8s_tools/_index.md similarity index 96% rename from content/karpenter/030_k8s_tools/_index.md rename to content/karpenter/040_k8s_tools/_index.md index fe2618ad..51d29dcd 100644 --- a/content/karpenter/030_k8s_tools/_index.md +++ b/content/karpenter/040_k8s_tools/_index.md @@ -1,7 +1,7 @@ --- title: "Install Kubernetes Tools" chapter: true -weight: 30 +weight: 40 --- # Install Kubernetes tools diff --git a/content/karpenter/030_k8s_tools/deploy_metric_server.md b/content/karpenter/040_k8s_tools/deploy_metric_server.md similarity index 100% rename from 
content/karpenter/030_k8s_tools/deploy_metric_server.md rename to content/karpenter/040_k8s_tools/deploy_metric_server.md diff --git a/content/karpenter/030_k8s_tools/helm_deploy.md b/content/karpenter/040_k8s_tools/helm_deploy.md similarity index 100% rename from content/karpenter/030_k8s_tools/helm_deploy.md rename to content/karpenter/040_k8s_tools/helm_deploy.md diff --git a/content/karpenter/030_k8s_tools/install_kube_ops_view.md b/content/karpenter/040_k8s_tools/install_kube_ops_view.md similarity index 100% rename from content/karpenter/030_k8s_tools/install_kube_ops_view.md rename to content/karpenter/040_k8s_tools/install_kube_ops_view.md diff --git a/content/karpenter/030_k8s_tools/k8_tools.files/kube_ops_view/deployment.yaml b/content/karpenter/040_k8s_tools/k8_tools.files/kube_ops_view/deployment.yaml similarity index 100% rename from content/karpenter/030_k8s_tools/k8_tools.files/kube_ops_view/deployment.yaml rename to content/karpenter/040_k8s_tools/k8_tools.files/kube_ops_view/deployment.yaml diff --git a/content/karpenter/030_k8s_tools/k8_tools.files/kube_ops_view/kustomization.yaml b/content/karpenter/040_k8s_tools/k8_tools.files/kube_ops_view/kustomization.yaml similarity index 100% rename from content/karpenter/030_k8s_tools/k8_tools.files/kube_ops_view/kustomization.yaml rename to content/karpenter/040_k8s_tools/k8_tools.files/kube_ops_view/kustomization.yaml diff --git a/content/karpenter/030_k8s_tools/k8_tools.files/kube_ops_view/rbac.yaml b/content/karpenter/040_k8s_tools/k8_tools.files/kube_ops_view/rbac.yaml similarity index 100% rename from content/karpenter/030_k8s_tools/k8_tools.files/kube_ops_view/rbac.yaml rename to content/karpenter/040_k8s_tools/k8_tools.files/kube_ops_view/rbac.yaml diff --git a/content/karpenter/030_k8s_tools/k8_tools.files/kube_ops_view/service.yaml b/content/karpenter/040_k8s_tools/k8_tools.files/kube_ops_view/service.yaml similarity index 100% rename from 
content/karpenter/030_k8s_tools/k8_tools.files/kube_ops_view/service.yaml rename to content/karpenter/040_k8s_tools/k8_tools.files/kube_ops_view/service.yaml diff --git a/content/karpenter/040_karpenter/_index.md b/content/karpenter/050_karpenter/_index.md similarity index 97% rename from content/karpenter/040_karpenter/_index.md rename to content/karpenter/050_karpenter/_index.md index f12d1220..235a5403 100644 --- a/content/karpenter/040_karpenter/_index.md +++ b/content/karpenter/050_karpenter/_index.md @@ -2,7 +2,7 @@ title: "Karpenter" titleMenu: "Karpenter" chapter: true -weight: 40 +weight: 50 draft: false --- diff --git a/content/karpenter/040_karpenter/advanced_provisioner.md b/content/karpenter/050_karpenter/advanced_provisioner.md similarity index 100% rename from content/karpenter/040_karpenter/advanced_provisioner.md rename to content/karpenter/050_karpenter/advanced_provisioner.md diff --git a/content/karpenter/040_karpenter/automatic_node_provisioning.md b/content/karpenter/050_karpenter/automatic_node_provisioning.md similarity index 100% rename from content/karpenter/040_karpenter/automatic_node_provisioning.md rename to content/karpenter/050_karpenter/automatic_node_provisioning.md diff --git a/content/karpenter/040_karpenter/consolidation.md b/content/karpenter/050_karpenter/consolidation.md similarity index 98% rename from content/karpenter/040_karpenter/consolidation.md rename to content/karpenter/050_karpenter/consolidation.md index 04dbda0f..7d05303c 100644 --- a/content/karpenter/040_karpenter/consolidation.md +++ b/content/karpenter/050_karpenter/consolidation.md @@ -92,7 +92,8 @@ kubectl scale deployment inflate --replicas 6 As for what should happen, check out karpenter logs, remember you can read karpenter logs using the following command ``` -kubectl logs -f deployment/karpenter -n karpenter -c controller --tail=20 +alias kl='for pod in $(kubectl get pods -n karpenter | grep karpenter | awk NF=1) ; do if [[ $(kubectl logs ${pod} -c 
controller -n karpenter --limit-bytes=4096) =~ .*acquired.* ]]; then kubectl logs ${pod} -c controller -n karpenter -f --tail=20; fi; done' +kl ``` Karpenter logs will display the following lines diff --git a/content/karpenter/040_karpenter/ec2_spot_deployments.md b/content/karpenter/050_karpenter/ec2_spot_deployments.md similarity index 100% rename from content/karpenter/040_karpenter/ec2_spot_deployments.md rename to content/karpenter/050_karpenter/ec2_spot_deployments.md diff --git a/content/karpenter/040_karpenter/install_karpenter.md b/content/karpenter/050_karpenter/install_karpenter.md similarity index 100% rename from content/karpenter/040_karpenter/install_karpenter.md rename to content/karpenter/050_karpenter/install_karpenter.md diff --git a/content/karpenter/040_karpenter/multiple_architectures.md b/content/karpenter/050_karpenter/multiple_architectures.md similarity index 96% rename from content/karpenter/040_karpenter/multiple_architectures.md rename to content/karpenter/050_karpenter/multiple_architectures.md index 099628d6..6cc48f44 100644 --- a/content/karpenter/040_karpenter/multiple_architectures.md +++ b/content/karpenter/050_karpenter/multiple_architectures.md @@ -118,7 +118,8 @@ Before we check the selected node, let's cover what Karpenter is expected to do Let's confirm that was the case and only `amd64` considered for scaling up. We can check karpenter logs by running the following command. 
 
 ```
-kubectl logs -f deployment/karpenter -c controller -n karpenter
+alias kl='for pod in $(kubectl get pods -n karpenter | grep karpenter | awk NF=1) ; do if [[ $(kubectl logs ${pod} -c controller -n karpenter --limit-bytes=4096) =~ .*acquired.* ]]; then kubectl logs ${pod} -c controller -n karpenter -f --tail=20; fi; done'
+kl
 ```
 
 The output should show something similar to the lines below
@@ -194,7 +195,8 @@ Karpenter does support the nodeSelector well-known label `node.kubernetes.io/ins
 So in this case we should expect just one instance being considered. You can check Karpenter logs by running:
 
 ```
-kubectl logs -f deployment/karpenter -c controller -n karpenter
+alias kl='for pod in $(kubectl get pods -n karpenter | grep karpenter | awk NF=1) ; do if [[ $(kubectl logs ${pod} -c controller -n karpenter --limit-bytes=4096) =~ .*acquired.* ]]; then kubectl logs ${pod} -c controller -n karpenter -f --tail=20; fi; done'
+kl
 ```
 
 The output should show something similar to the lines below
diff --git a/content/karpenter/040_karpenter/set_up_the_environment.md b/content/karpenter/050_karpenter/set_up_the_environment.md
similarity index 100%
rename from content/karpenter/040_karpenter/set_up_the_environment.md
rename to content/karpenter/050_karpenter/set_up_the_environment.md
diff --git a/content/karpenter/040_karpenter/set_up_the_provisioner.md b/content/karpenter/050_karpenter/set_up_the_provisioner.md
similarity index 90%
rename from content/karpenter/040_karpenter/set_up_the_provisioner.md
rename to content/karpenter/050_karpenter/set_up_the_provisioner.md
index 9b01c00c..9acb50fd 100644
--- a/content/karpenter/040_karpenter/set_up_the_provisioner.md
+++ b/content/karpenter/050_karpenter/set_up_the_provisioner.md
@@ -84,10 +84,16 @@ Karpenter has been designed to be generic and support other Cloud and Infrastruc
 You can create a new terminal window within Cloud9 and leave the command below running so you can come back to that terminal every time you want to look for what Karpenter is doing.
 {{% /notice %}}
 
-To read Karpenter logs from the console you can run the following command.
+To read Karpenter logs you first need to find the pod that acts as the elected leader and get the logs from it. The following line sets up an alias you can use to automate that. The alias looks at the head of each Karpenter controller pod's logs, finds the pod with the elected-leader ("acquired") message, and starts streaming its logs.
 
 ```
-kubectl logs -f deployment/karpenter -c controller -n karpenter
+alias kl='for pod in $(kubectl get pods -n karpenter | grep karpenter | awk NF=1) ; do if [[ $(kubectl logs ${pod} -c controller -n karpenter --limit-bytes=4096) =~ .*acquired.* ]]; then kubectl logs ${pod} -c controller -n karpenter -f --tail=20; fi; done'
+```
+
+From now on, to invoke the alias and get the logs we can just use
+
+```
+kl
 ```
 
 {{% notice info %}}
diff --git a/content/karpenter/040_karpenter/using_alternative_provisioners.md b/content/karpenter/050_karpenter/using_alternative_provisioners.md
similarity index 97%
rename from content/karpenter/040_karpenter/using_alternative_provisioners.md
rename to content/karpenter/050_karpenter/using_alternative_provisioners.md
index c291ecd3..14d388ba 100644
--- a/content/karpenter/040_karpenter/using_alternative_provisioners.md
+++ b/content/karpenter/050_karpenter/using_alternative_provisioners.md
@@ -125,7 +125,8 @@ But there is something that does not match with what we have seen so far with Ka
 Well, let's check first Karpenter log.
 
 ```
-kubectl logs -f deployment/karpenter -c controller -n karpenter
+alias kl='for pod in $(kubectl get pods -n karpenter | grep karpenter | awk NF=1) ; do if [[ $(kubectl logs ${pod} -c controller -n karpenter --limit-bytes=4096) =~ .*acquired.* ]]; then kubectl logs ${pod} -c controller -n karpenter -f --tail=20; fi; done'
+kl
 ```
 
 The output of Karpenter should look similar to the one below
diff --git a/content/karpenter/050_scaling/_index.md b/content/karpenter/060_scaling/_index.md
similarity index 99%
rename from content/karpenter/050_scaling/_index.md
rename to content/karpenter/060_scaling/_index.md
index 0f4fc965..019f0f4c 100644
--- a/content/karpenter/050_scaling/_index.md
+++ b/content/karpenter/060_scaling/_index.md
@@ -1,7 +1,7 @@
 ---
 title: "Scaling App and Cluster"
 chapter: true
-weight: 50
+weight: 60
 ---
 
 # Implement AutoScaling with HPA and Karpenter
diff --git a/content/karpenter/050_scaling/build_and_push_to_ecr.md b/content/karpenter/060_scaling/build_and_push_to_ecr.md
similarity index 100%
rename from content/karpenter/050_scaling/build_and_push_to_ecr.md
rename to content/karpenter/060_scaling/build_and_push_to_ecr.md
diff --git a/content/karpenter/050_scaling/deploy_hpa.md b/content/karpenter/060_scaling/deploy_hpa.md
similarity index 100%
rename from content/karpenter/050_scaling/deploy_hpa.md
rename to content/karpenter/060_scaling/deploy_hpa.md
diff --git a/content/karpenter/050_scaling/fis_experiment.md b/content/karpenter/060_scaling/fis_experiment.md
similarity index 100%
rename from content/karpenter/050_scaling/fis_experiment.md
rename to content/karpenter/060_scaling/fis_experiment.md
diff --git a/content/karpenter/050_scaling/monte_carlo_pi.md b/content/karpenter/060_scaling/monte_carlo_pi.md
similarity index 95%
rename from content/karpenter/050_scaling/monte_carlo_pi.md
rename to content/karpenter/060_scaling/monte_carlo_pi.md
index 35b9a3b1..14d10e5d 100644
--- a/content/karpenter/050_scaling/monte_carlo_pi.md
+++ b/content/karpenter/060_scaling/monte_carlo_pi.md
@@ -104,7 +104,8 @@ kubectl describe provisioner default
 We can confirm the statements above by checking Karpenter logs using the following command. By now you should be very familiar with the log lines expected.
 
 ```
-kubectl logs -f deployment/karpenter -c controller -n karpenter
+alias kl='for pod in $(kubectl get pods -n karpenter | grep karpenter | awk NF=1) ; do if [[ $(kubectl logs ${pod} -c controller -n karpenter --limit-bytes=4096) =~ .*acquired.* ]]; then kubectl logs ${pod} -c controller -n karpenter -f --tail=20; fi; done'
+kl
 ```
 
 Or by running the following command to verify the details of the Spot instance created.
diff --git a/content/karpenter/050_scaling/test_hpa.md b/content/karpenter/060_scaling/test_hpa.md
similarity index 100%
rename from content/karpenter/050_scaling/test_hpa.md
rename to content/karpenter/060_scaling/test_hpa.md
diff --git a/content/karpenter/020_eksctl/console_credentials.md b/content/karpenter/console_credentials.md
similarity index 99%
rename from content/karpenter/020_eksctl/console_credentials.md
rename to content/karpenter/console_credentials.md
index c6efbfc6..9b890e00 100644
--- a/content/karpenter/020_eksctl/console_credentials.md
+++ b/content/karpenter/console_credentials.md
@@ -1,7 +1,7 @@
 ---
 title: "EKS Console Credentials"
 date: 2018-08-07T13:36:57-07:00
-weight: 40
+weight: 30
 ---
 
 In this section we will set up the configuration you need to explore the Elastic Kubernetes Service (EKS) section in the AWS Console and the properties of the newly created EKS cluster.
diff --git a/content/karpenter/020_eksctl/test.md b/content/karpenter/test.md
similarity index 99%
rename from content/karpenter/020_eksctl/test.md
rename to content/karpenter/test.md
index ff1f14dc..8f35d89c 100644
--- a/content/karpenter/020_eksctl/test.md
+++ b/content/karpenter/test.md
@@ -1,7 +1,7 @@
 ---
 title: "Test the Cluster"
 date: 2018-08-07T13:36:57-07:00
-weight: 30
+weight: 20
 ---
 
 ## Test the cluster: Confirm your Nodes, if we see 2 nodes then we know we have authenticated correctly:
diff --git a/static/images/karpenter/prerequisites/cfn_stak_completion.png b/static/images/karpenter/prerequisites/cfn_stak_completion.png
new file mode 100644
index 00000000..a986f784
Binary files /dev/null and b/static/images/karpenter/prerequisites/cfn_stak_completion.png differ
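
Taken out of the diff context, the leader-detection flow behind the new `kl` alias can be sketched as a standalone POSIX shell script. The kubectl calls are stubbed out so the control flow is testable on its own; the pod names and log lines below are illustrative placeholders, not real cluster output:

```shell
#!/usr/bin/env sh
# Sketch of the logic inside the `kl` alias. `list_pods` and `get_logs`
# stand in for the real kubectl invocations; only the leader-detection
# flow itself is shown.

# Stand-in for: kubectl get pods -n karpenter | grep karpenter | awk NF=1
list_pods() {
  printf 'karpenter-5f8c7b9d-abcde\nkarpenter-5f8c7b9d-fghij\n'
}

# Stand-in for: kubectl logs $pod -c controller -n karpenter --limit-bytes=4096
# Only the elected leader logs the "acquired" lease message.
get_logs() {
  case "$1" in
    karpenter-5f8c7b9d-abcde) printf 'INFO leaderelection: attempting to acquire lease\n' ;;
    karpenter-5f8c7b9d-fghij) printf 'INFO controller: successfully acquired lease karpenter/karpenter-leader-election\n' ;;
  esac
}

# Same shape as the alias: scan the head of each pod's log and pick the
# pod whose output contains "acquired".
for pod in $(list_pods); do
  if get_logs "$pod" | grep -q 'acquired'; then
    echo "leader: $pod"   # the alias runs `kubectl logs -f --tail=20` here
  fi
done
```

In the real alias, a match triggers `kubectl logs ${pod} -c controller -n karpenter -f --tail=20` to follow the leader's log stream instead of just printing the pod name.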