diff --git a/website/content/en/docs/concepts/nodeclasses.md b/website/content/en/docs/concepts/nodeclasses.md index bd7a70d50629..a9a4c580d951 100644 --- a/website/content/en/docs/concepts/nodeclasses.md +++ b/website/content/en/docs/concepts/nodeclasses.md @@ -206,7 +206,7 @@ status: status: "True" type: Ready ``` -Refer to the [NodePool docs]({{}}) for settings applicable to all providers. To explore various `EC2NodeClass` configurations, refer to the examples provided [in the Karpenter Github repository](https://github.com/aws/karpenter/blob/v1.0.0/examples/v1/). +Refer to the [NodePool docs]({{}}) for settings applicable to all providers. To explore various `EC2NodeClass` configurations, refer to the examples provided [in the Karpenter GitHub repository](https://github.com/aws/karpenter/blob/v1.0.1/examples/v1/). ## spec.kubelet @@ -399,7 +399,7 @@ AMIFamily does not impact which AMI is discovered, only the UserData generation {{% alert title="Ubuntu Support Dropped at v1" color="warning" %}} -Support for the Ubuntu AMIFamily has been dropped at Karpenter `v1.0.0`. +Support for the Ubuntu AMIFamily has been dropped at Karpenter `v1.0`. This means Karpenter no longer supports automatic AMI discovery and UserData generation for Ubuntu. To continue using Ubuntu AMIs, you will need to select Ubuntu AMIs using `amiSelectorTerms`. @@ -1007,7 +1007,7 @@ spec: chown -R ec2-user ~ec2-user/.ssh ``` -For more examples on configuring fields for different AMI families, see the [examples here](https://github.com/aws/karpenter/blob/v1.0.0/examples/v1). +For more examples on configuring fields for different AMI families, see the [examples here](https://github.com/aws/karpenter/blob/v1.0.1/examples/v1). Karpenter will merge the userData you specify with the default userData for that AMIFamily. See the [AMIFamily]({{< ref "#specamifamily" >}}) section for more details on these defaults. View the sections below to understand the different merge strategies for each AMIFamily. diff --git a/website/content/en/docs/concepts/nodepools.md b/website/content/en/docs/concepts/nodepools.md index 0e099bccff39..b212a722e4c1 100644 --- a/website/content/en/docs/concepts/nodepools.md +++ b/website/content/en/docs/concepts/nodepools.md @@ -27,7 +27,7 @@ Here are things you should know about NodePools: Objects for setting Kubelet features have been moved from the NodePool spec to the EC2NodeClasses spec, to not require other Karpenter providers to support those features. {{% /alert %}} -For some example `NodePool` configurations, see the [examples in the Karpenter GitHub repository](https://github.com/aws/karpenter/blob/v1.0.0/examples/v1/). +For some example `NodePool` configurations, see the [examples in the Karpenter GitHub repository](https://github.com/aws/karpenter/blob/v1.0.1/examples/v1/). ```yaml apiVersion: karpenter.sh/v1 diff --git a/website/content/en/docs/faq.md b/website/content/en/docs/faq.md index 2318827a4dfe..5f75c2cb0450 100644 --- a/website/content/en/docs/faq.md +++ b/website/content/en/docs/faq.md @@ -17,7 +17,7 @@ See [Configuring NodePools]({{< ref "./concepts/#configuring-nodepools" >}}) for AWS is the first cloud provider supported by Karpenter, although it is designed to be used with other cloud providers as well. ### Can I write my own cloud provider for Karpenter? -Yes, but there is no documentation yet for it.
Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v1.0.0/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too. +Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v1.0.1/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too. ### What operating system nodes does Karpenter deploy? Karpenter uses the OS defined by the [AMI Family in your EC2NodeClass]({{< ref "./concepts/nodeclasses#specamifamily" >}}). @@ -29,7 +29,7 @@ Karpenter has multiple mechanisms for configuring the [operating system]({{< ref Karpenter is flexible to multi-architecture configurations using [well known labels]({{< ref "./concepts/scheduling/#supported-labels">}}). ### What RBAC access is required? -All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.0.0/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.0/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.0.0/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v1.0.0/charts/karpenter/templates/role.yaml) files for details. +All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/role.yaml) files for details. ### Can I run Karpenter outside of a Kubernetes cluster? Yes, as long as the controller has network and IAM/RBAC access to the Kubernetes API and your provider API. diff --git a/website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md b/website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md index 14879bbe720e..8b64b39ae293 100644 --- a/website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md +++ b/website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md @@ -12,7 +12,7 @@ Karpenter automatically provisions new nodes in response to unschedulable pods. This guide shows how to get started with Karpenter by creating a Kubernetes cluster and installing Karpenter. To use Karpenter, you must be running a supported Kubernetes cluster on a supported cloud provider. -The guide below explains how to utilize the [Karpenter provider for AWS](https://github.com/aws/karpenter-provider-aws) with EKS. +The guide below explains how to utilize the [Karpenter provider for AWS](https://github.com/aws/karpenter-provider-aws) with EKS. 
See the [AKS Node autoprovisioning article](https://learn.microsoft.com/azure/aks/node-autoprovision) on how to use Karpenter on Azure's AKS or go to the [Karpenter provider for Azure open source repository](https://github.com/Azure/karpenter-provider-azure) for self-hosting on Azure and additional information. @@ -48,7 +48,7 @@ After setting up the tools, set the Karpenter and Kubernetes version: ```bash export KARPENTER_NAMESPACE="kube-system" -export KARPENTER_VERSION="1.0.0" +export KARPENTER_VERSION="1.0.1" export K8S_VERSION="1.30" ``` @@ -115,13 +115,13 @@ See [Enabling Windows support](https://docs.aws.amazon.com/eks/latest/userguide/ As the OCI Helm chart is signed by [Cosign](https://github.com/sigstore/cosign) as part of the release process you can verify the chart before installing it by running the following command. ```bash -cosign verify public.ecr.aws/karpenter/karpenter:1.0.0 \ +cosign verify public.ecr.aws/karpenter/karpenter:1.0.1 \ --certificate-oidc-issuer=https://token.actions.githubusercontent.com \ --certificate-identity-regexp='https://github\.com/aws/karpenter-provider-aws/\.github/workflows/release\.yaml@.+' \ --certificate-github-workflow-repository=aws/karpenter-provider-aws \ --certificate-github-workflow-name=Release \ - --certificate-github-workflow-ref=refs/tags/v1.0.0 \ - --annotations version=1.0.0 + --certificate-github-workflow-ref=refs/tags/v1.0.1 \ + --annotations version=1.0.1 ``` {{% alert title="DNS Policy Notice" color="warning" %}} diff --git a/website/content/en/docs/getting-started/migrating-from-cas/_index.md b/website/content/en/docs/getting-started/migrating-from-cas/_index.md index 5ae3fea172ca..36baf75a2b9f 100644 --- a/website/content/en/docs/getting-started/migrating-from-cas/_index.md +++ b/website/content/en/docs/getting-started/migrating-from-cas/_index.md @@ -92,7 +92,7 @@ One for your Karpenter node role and one for your existing node group. First set the Karpenter release you want to deploy. ```bash -export KARPENTER_VERSION="1.0.0" +export KARPENTER_VERSION="1.0.1" ``` We can now generate a full Karpenter deployment yaml from the Helm chart. @@ -132,7 +132,7 @@ Now that our deployment is ready we can create the karpenter namespace, create t ## Create default NodePool -We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePool](https://github.com/aws/karpenter/tree/v1.0.0/examples/v1) for specific needs. +We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePools](https://github.com/aws/karpenter/tree/v1.0.1/examples/v1) for specific needs.
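To make the default-NodePool step concrete, here is a minimal sketch of a v1 NodePool along the lines of those examples; the requirement values, limits, and disruption settings are illustrative assumptions, not recommendations:

```yaml
# A minimal v1 NodePool sketch (all values illustrative; adapt to your workloads)
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        # Only launch on-demand capacity; add "spot" to allow Spot instances
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
```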
{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step10-create-nodepool.sh" language="bash" %}} diff --git a/website/content/en/docs/reference/cloudformation.md b/website/content/en/docs/reference/cloudformation.md index f9e0e7d4190b..76d474bc1bec 100644 --- a/website/content/en/docs/reference/cloudformation.md +++ b/website/content/en/docs/reference/cloudformation.md @@ -17,7 +17,7 @@ These descriptions should allow you to understand: To download a particular version of `cloudformation.yaml`, set the version and use `curl` to pull the file to your local system: ```bash -export KARPENTER_VERSION="1.0.0" +export KARPENTER_VERSION="1.0.1" curl https://raw.githubusercontent.com/aws/karpenter-provider-aws/v"${KARPENTER_VERSION}"/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml > cloudformation.yaml ``` diff --git a/website/content/en/docs/reference/threat-model.md b/website/content/en/docs/reference/threat-model.md index 8625ca478002..b79862bd94ea 100644 --- a/website/content/en/docs/reference/threat-model.md +++ b/website/content/en/docs/reference/threat-model.md @@ -31,11 +31,11 @@ A Cluster Developer has the ability to create pods via `Deployments`, `ReplicaSe Karpenter has permissions to create and manage cloud instances. Karpenter has Kubernetes API permissions to create, update, and remove nodes, as well as evict pods. For a full list of the permissions, see the RBAC rules in the helm chart template. Karpenter also has AWS IAM permissions to create instances with IAM roles. -* [aggregate-clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.0/charts/karpenter/templates/aggregate-clusterrole.yaml) -* [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.0.0/charts/karpenter/templates/clusterrole-core.yaml) -* [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.0/charts/karpenter/templates/clusterrole.yaml) -* [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.0.0/charts/karpenter/templates/rolebinding.yaml) -* [role.yaml](https://github.com/aws/karpenter/blob/v1.0.0/charts/karpenter/templates/role.yaml) +* [aggregate-clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/aggregate-clusterrole.yaml) +* [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/clusterrole-core.yaml) +* [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/clusterrole.yaml) +* [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/rolebinding.yaml) +* [role.yaml](https://github.com/aws/karpenter/blob/v1.0.1/charts/karpenter/templates/role.yaml) ## Assumptions diff --git a/website/content/en/docs/upgrading/upgrade-guide.md b/website/content/en/docs/upgrading/upgrade-guide.md index a4dcaf3ad1c1..3259045631dd 100644 --- a/website/content/en/docs/upgrading/upgrade-guide.md +++ b/website/content/en/docs/upgrading/upgrade-guide.md @@ -11,7 +11,7 @@ Use your existing upgrade mechanisms to upgrade your core add-ons in Kubernetes This guide contains information needed to upgrade to the latest release of Karpenter, along with compatibility issues you need to be aware of when upgrading from earlier Karpenter versions. {{% alert title="Warning" color="warning" %}} -With the release of Karpenter v1.0.0, the Karpenter team will be dropping support for karpenter versions v0.32 and below. 
We recommend upgrading to the latest version of Karpenter and keeping Karpenter up-to-date for bug fixes and new features. +With the release of Karpenter v1.0, the Karpenter team will be dropping support for Karpenter versions v0.32 and below. We recommend upgrading to the latest version of Karpenter and keeping Karpenter up-to-date for bug fixes and new features. {{% /alert %}} ### CRD Upgrades diff --git a/website/content/en/docs/upgrading/v1-migration.md b/website/content/en/docs/upgrading/v1-migration.md index b21274d35172..fd1eb7944308 100644 --- a/website/content/en/docs/upgrading/v1-migration.md +++ b/website/content/en/docs/upgrading/v1-migration.md @@ -9,15 +9,14 @@ description: > This migration guide is designed to help you migrate Karpenter from v1beta1 APIs to v1 (v0.33-v0.37). Use this document as a reference to the changes that were introduced in this release and as a guide to how you need to update the manifests and other Karpenter objects you created in previous Karpenter releases. -Before you begin upgrading to `v1.0.0`, you should know that: +Before you begin upgrading to `v1.0`, you should know that: -* Every Karpenter upgrade from pre-v1.0.0 versions must upgrade to minor version `v1.0.0`. -* You must be upgrading to `v1.0.0` from a version of Karpenter that only supports v1beta1 APIs, e.g. NodePools, NodeClaims, and NodeClasses (v0.33+). -* Karpenter `v1.0.0`+ supports Karpenter v1 and v1beta1 APIs and will not work with earlier Provisioner, AWSNodeTemplate or Machine v1alpha1 APIs. Do not upgrade to `v1.0.0`+ without first [upgrading to `0.32.x`]({{}}) or later and then upgrading to v0.33. -* Version `v1.0.0` adds [conversion webhooks](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#webhook-conversion) to automatically pull the v1 API version of previously applied v1beta1 NodePools, EC2NodeClasses, and NodeClaims. Karpenter will stop serving the v1beta1 API version at v1.1.0 and will drop the conversion webhooks at that time. You will need to migrate all stored manifests to v1 API versions on Karpenter v1.0+. Keep in mind that this is a conversion and not dual support, which means that resources are updated in-place rather than migrated over from the previous version. +* Every Karpenter upgrade from pre-v1.0 versions must upgrade to minor version `v1.0`. +* You must be upgrading to `v1.0` from a version of Karpenter that only supports v1beta1 APIs, e.g. NodePools, NodeClaims, and NodeClasses (v0.33+). +* Karpenter `v1.0`+ supports Karpenter v1 and v1beta1 APIs and will not work with earlier Provisioner, AWSNodeTemplate or Machine v1alpha1 APIs. Do not upgrade to `v1.0`+ without first [upgrading to `0.32.x`]({{}}) or later and then upgrading to v0.33. +* Version `v1.0` adds [conversion webhooks](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#webhook-conversion) to automatically pull the v1 API version of previously applied v1beta1 NodePools, EC2NodeClasses, and NodeClaims. Karpenter will stop serving the v1beta1 API version at v1.1 and will drop the conversion webhooks at that time. You will need to migrate all stored manifests to v1 API versions on Karpenter v1.0+. Keep in mind that this is a conversion and not dual support, which means that resources are updated in-place rather than migrated over from the previous version. * If you need to rollback the upgrade to v1, you need to upgrade to a special patch version of the minor version you came from.
For instance, if you came from v0.33.5, you'll need to downgrade back to v0.33.6. More details on how to do this can be found in [Downgrading]({{}}). * Validate that you are running at least Kubernetes 1.25. Use the [compatibility matrix]({{}}) to confirm you are on a supported Kubernetes version. -* Karpenter runs a helm post-install-hook as part of upgrading to and from v1.0.0. If you're running Karpenter on a non x86_64 node, you'll need to update your `values.postInstallHook.image` values in your helm `values.yaml` file to point to a compatible image with kubectl. For instance, [an ARM compatible version](https://hub.docker.com/layers/bitnami/kubectl/1.30/images/sha256-d63c6609dd5c336fd036bd303fd4ce5f272e73ddd1923d32c12d62b7149067ed?context=explore). See the [Changelog]({{}}) for details about actions you should take before upgrading to v1.0 or v1.1. @@ -26,7 +25,7 @@ See the [Changelog]({{}}) for details about actions you shoul Please read through the entire procedure before beginning the upgrade. There are major changes in this upgrade, so please evaluate the list of breaking changes before continuing. {{% alert title="Note" color="warning" %}} -The upgrade guide will first require upgrading to your latest patch version prior to upgrade to v1.0.0. This will be to allow the conversion webhooks to operate and minimize downtime of the Karpenter controller when requesting the Karpenter custom resources. +The upgrade guide will first require upgrading to your latest patch version prior to upgrading to v1.0. This will be to allow the conversion webhooks to operate and minimize downtime of the Karpenter controller when requesting the Karpenter custom resources. {{% /alert %}} 1. Set environment variables for your cluster to upgrade to the latest patch version of the current Karpenter version you're running on: @@ -53,34 +52,50 @@ The upgrade guide will first require upgrading to your latest patch version prio The Karpenter version you are running must be between minor version `v0.33` and `v0.37`. To be able to roll back from Karpenter v1, you must roll back to one of the following patch release versions for your minor version, which will include the conversion webhooks for a smooth rollback: - * v0.37.1 - * v0.36.3 - * v0.35.6 - * v0.34.7 - * v0.33.6 + * v0.37.2 + * v0.36.4 + * v0.35.7 + * v0.34.8 + * v0.33.7 3. Review for breaking changes between v0.33 and v0.37: If you are already running Karpenter v0.37.x, you can skip this step. If you are running an earlier Karpenter version, you need to review the [Upgrade Guide]({{}}) for each minor release. -4. Set environment variables for upgrading to the latest patch version. Note that `v0.33.6` and `v0.34.7` both need to include the v prefix, whereas `v0.35+` should not. +4. Set environment variables for upgrading to the latest patch version. Note that `v0.33` and `v0.34` both need to include the v prefix, whereas `v0.35+` should not. - ```bash - export KARPENTER_VERSION= - ``` + ```bash + export KARPENTER_VERSION= + ``` -6. Apply the latest patch version of your current minor version's Custom Resource Definitions (CRDs): +5.
Apply the latest patch version of your current minor version's Custom Resource Definitions (CRDs): ```bash helm upgrade --install karpenter-crd oci://public.ecr.aws/karpenter/karpenter-crd --version "${KARPENTER_VERSION}" --namespace "${KARPENTER_NAMESPACE}" --create-namespace \ --set webhook.enabled=true \ - --set webhook.serviceName=karpenter \ - --set webhook.serviceNamespace="${KARPENTER_NAMESPACE}" \ + --set webhook.serviceName="karpenter" \ --set webhook.port=8443 ``` {{% alert title="Note" color="warning" %}} If you receive a `label validation error` or `annotation validation error` consult the [troubleshooting guide]({{}}) for steps to resolve. {{% /alert %}} -7. Upgrade Karpenter to the latest patch version of your current minor version's. At the end of this step, conversion webhooks will run but will not convert any version. +{{% alert title="Note" color="warning" %}} + +As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: + +```bash +export SERVICE_NAME= +export SERVICE_NAMESPACE= +export SERVICE_PORT= +# NodePools +kubectl patch customresourcedefinitions nodepools.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +# NodeClaims +kubectl patch customresourcedefinitions nodeclaims.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +# EC2NodeClass +kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +``` +{{% /alert %}} + +6. Upgrade Karpenter to the latest patch version of your current minor version. At the end of this step, conversion webhooks will run but will not convert any version. ```bash # Service account annotation can be dropped when using pod identity helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter --version ${KARPENTER_VERSION} --namespace ${KARPENTER_NAMESPACE} --create-namespace \ --wait ``` -8. Set environment variables for first upgrading to v1.0.0 +7. Set environment variables for upgrading to v1.0.1: ```bash - export KARPENTER_VERSION=1.0.0 + export KARPENTER_VERSION=1.0.1 ``` -9. Update your existing policy using the following to the v1.0.0 controller policy: +8. Update your existing policy to the v1.0.1 controller policy using the following commands: Notable Changes to the IAM Policy include additional tag-scoping for the `eks:eks-cluster-name` tag for instances and instance profiles. ```bash export TEMPOUT=$(mktemp) curl ... --parameter-overrides "ClusterName=${CLUSTER_NAME}" ``` -10. Apply the v1.0.0 Custom Resource Definitions (CRDs): +9.
Apply the v1.0.1 Custom Resource Definitions (CRDs): ```bash helm upgrade --install karpenter-crd oci://public.ecr.aws/karpenter/karpenter-crd --version "${KARPENTER_VERSION}" --namespace "${KARPENTER_NAMESPACE}" --create-namespace \ --set webhook.enabled=true \ - --set webhook.serviceName=karpenter \ - --set webhook.serviceNamespace="${KARPENTER_NAMESPACE}" \ + --set webhook.serviceName="karpenter" \ --set webhook.port=8443 ``` @@ -131,7 +145,24 @@ If you receive a `label validation error` or `annotation validation error` consu If you receive a `label validation error` or `annotation validation error` consult the [troubleshooting guide]({{}}) for steps to resolve. {{% /alert %}} -11. Upgrade Karpenter to the new version. At the end of this step, conversion webhooks run to convert the Karpenter CRDs to v1. +{{% alert title="Note" color="warning" %}} + +As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: + +```bash +export SERVICE_NAME= +export SERVICE_NAMESPACE= +export SERVICE_PORT= +# NodePools +kubectl patch customresourcedefinitions nodepools.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +# NodeClaims +kubectl patch customresourcedefinitions nodeclaims.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +# EC2NodeClass +kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +``` +{{% /alert %}} + +10. Upgrade Karpenter to the new version. At the end of this step, conversion webhooks run to convert the Karpenter CRDs to v1. ```bash # Service account annotation can be dropped when using pod identity helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter --version ${KARPENTER_VERSION} --namespace ${KARPENTER_NAMESPACE} --create-namespace \ @@ -150,9 +181,9 @@ Karpenter has deprecated and moved a number of Helm values as part of the v1 release. Ensure that you upgrade to the newer version of these helm values during your migration to v1. You can find details for all the settings that were moved in the [v1 Upgrade Reference]({{}}). {{% /alert %}} -12. Once upgraded, you won't need to roll your nodes to be compatible with v1.1.0, except if you have multiple NodePools with different `kubelet`s that are referencing the same EC2NodeClass. Karpenter has moved the `kubelet` to the EC2NodeClass in v1. NodePools with different `kubelet` referencing the same EC2NodeClass will be compatible with v1.0.0, but will not be in v1.1.0. +11. Once upgraded, you won't need to roll your nodes to be compatible with v1.1, except if you have multiple NodePools with different `kubelet`s that are referencing the same EC2NodeClass. Karpenter has moved the `kubelet` to the EC2NodeClass in v1. NodePools with different `kubelet` referencing the same EC2NodeClass will be compatible with v1.0, but will not be in v1.1. -When you have completed the migration to `1.0.0` CRDs, Karpenter will be able to serve both the `v1beta1` versions and the `v1` versions of NodePools, NodeClaims, and EC2NodeClasses. +When you have completed the migration to `1.0` CRDs, Karpenter will be able to serve both the `v1beta1` versions and the `v1` versions of NodePools, NodeClaims, and EC2NodeClasses.
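One way to sanity-check that dual-serving behavior is to request the same object at each API version; a sketch, where the NodePool name `default` is an assumption:

```bash
# Fetch one NodePool at the v1beta1 and v1 served versions (name is illustrative)
kubectl get nodepools.v1beta1.karpenter.sh default -o yaml | head -n 5
kubectl get nodepools.v1.karpenter.sh default -o yaml | head -n 5
```

Both commands should succeed while the conversion webhooks are in place; the second shows the converted v1 shape of the same stored object.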
+When you have completed the migration to `1.0` CRDs, Karpenter will be able to serve both the `v1beta1` versions and the `v1` versions of NodePools, NodeClaims, and EC2NodeClasses. The results of upgrading these CRDs include the following: * The storage version of these resources change to v1. After the upgrade, Karpenter starts converting these resources to v1 storage versions in real time. Users should experience no differences from this change. @@ -176,16 +207,16 @@ kubectl apply -f ec2nodeclass.yaml ## Changelog Refer to the [Full Changelog]({{}}) for more. -Because Karpenter `v1.0.0` will run both `v1` and `v1beta1` versions of NodePools and EC2NodeClasses, you don't immediately have to upgrade the stored manifests that you have to v1. +Because Karpenter `v1.0` will run both `v1` and `v1beta1` versions of NodePools and EC2NodeClasses, you don't immediately have to upgrade the stored manifests that you have to v1. However, in preparation for later Karpenter upgrades (which will not support `v1beta1`, review the following changes from v1beta1 to v1. -Karpenter `v1.0.0` changes are divided into two different categories: those you must do before `1.0.0` upgrades and those you must do before `1.1.0` upgrades. +Karpenter `v1.0` changes are divided into two different categories: those you must do before `1.0` upgrades and those you must do before `1.1` upgrades. -### Changes required before upgrading to `v1.0.0` +### Changes required before upgrading to `v1.0` Apply the following changes to your NodePools and EC2NodeClasses, as appropriate, before upgrading them to v1. -* **Deprecated annotations, labels and tags are removed for v1.0.0**: For v1, `karpenter.sh/do-not-consolidate` (annotation), `karpenter.sh/do-not-evict +* **Deprecated annotations, labels and tags are removed for v1.0**: For v1, `karpenter.sh/do-not-consolidate` (annotation), `karpenter.sh/do-not-evict (annotation)`, and `karpenter.sh/managed-by` (tag) all have support removed. The `karpenter.sh/managed-by`, which currently stores the cluster name in its value, is replaced by `eks:eks-cluster-name`, to allow for [EKS Pod Identity ABAC policies](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-abac.html). @@ -196,14 +227,14 @@ for [EKS Pod Identity ABAC policies](https://docs.aws.amazon.com/eks/latest/user * **Ubuntu AMIFamily Removed**: - Support for automatic AMI selection and UserData generation for Ubuntu has been dropped with Karpenter `v1.0.0`. + Support for automatic AMI selection and UserData generation for Ubuntu has been dropped with Karpenter `v1.0`. To continue using Ubuntu AMIs you will need to specify an AMI using `amiSelectorTerms`. UserData generation can be achieved using the AL2 AMIFamily which has an identical UserData format. However, compatibility is not guaranteed long-term and changes to either AL2 or Ubuntu's UserData format may introduce incompatibilities. If this occurs, the Custom AMIFamily should be used for Ubuntu and UserData will need to be entirely maintained by the user. - If you are upgrading to `v1.0.0` and already have v1beta1 Ubuntu EC2NodeClasses, all you need to do is specify `amiSelectorTerms` and Karpenter will translate your NodeClasses to the v1 equivalent (as shown below). + If you are upgrading to `v1.0` and already have v1beta1 Ubuntu EC2NodeClasses, all you need to do is specify `amiSelectorTerms` and Karpenter will translate your NodeClasses to the v1 equivalent (as shown below). 
Failure to specify `amiSelectorTerms` will result in the EC2NodeClass and all referencing NodePools showing as NotReady, causing Karpenter to ignore these NodePools and EC2NodeClasses for Provisioning and Drift. ```yaml @@ -238,14 +269,14 @@ for [EKS Pod Identity ABAC policies](https://docs.aws.amazon.com/eks/latest/user * A Custom amiFamily. You must ensure that you add the `karpenter.sh/unregistered:NoExecute` taint to the node in your UserData. * An Ubuntu AMI, as described earlier. -### Before upgrading to `v1.1.0` +### Before upgrading to `v1.1` -Apply the following changes to your NodePools and EC2NodeClasses, as appropriate, before upgrading them to `v1.1.0` (though okay to make these changes for `1.0.0`) +Apply the following changes to your NodePools and EC2NodeClasses, as appropriate, before upgrading them to `v1.1` (though it is okay to make these changes for `1.0`). -* **v1beta1 support gone**: In `v1.1.0`, v1beta1 is not supported. So you need to: +* **v1beta1 support gone**: In `v1.1`, v1beta1 is not supported. So you need to: * Migrate all Karpenter yaml files [NodePools]({{}}), [EC2NodeClasses]({{}}) to v1. * Know that all resources in the cluster also need to be on v1. It's possible (although unlikely) that some resources still may be stored as v1beta1 in ETCD if no writes had been made to them since the v1 upgrade. You could use a tool such as [kube-storage-version-migrator](https://github.com/kubernetes-sigs/kube-storage-version-migrator) to handle this. - * Know that you cannot rollback to v1beta1 once you have upgraded to `v1.1.0`. + * Know that you cannot rollback to v1beta1 once you have upgraded to `v1.1`. * **Kubelet Configuration**: If you have multiple NodePools pointing to the same EC2NodeClass that have different kubeletConfigurations, then you have to manually add more EC2NodeClasses and point their NodePools to them. This will induce drift and you will have to roll your cluster. @@ -260,11 +291,11 @@ Keep in mind that rollback, without replacing the Karpenter nodes, will not be s Once the Karpenter CRDs are upgraded to v1, conversion webhooks are needed to help convert APIs that are stored in etcd from v1 to v1beta1. Changes to the CRDs will also need to include at least the latest version of the CRD, in this case v1. The patch versions of the v1beta1 Karpenter controller that include the conversion webhooks include: -* v0.37.1 -* v0.36.3 -* v0.35.6 -* v0.34.7 -* v0.33.6 +* v0.37.2 +* v0.36.4 +* v0.35.7 +* v0.34.8 +* v0.33.7 {{% alert title="Note" color="warning" %}} When rolling back from v1, Karpenter will not retain data that was only valid in v1 APIs. For instance, if you were upgrading from v0.33.5 to v1, updated the `NodePool.Spec.Disruption.Budgets` field and then rolled back to v0.33.6, Karpenter would not retain the `NodePool.Spec.Disruption.Budgets` field, as that was introduced in v0.34.x. If you are configuring the kubelet field, and have removed the `compatibility.karpenter.sh/v1beta1-kubelet-conversion` annotation, rollback is not supported without replacing your nodes between EC2NodeClass and NodePool. {{% /alert %}} @@ -304,7 +335,7 @@ echo "${KARPENTER_NAMESPACE}" "${KARPENTER_VERSION}" "${CLUSTER_NAME}" "${TEMPOU 3.
Rollback the Karpenter Policy -**v0.33.6 and v0.34.7:** +**v0.33 and v0.34:** ```bash export TEMPOUT=$(mktemp) curl -fsSL https://raw.githubusercontent.com/aws/karpenter-provider-aws/"${KARPENTER_VERSION}"/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml > ${TEMPOUT} \ @@ -332,10 +363,26 @@ curl -fsSL https://raw.githubusercontent.com/aws/karpenter-provider-aws/v"${KARP helm upgrade --install karpenter-crd oci://public.ecr.aws/karpenter/karpenter-crd --version "${KARPENTER_VERSION}" --namespace "${KARPENTER_NAMESPACE}" --create-namespace \ --set webhook.enabled=true \ --set webhook.serviceName=karpenter \ - --set webhook.serviceNamespace="${KARPENTER_NAMESPACE}" \ --set webhook.port=8443 ``` +{{% alert title="Note" color="warning" %}} + +As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: + +```bash +export SERVICE_NAME= +export SERVICE_NAMESPACE= +export SERVICE_PORT= +# NodePools +kubectl patch customresourcedefinitions nodepools.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +# NodeClaims +kubectl patch customresourcedefinitions nodeclaims.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +# EC2NodeClass +kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +``` +{{% /alert %}} + 5. Rollback the Karpenter Controller ```bash diff --git a/website/content/en/preview/concepts/nodeclasses.md b/website/content/en/preview/concepts/nodeclasses.md index 0c3e1a1d76e5..cee750f1ac0a 100644 --- a/website/content/en/preview/concepts/nodeclasses.md +++ b/website/content/en/preview/concepts/nodeclasses.md @@ -399,7 +399,7 @@ AMIFamily does not impact which AMI is discovered, only the UserData generation {{% alert title="Ubuntu Support Dropped at v1" color="warning" %}} -Support for the Ubuntu AMIFamily has been dropped at Karpenter `v1.0.0`. +Support for the Ubuntu AMIFamily has been dropped at Karpenter `v1.0`. This means Karpenter no longer supports automatic AMI discovery and UserData generation for Ubuntu. To continue using Ubuntu AMIs, you will need to select Ubuntu AMIs using `amiSelectorTerms`. diff --git a/website/content/en/preview/upgrading/upgrade-guide.md b/website/content/en/preview/upgrading/upgrade-guide.md index a4dcaf3ad1c1..3259045631dd 100644 --- a/website/content/en/preview/upgrading/upgrade-guide.md +++ b/website/content/en/preview/upgrading/upgrade-guide.md @@ -11,7 +11,7 @@ Use your existing upgrade mechanisms to upgrade your core add-ons in Kubernetes This guide contains information needed to upgrade to the latest release of Karpenter, along with compatibility issues you need to be aware of when upgrading from earlier Karpenter versions. {{% alert title="Warning" color="warning" %}} -With the release of Karpenter v1.0.0, the Karpenter team will be dropping support for karpenter versions v0.32 and below. We recommend upgrading to the latest version of Karpenter and keeping Karpenter up-to-date for bug fixes and new features.
+With the release of Karpenter v1.0, the Karpenter team will be dropping support for Karpenter versions v0.32 and below. We recommend upgrading to the latest version of Karpenter and keeping Karpenter up-to-date for bug fixes and new features. {{% /alert %}} ### CRD Upgrades diff --git a/website/content/en/preview/upgrading/v1-migration.md b/website/content/en/preview/upgrading/v1-migration.md index b21274d35172..968842af6a09 100644 --- a/website/content/en/preview/upgrading/v1-migration.md +++ b/website/content/en/preview/upgrading/v1-migration.md @@ -9,15 +9,14 @@ description: > This migration guide is designed to help you migrate Karpenter from v1beta1 APIs to v1 (v0.33-v0.37). Use this document as a reference to the changes that were introduced in this release and as a guide to how you need to update the manifests and other Karpenter objects you created in previous Karpenter releases. -Before you begin upgrading to `v1.0.0`, you should know that: +Before you begin upgrading to `v1.0`, you should know that: -* Every Karpenter upgrade from pre-v1.0.0 versions must upgrade to minor version `v1.0.0`. -* You must be upgrading to `v1.0.0` from a version of Karpenter that only supports v1beta1 APIs, e.g. NodePools, NodeClaims, and NodeClasses (v0.33+). -* Karpenter `v1.0.0`+ supports Karpenter v1 and v1beta1 APIs and will not work with earlier Provisioner, AWSNodeTemplate or Machine v1alpha1 APIs. Do not upgrade to `v1.0.0`+ without first [upgrading to `0.32.x`]({{}}) or later and then upgrading to v0.33. -* Version `v1.0.0` adds [conversion webhooks](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#webhook-conversion) to automatically pull the v1 API version of previously applied v1beta1 NodePools, EC2NodeClasses, and NodeClaims. Karpenter will stop serving the v1beta1 API version at v1.1.0 and will drop the conversion webhooks at that time. You will need to migrate all stored manifests to v1 API versions on Karpenter v1.0+. Keep in mind that this is a conversion and not dual support, which means that resources are updated in-place rather than migrated over from the previous version. +* Every Karpenter upgrade from pre-v1.0 versions must upgrade to minor version `v1.0`. +* You must be upgrading to `v1.0` from a version of Karpenter that only supports v1beta1 APIs, e.g. NodePools, NodeClaims, and NodeClasses (v0.33+). +* Karpenter `v1.0`+ supports Karpenter v1 and v1beta1 APIs and will not work with earlier Provisioner, AWSNodeTemplate or Machine v1alpha1 APIs. Do not upgrade to `v1.0`+ without first [upgrading to `0.32.x`]({{}}) or later and then upgrading to v0.33. +* Version `v1.0` adds [conversion webhooks](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#webhook-conversion) to automatically pull the v1 API version of previously applied v1beta1 NodePools, EC2NodeClasses, and NodeClaims. Karpenter will stop serving the v1beta1 API version at v1.1 and will drop the conversion webhooks at that time. You will need to migrate all stored manifests to v1 API versions on Karpenter v1.0+. Keep in mind that this is a conversion and not dual support, which means that resources are updated in-place rather than migrated over from the previous version. * If you need to rollback the upgrade to v1, you need to upgrade to a special patch version of the minor version you came from. For instance, if you came from v0.33.5, you'll need to downgrade back to v0.33.6.
More details on how to do this can be found in [Downgrading]({{}}). * Validate that you are running at least Kubernetes 1.25. Use the [compatibility matrix]({{}}) to confirm you are on a supported Kubernetes version. -* Karpenter runs a helm post-install-hook as part of upgrading to and from v1.0.0. If you're running Karpenter on a non x86_64 node, you'll need to update your `values.postInstallHook.image` values in your helm `values.yaml` file to point to a compatible image with kubectl. For instance, [an ARM compatible version](https://hub.docker.com/layers/bitnami/kubectl/1.30/images/sha256-d63c6609dd5c336fd036bd303fd4ce5f272e73ddd1923d32c12d62b7149067ed?context=explore). See the [Changelog]({{}}) for details about actions you should take before upgrading to v1.0 or v1.1. @@ -26,7 +25,7 @@ See the [Changelog]({{}}) for details about actions you shoul Please read through the entire procedure before beginning the upgrade. There are major changes in this upgrade, so please evaluate the list of breaking changes before continuing. {{% alert title="Note" color="warning" %}} -The upgrade guide will first require upgrading to your latest patch version prior to upgrade to v1.0.0. This will be to allow the conversion webhooks to operate and minimize downtime of the Karpenter controller when requesting the Karpenter custom resources. +The upgrade guide will first require upgrading to your latest patch version prior to upgrading to v1.0. This will be to allow the conversion webhooks to operate and minimize downtime of the Karpenter controller when requesting the Karpenter custom resources. {{% /alert %}} 1. Set environment variables for your cluster to upgrade to the latest patch version of the current Karpenter version you're running on: @@ -53,34 +52,50 @@ The upgrade guide will first require upgrading to your latest patch version prio The Karpenter version you are running must be between minor version `v0.33` and `v0.37`. To be able to roll back from Karpenter v1, you must roll back to one of the following patch release versions for your minor version, which will include the conversion webhooks for a smooth rollback: - * v0.37.1 - * v0.36.3 - * v0.35.6 - * v0.34.7 - * v0.33.6 + * v0.37.2 + * v0.36.4 + * v0.35.7 + * v0.34.8 + * v0.33.7 3. Review for breaking changes between v0.33 and v0.37: If you are already running Karpenter v0.37.x, you can skip this step. If you are running an earlier Karpenter version, you need to review the [Upgrade Guide]({{}}) for each minor release. -4. Set environment variables for upgrading to the latest patch version. Note that `v0.33.6` and `v0.34.7` both need to include the v prefix, whereas `v0.35+` should not. +4. Set environment variables for upgrading to the latest patch version. Note that `v0.33` and `v0.34` both need to include the v prefix, whereas `v0.35+` should not. - ```bash - export KARPENTER_VERSION= - ``` + ```bash + export KARPENTER_VERSION= + ``` -6. Apply the latest patch version of your current minor version's Custom Resource Definitions (CRDs): +5.
Apply the latest patch version of your current minor version's Custom Resource Definitions (CRDs): ```bash helm upgrade --install karpenter-crd oci://public.ecr.aws/karpenter/karpenter-crd --version "${KARPENTER_VERSION}" --namespace "${KARPENTER_NAMESPACE}" --create-namespace \ --set webhook.enabled=true \ - --set webhook.serviceName=karpenter \ - --set webhook.serviceNamespace="${KARPENTER_NAMESPACE}" \ + --set webhook.serviceName="karpenter" \ --set webhook.port=8443 ``` {{% alert title="Note" color="warning" %}} If you receive a `label validation error` or `annotation validation error` consult the [troubleshooting guide]({{}}) for steps to resolve. {{% /alert %}} -7. Upgrade Karpenter to the latest patch version of your current minor version's. At the end of this step, conversion webhooks will run but will not convert any version. +{{% alert title="Note" color="warning" %}} + +As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: + +```bash +export SERVICE_NAME= +export SERVICE_NAMESPACE= +export SERVICE_PORT= +# NodePools +kubectl patch customresourcedefinitions nodepools.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +# NodeClaims +kubectl patch customresourcedefinitions nodeclaims.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +# EC2NodeClass +kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +``` +{{% /alert %}} + +6. Upgrade Karpenter to the latest patch version of your current minor version. At the end of this step, conversion webhooks will run but will not convert any version. ```bash # Service account annotation can be dropped when using pod identity helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter --version ${KARPENTER_VERSION} --namespace ${KARPENTER_NAMESPACE} --create-namespace \ --wait ``` -8. Set environment variables for first upgrading to v1.0.0 +7. Set environment variables for upgrading to v1.0.1: ```bash - export KARPENTER_VERSION=1.0.0 + export KARPENTER_VERSION=1.0.1 ``` -9. Update your existing policy using the following to the v1.0.0 controller policy: +8. Update your existing policy to the v1.0.1 controller policy using the following commands: Notable Changes to the IAM Policy include additional tag-scoping for the `eks:eks-cluster-name` tag for instances and instance profiles. ```bash export TEMPOUT=$(mktemp) curl ... --parameter-overrides "ClusterName=${CLUSTER_NAME}" ``` -10. Apply the v1.0.0 Custom Resource Definitions (CRDs): +9.
Apply the v1.0.1 Custom Resource Definitions (CRDs): ```bash helm upgrade --install karpenter-crd oci://public.ecr.aws/karpenter/karpenter-crd --version "${KARPENTER_VERSION}" --namespace "${KARPENTER_NAMESPACE}" --create-namespace \ --set webhook.enabled=true \ - --set webhook.serviceName=karpenter \ - --set webhook.serviceNamespace="${KARPENTER_NAMESPACE}" \ + --set webhook.serviceName="karpenter" \ --set webhook.port=8443 ``` @@ -131,7 +145,25 @@ If you receive a `label validation error` or `annotation validation error` consu If you receive a `label validation error` or `annotation validation error` consult the [troubleshooting guide]({{}}) for steps to resolve. {{% /alert %}} -11. Upgrade Karpenter to the new version. At the end of this step, conversion webhooks run to convert the Karpenter CRDs to v1. +{{% alert title="Note" color="warning" %}} + +As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: + +```bash +export SERVICE_NAME= +export SERVICE_NAMESPACE= +export SERVICE_PORT= +# NodePools +kubectl patch customresourcedefinitions nodepools.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +# NodeClaims +kubectl patch customresourcedefinitions nodeclaims.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +# EC2NodeClass +kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +``` +{{% /alert %}} + +10. Upgrade Karpenter to the new version. At the end of this step, conversion webhooks run to convert the Karpenter CRDs to v1. ```bash # Service account annotation can be dropped when using pod identity helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter --version ${KARPENTER_VERSION} --namespace ${KARPENTER_NAMESPACE} --create-namespace \ @@ -150,9 +182,9 @@ Karpenter has deprecated and moved a number of Helm values as part of the v1 release. Ensure that you upgrade to the newer version of these helm values during your migration to v1. You can find details for all the settings that were moved in the [v1 Upgrade Reference]({{}}). {{% /alert %}} -12. Once upgraded, you won't need to roll your nodes to be compatible with v1.1.0, except if you have multiple NodePools with different `kubelet`s that are referencing the same EC2NodeClass. Karpenter has moved the `kubelet` to the EC2NodeClass in v1. NodePools with different `kubelet` referencing the same EC2NodeClass will be compatible with v1.0.0, but will not be in v1.1.0. +11. Once upgraded, you won't need to roll your nodes to be compatible with v1.1, except if you have multiple NodePools with different `kubelet`s that are referencing the same EC2NodeClass. Karpenter has moved the `kubelet` to the EC2NodeClass in v1. NodePools with different `kubelet` referencing the same EC2NodeClass will be compatible with v1.0, but will not be in v1.1. -When you have completed the migration to `1.0.0` CRDs, Karpenter will be able to serve both the `v1beta1` versions and the `v1` versions of NodePools, NodeClaims, and EC2NodeClasses. +When you have completed the migration to `1.0` CRDs, Karpenter will be able to serve both the `v1beta1` versions and the `v1` versions of NodePools, NodeClaims, and EC2NodeClasses.
+When you have completed the migration to `1.0` CRDs, Karpenter will be able to serve both the `v1beta1` versions and the `v1` versions of NodePools, NodeClaims, and EC2NodeClasses. The results of upgrading these CRDs include the following: * The storage version of these resources change to v1. After the upgrade, Karpenter starts converting these resources to v1 storage versions in real time. Users should experience no differences from this change. @@ -176,16 +208,16 @@ kubectl apply -f ec2nodeclass.yaml ## Changelog Refer to the [Full Changelog]({{}}) for more. -Because Karpenter `v1.0.0` will run both `v1` and `v1beta1` versions of NodePools and EC2NodeClasses, you don't immediately have to upgrade the stored manifests that you have to v1. +Because Karpenter `v1.0` will run both `v1` and `v1beta1` versions of NodePools and EC2NodeClasses, you don't immediately have to upgrade the stored manifests that you have to v1. However, in preparation for later Karpenter upgrades (which will not support `v1beta1`, review the following changes from v1beta1 to v1. -Karpenter `v1.0.0` changes are divided into two different categories: those you must do before `1.0.0` upgrades and those you must do before `1.1.0` upgrades. +Karpenter `v1.0` changes are divided into two different categories: those you must do before `1.0` upgrades and those you must do before `1.1` upgrades. -### Changes required before upgrading to `v1.0.0` +### Changes required before upgrading to `v1.0` Apply the following changes to your NodePools and EC2NodeClasses, as appropriate, before upgrading them to v1. -* **Deprecated annotations, labels and tags are removed for v1.0.0**: For v1, `karpenter.sh/do-not-consolidate` (annotation), `karpenter.sh/do-not-evict +* **Deprecated annotations, labels and tags are removed for v1.0**: For v1, `karpenter.sh/do-not-consolidate` (annotation), `karpenter.sh/do-not-evict (annotation)`, and `karpenter.sh/managed-by` (tag) all have support removed. The `karpenter.sh/managed-by`, which currently stores the cluster name in its value, is replaced by `eks:eks-cluster-name`, to allow for [EKS Pod Identity ABAC policies](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-abac.html). @@ -196,14 +228,14 @@ for [EKS Pod Identity ABAC policies](https://docs.aws.amazon.com/eks/latest/user * **Ubuntu AMIFamily Removed**: - Support for automatic AMI selection and UserData generation for Ubuntu has been dropped with Karpenter `v1.0.0`. + Support for automatic AMI selection and UserData generation for Ubuntu has been dropped with Karpenter `v1.0`. To continue using Ubuntu AMIs you will need to specify an AMI using `amiSelectorTerms`. UserData generation can be achieved using the AL2 AMIFamily which has an identical UserData format. However, compatibility is not guaranteed long-term and changes to either AL2 or Ubuntu's UserData format may introduce incompatibilities. If this occurs, the Custom AMIFamily should be used for Ubuntu and UserData will need to be entirely maintained by the user. - If you are upgrading to `v1.0.0` and already have v1beta1 Ubuntu EC2NodeClasses, all you need to do is specify `amiSelectorTerms` and Karpenter will translate your NodeClasses to the v1 equivalent (as shown below). + If you are upgrading to `v1.0` and already have v1beta1 Ubuntu EC2NodeClasses, all you need to do is specify `amiSelectorTerms` and Karpenter will translate your NodeClasses to the v1 equivalent (as shown below). 
Failure to specify `amiSelectorTerms` will result in the EC2NodeClass and all referencing NodePools showing as NotReady, causing Karpenter to ignore these NodePools and EC2NodeClasses for Provisioning and Drift. ```yaml @@ -238,14 +270,14 @@ for [EKS Pod Identity ABAC policies](https://docs.aws.amazon.com/eks/latest/user * A Custom amiFamily. You must ensure that you add the `karpenter.sh/unregistered:NoExecute` taint to the node in your UserData. * An Ubuntu AMI, as described earlier. -### Before upgrading to `v1.1.0` +### Before upgrading to `v1.1` -Apply the following changes to your NodePools and EC2NodeClasses, as appropriate, before upgrading them to `v1.1.0` (though okay to make these changes for `1.0.0`) +Apply the following changes to your NodePools and EC2NodeClasses, as appropriate, before upgrading them to `v1.1` (though it is okay to make these changes for `1.0`). -* **v1beta1 support gone**: In `v1.1.0`, v1beta1 is not supported. So you need to: +* **v1beta1 support gone**: In `v1.1`, v1beta1 is not supported. So you need to: * Migrate all Karpenter yaml files [NodePools]({{}}), [EC2NodeClasses]({{}}) to v1. * Know that all resources in the cluster also need to be on v1. It's possible (although unlikely) that some resources still may be stored as v1beta1 in ETCD if no writes had been made to them since the v1 upgrade. You could use a tool such as [kube-storage-version-migrator](https://github.com/kubernetes-sigs/kube-storage-version-migrator) to handle this. - * Know that you cannot rollback to v1beta1 once you have upgraded to `v1.1.0`. + * Know that you cannot rollback to v1beta1 once you have upgraded to `v1.1`. * **Kubelet Configuration**: If you have multiple NodePools pointing to the same EC2NodeClass that have different kubeletConfigurations, then you have to manually add more EC2NodeClasses and point their NodePools to them. This will induce drift and you will have to roll your cluster. @@ -260,11 +292,11 @@ Keep in mind that rollback, without replacing the Karpenter nodes, will not be s Once the Karpenter CRDs are upgraded to v1, conversion webhooks are needed to help convert APIs that are stored in etcd from v1 to v1beta1. Changes to the CRDs will also need to include at least the latest version of the CRD, in this case v1. The patch versions of the v1beta1 Karpenter controller that include the conversion webhooks include: -* v0.37.1 -* v0.36.3 -* v0.35.6 -* v0.34.7 -* v0.33.6 +* v0.37.2 +* v0.36.4 +* v0.35.7 +* v0.34.8 +* v0.33.7 {{% alert title="Note" color="warning" %}} When rolling back from v1, Karpenter will not retain data that was only valid in v1 APIs. For instance, if you were upgrading from v0.33.5 to v1, updated the `NodePool.Spec.Disruption.Budgets` field and then rolled back to v0.33.6, Karpenter would not retain the `NodePool.Spec.Disruption.Budgets` field, as that was introduced in v0.34.x. If you are configuring the kubelet field, and have removed the `compatibility.karpenter.sh/v1beta1-kubelet-conversion` annotation, rollback is not supported without replacing your nodes between EC2NodeClass and NodePool. {{% /alert %}} @@ -304,7 +336,7 @@ echo "${KARPENTER_NAMESPACE}" "${KARPENTER_VERSION}" "${CLUSTER_NAME}" "${TEMPOU 3.
Rollback the Karpenter Policy -**v0.33.6 and v0.34.7:** +**v0.33 and v0.34:** ```bash export TEMPOUT=$(mktemp) curl -fsSL https://raw.githubusercontent.com/aws/karpenter-provider-aws/"${KARPENTER_VERSION}"/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml > ${TEMPOUT} \ @@ -332,10 +364,26 @@ curl -fsSL https://raw.githubusercontent.com/aws/karpenter-provider-aws/v"${KARP helm upgrade --install karpenter-crd oci://public.ecr.aws/karpenter/karpenter-crd --version "${KARPENTER_VERSION}" --namespace "${KARPENTER_NAMESPACE}" --create-namespace \ --set webhook.enabled=true \ --set webhook.serviceName=karpenter \ - --set webhook.serviceNamespace="${KARPENTER_NAMESPACE}" \ --set webhook.port=8443 ``` +{{% alert title="Note" color="warning" %}} + +As an alternative approach to updating the Karpenter CRDs conversion webhook configuration, you can patch the CRDs as follows: + +```bash +export SERVICE_NAME= +export SERVICE_NAMESPACE= +export SERVICE_PORT= +# NodePools +kubectl patch customresourcedefinitions nodepools.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +# NodeClaims +kubectl patch customresourcedefinitions nodeclaims.karpenter.sh -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +# EC2NodeClass +kubectl patch customresourcedefinitions ec2nodeclasses.karpenter.k8s.aws -p "{\"spec\":{\"conversion\":{\"webhook\":{\"clientConfig\":{\"service\": {\"name\": \"${SERVICE_NAME}\", \"namespace\": \"${SERVICE_NAMESPACE}\", \"port\":${SERVICE_PORT}}}}}}}" +``` +{{% /alert %}} + 5. Rollback the Karpenter Controller ```bash diff --git a/website/content/en/v0.36/concepts/nodeclasses.md b/website/content/en/v0.36/concepts/nodeclasses.md index 566bd3a5a38e..3af9b17f38fe 100644 --- a/website/content/en/v0.36/concepts/nodeclasses.md +++ b/website/content/en/v0.36/concepts/nodeclasses.md @@ -160,7 +160,7 @@ status: # Generated instance profile name from "role" instanceProfile: "${CLUSTER_NAME}-0123456778901234567789" ``` -Refer to the [NodePool docs]({{}}) for settings applicable to all providers. To explore various `EC2NodeClass` configurations, refer to the examples provided [in the Karpenter Github repository](https://github.com/aws/karpenter/blob/v0.36.0/examples/v1beta1/). +Refer to the [NodePool docs]({{}}) for settings applicable to all providers. To explore various `EC2NodeClass` configurations, refer to the examples provided [in the Karpenter GitHub repository](https://github.com/aws/karpenter/blob/v0.36.4/examples/v1beta1/). ## spec.amiFamily @@ -749,7 +749,7 @@ spec: chown -R ec2-user ~ec2-user/.ssh ``` -For more examples on configuring fields for different AMI families, see the [examples here](https://github.com/aws/karpenter/blob/v0.36.0/examples/v1beta1/). +For more examples on configuring fields for different AMI families, see the [examples here](https://github.com/aws/karpenter/blob/v0.36.4/examples/v1beta1/). Karpenter will merge the userData you specify with the default userData for that AMIFamily. See the [AMIFamily]({{< ref "#specamifamily" >}}) section for more details on these defaults. View the sections below to understand the different merge strategies for each AMIFamily.
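To see the outcome of that merge on a live cluster, one option is to decode the UserData that Karpenter rendered into its launch template; a sketch, assuming you have already identified the launch template (the ID below is a placeholder):

```bash
# Decode the merged UserData from the latest launch template version
aws ec2 describe-launch-template-versions \
  --launch-template-id lt-0123456789abcdef0 \
  --versions '$Latest' \
  --query 'LaunchTemplateVersions[0].LaunchTemplateData.UserData' \
  --output text | base64 --decode
```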
diff --git a/website/content/en/v0.36/concepts/nodepools.md b/website/content/en/v0.36/concepts/nodepools.md
index c8e77e588f10..4c357b8aef6d 100644
--- a/website/content/en/v0.36/concepts/nodepools.md
+++ b/website/content/en/v0.36/concepts/nodepools.md
@@ -22,7 +22,7 @@ Here are things you should know about NodePools:
* If Karpenter encounters a startup taint in the NodePool, it will be applied to nodes that are provisioned, but pods do not need to tolerate the taint. Karpenter assumes that the taint is temporary and some other system will remove it.
* It is recommended to create NodePools that are mutually exclusive, so that no Pod matches multiple NodePools. If multiple NodePools are matched, Karpenter will use the NodePool with the highest [weight](#specweight).
-For some example `NodePool` configurations, see the [examples in the Karpenter GitHub repository](https://github.com/aws/karpenter/blob/v0.36.0/examples/v1beta1/).
+For some example `NodePool` configurations, see the [examples in the Karpenter GitHub repository](https://github.com/aws/karpenter/blob/v0.36.4/examples/v1beta1/).

```yaml
apiVersion: karpenter.sh/v1beta1
@@ -72,7 +72,7 @@ spec:
 operator: In
 values: ["c", "m", "r"]
 # minValues here enforces the scheduler to consider at least that number of unique instance-category to schedule the pods.
-      # This field is ALPHA and can be dropped or replaced at any time 
+      # This field is ALPHA and can be dropped or replaced at any time
 minValues: 2
- key: "karpenter.k8s.aws/instance-family"
 operator: In
diff --git a/website/content/en/v0.36/faq.md b/website/content/en/v0.36/faq.md
index 1391de1dc43c..f8b5533dde7c 100644
--- a/website/content/en/v0.36/faq.md
+++ b/website/content/en/v0.36/faq.md
@@ -14,7 +14,7 @@ See [Configuring NodePools]({{< ref "./concepts/#configuring-nodepools" >}}) for
AWS is the first cloud provider supported by Karpenter, although it is designed to be used with other cloud providers as well.
### Can I write my own cloud provider for Karpenter?
-Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v0.36.2/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.
+Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v0.36.4/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.
### What operating system nodes does Karpenter deploy?
Karpenter uses the OS defined by the [AMI Family in your EC2NodeClass]({{< ref "./concepts/nodeclasses#specamifamily" >}}).
@@ -26,7 +26,7 @@ Karpenter has multiple mechanisms for configuring the [operating system]({{< ref
Karpenter is flexible to multi-architecture configurations using [well known labels]({{< ref "./concepts/scheduling/#supported-labels">}}).
### What RBAC access is required?
-All the required RBAC rules can be found in the Helm chart template. 
See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v0.36.2/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.36.2/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v0.36.2/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v0.36.2/charts/karpenter/templates/role.yaml) files for details.
+All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v0.36.4/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.36.4/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v0.36.4/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v0.36.4/charts/karpenter/templates/role.yaml) files for details.

### Can I run Karpenter outside of a Kubernetes cluster?
Yes, as long as the controller has network and IAM/RBAC access to the Kubernetes API and your provider API.

diff --git a/website/content/en/v0.36/getting-started/getting-started-with-karpenter/_index.md b/website/content/en/v0.36/getting-started/getting-started-with-karpenter/_index.md
index b4f5f5fcf72e..82ae722ce5fb 100644
--- a/website/content/en/v0.36/getting-started/getting-started-with-karpenter/_index.md
+++ b/website/content/en/v0.36/getting-started/getting-started-with-karpenter/_index.md
@@ -45,7 +45,7 @@ After setting up the tools, set the Karpenter and Kubernetes version:

```bash
export KARPENTER_NAMESPACE="kube-system"
-export KARPENTER_VERSION="0.36.2"
+export KARPENTER_VERSION="0.36.4"
export K8S_VERSION="1.29"
```

@@ -112,13 +112,13 @@ See [Enabling Windows support](https://docs.aws.amazon.com/eks/latest/userguide/
As the OCI Helm chart is signed by [Cosign](https://github.com/sigstore/cosign) as part of the release process, you can verify the chart before installing it by running the following command.

```bash
-cosign verify public.ecr.aws/karpenter/karpenter:0.36.2 \
+cosign verify public.ecr.aws/karpenter/karpenter:0.36.4 \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  --certificate-identity-regexp='https://github\.com/aws/karpenter-provider-aws/\.github/workflows/release\.yaml@.+' \
  --certificate-github-workflow-repository=aws/karpenter-provider-aws \
  --certificate-github-workflow-name=Release \
-  --certificate-github-workflow-ref=refs/tags/v0.36.2 \
-  --annotations version=0.36.2
+  --certificate-github-workflow-ref=refs/tags/v0.36.4 \
+  --annotations version=0.36.4
```

{{% alert title="DNS Policy Notice" color="warning" %}}
diff --git a/website/content/en/v0.36/getting-started/migrating-from-cas/_index.md b/website/content/en/v0.36/getting-started/migrating-from-cas/_index.md
index 5b9e07ea0f1f..75160dab6732 100644
--- a/website/content/en/v0.36/getting-started/migrating-from-cas/_index.md
+++ b/website/content/en/v0.36/getting-started/migrating-from-cas/_index.md
@@ -92,7 +92,7 @@ One for your Karpenter node role and one for your existing node group.
First set the Karpenter release you want to deploy.
```bash
-export KARPENTER_VERSION="0.36.2"
+export KARPENTER_VERSION="0.36.4"
```

We can now generate a full Karpenter deployment yaml from the Helm chart.
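As a sketch of that step, assuming the environment variables exported above and the chart's standard `settings.clusterName` value (the guide's full script passes additional values, such as the service account role annotation, so treat this as illustrative):

```bash
# Render the chart to a local manifest rather than installing it directly,
# so the generated deployment can be reviewed before being applied.
helm template karpenter oci://public.ecr.aws/karpenter/karpenter \
  --version "${KARPENTER_VERSION}" \
  --namespace "${KARPENTER_NAMESPACE}" \
  --set settings.clusterName="${CLUSTER_NAME}" \
  > karpenter.yaml
```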
@@ -132,7 +132,7 @@ Now that our deployment is ready we can create the karpenter namespace, create t
## Create default NodePool

-We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePool](https://github.com/aws/karpenter/tree/v0.36.2/examples/v1beta1) for specific needs.
+We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePools](https://github.com/aws/karpenter/tree/v0.36.4/examples/v1beta1) for specific needs.

{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step10-create-nodepool.sh" language="bash" %}}

diff --git a/website/content/en/v0.36/reference/cloudformation.md b/website/content/en/v0.36/reference/cloudformation.md
index a93c285fe9ac..9360120438c5 100644
--- a/website/content/en/v0.36/reference/cloudformation.md
+++ b/website/content/en/v0.36/reference/cloudformation.md
@@ -17,7 +17,7 @@ These descriptions should allow you to understand:
To download a particular version of `cloudformation.yaml`, set the version and use `curl` to pull the file to your local system:

```bash
-export KARPENTER_VERSION="0.36.2"
+export KARPENTER_VERSION="0.36.4"
curl https://raw.githubusercontent.com/aws/karpenter-provider-aws/v"${KARPENTER_VERSION}"/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml > cloudformation.yaml
```

diff --git a/website/content/en/v0.36/reference/threat-model.md b/website/content/en/v0.36/reference/threat-model.md
index 84a4fefb1cef..a5a741248f95 100644
--- a/website/content/en/v0.36/reference/threat-model.md
+++ b/website/content/en/v0.36/reference/threat-model.md
@@ -31,11 +31,11 @@ A Cluster Developer has the ability to create pods via `Deployments`, `ReplicaSe
Karpenter has permissions to create and manage cloud instances. Karpenter has Kubernetes API permissions to create, update, and remove nodes, as well as evict pods. For a full list of the permissions, see the RBAC rules in the helm chart template. Karpenter also has AWS IAM permissions to create instances with IAM roles.
-* [aggregate-clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.36.2/charts/karpenter/templates/aggregate-clusterrole.yaml) -* [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v0.36.2/charts/karpenter/templates/clusterrole-core.yaml) -* [clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.36.2/charts/karpenter/templates/clusterrole.yaml) -* [rolebinding.yaml](https://github.com/aws/karpenter/blob/v0.36.2/charts/karpenter/templates/rolebinding.yaml) -* [role.yaml](https://github.com/aws/karpenter/blob/v0.36.2/charts/karpenter/templates/role.yaml) +* [aggregate-clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.36.4/charts/karpenter/templates/aggregate-clusterrole.yaml) +* [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v0.36.4/charts/karpenter/templates/clusterrole-core.yaml) +* [clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.36.4/charts/karpenter/templates/clusterrole.yaml) +* [rolebinding.yaml](https://github.com/aws/karpenter/blob/v0.36.4/charts/karpenter/templates/rolebinding.yaml) +* [role.yaml](https://github.com/aws/karpenter/blob/v0.36.4/charts/karpenter/templates/role.yaml) ## Assumptions diff --git a/website/content/en/v0.36/upgrading/upgrade-guide.md b/website/content/en/v0.36/upgrading/upgrade-guide.md index c3e52cab6fac..3a3905dec00c 100644 --- a/website/content/en/v0.36/upgrading/upgrade-guide.md +++ b/website/content/en/v0.36/upgrading/upgrade-guide.md @@ -28,9 +28,9 @@ If you get the error `invalid ownership metadata; label validation error:` while In general, you can reapply the CRDs in the `crds` directory of the Karpenter Helm chart: ```shell -kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.36.2/pkg/apis/crds/karpenter.sh_nodepools.yaml -kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.36.2/pkg/apis/crds/karpenter.sh_nodeclaims.yaml -kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.36.2/pkg/apis/crds/karpenter.k8s.aws_ec2nodeclasses.yaml +kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.36.4/pkg/apis/crds/karpenter.sh_nodepools.yaml +kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.36.4/pkg/apis/crds/karpenter.sh_nodeclaims.yaml +kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.36.4/pkg/apis/crds/karpenter.k8s.aws_ec2nodeclasses.yaml ```
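After reapplying the CRDs, a quick sanity check (a sketch; not part of the upgrade guide itself) is to confirm that all three are registered and report an `Established` condition:

```bash
# Verify the reapplied Karpenter CRDs exist and are Established.
for crd in nodepools.karpenter.sh nodeclaims.karpenter.sh ec2nodeclasses.karpenter.k8s.aws; do
  kubectl get crd "${crd}" \
    -o jsonpath='{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Established")].status}{"\n"}'
done
```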