Merge branch 'main' into maskautherror
edibble21 authored Dec 18, 2024
2 parents 113e643 + 83c7d4a commit 703d2bf
Showing 30 changed files with 102 additions and 52 deletions.
1 change: 1 addition & 0 deletions ADOPTERS.md
@@ -47,6 +47,7 @@ If you are open to others contacting you about your use of Karpenter on Slack, a
| Rapid7 | Using Karpenter across all of our Kubernetes infrastructure for efficient autoscaling, both in terms of speed and cost | `@arobinson`, `@Ross Kirk`, `@Ryan Williams` | [Homepage](https://www.rapid7.com/) |
| Sendcloud | Using Karpenter to scale our k8s clusters for Europe’s #1 shipping automation platform | N/A | [Homepage](https://www.sendcloud.com/) |
| Sentra | Using Karpenter to scale our EKS clusters, running our platform and workflows while maximizing cost-efficiency with minimal operational overhead | `@Roei Jacobovich` | [Homepage](https://sentra.io/) |
| SternumIOT | Using Karpenter to efficiently autoscale & manage our EKS clusters with GitOps, optimizing node lifecycle management for both performance and cost-effectiveness across our Kubernetes workloads | `@itayvolo` `@amitde69` | [SternumIOT](https://sternumiot.com/) |
| Stone Pagamentos | Using Karpenter to do smart sizing of our clusters | `@fabiano-amaral` | [Stone Pagamentos](https://www.stone.com.br/) |
| Stytch | Powering the scaling needs of Stytch's authentication and user-management APIs | `@Elijah Chanakira`, `@Ovadia Harary` | [Homepage](https://www.stytch.com/) |
| Superbexperience | Using Karpenter to scale the k8s clusters running our Guest Experience Management platform | `@Wernich Bekker` | [Homepage](https://www.superbexperience.com/) |
1 change: 1 addition & 0 deletions Makefile
@@ -107,6 +107,7 @@ verify: tidy download ## Verify code. Includes dependencies, linting, formatting
bash -c 'source ./hack/validation/requirements.sh && injectDomainRequirementRestrictions "karpenter.k8s.aws"'
bash -c 'source ./hack/validation/labels.sh && injectDomainLabelRestrictions "karpenter.k8s.aws"'
cp pkg/apis/crds/* charts/karpenter-crd/templates
hack/mutation/crd_annotations.sh
hack/github/dependabot.sh
$(foreach dir,$(MOD_DIRS),cd $(dir) && golangci-lint run $(newline))
@git diff --quiet ||\
4 changes: 2 additions & 2 deletions charts/karpenter-crd/Chart.yaml
@@ -2,8 +2,8 @@ apiVersion: v2
name: karpenter-crd
description: A Helm chart for Karpenter Custom Resource Definitions (CRDs).
type: application
-version: 1.1.0
-appVersion: 1.1.0
+version: 1.1.1
+appVersion: 1.1.1
keywords:
- cluster
- node
@@ -3,6 +3,9 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
{{- with .Values.additionalAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
controller-gen.kubebuilder.io/version: v0.16.5
name: ec2nodeclasses.karpenter.k8s.aws
spec:
3 changes: 3 additions & 0 deletions charts/karpenter-crd/templates/karpenter.sh_nodeclaims.yaml
@@ -3,6 +3,9 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
{{- with .Values.additionalAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
controller-gen.kubebuilder.io/version: v0.16.5
name: nodeclaims.karpenter.sh
spec:
3 changes: 3 additions & 0 deletions charts/karpenter-crd/templates/karpenter.sh_nodepools.yaml
@@ -3,6 +3,9 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
{{- with .Values.additionalAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
controller-gen.kubebuilder.io/version: v0.16.5
name: nodepools.karpenter.sh
spec:
2 changes: 2 additions & 0 deletions charts/karpenter-crd/values.yaml
@@ -0,0 +1,2 @@
# -- Additional annotations for the custom resource definitions.
additionalAnnotations: {}
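The new value gives consumers of the karpenter-crd chart a hook for annotating the generated CRDs. As a hedged sketch (the annotation key below is purely illustrative and not part of this change), a user-supplied values file might look like:

```yaml
# Hypothetical override values for the karpenter-crd chart
additionalAnnotations:
  # Example only: lets a GitOps tool such as Argo CD track CRDs it did not create
  argocd.argoproj.io/tracking-id: "karpenter-crd:apiextensions.k8s.io/CustomResourceDefinition"
```

Rendered through the `{{- with .Values.additionalAnnotations }}` blocks added to the templates above, these keys end up under `metadata.annotations` of every CRD in the chart.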
4 changes: 2 additions & 2 deletions charts/karpenter/Chart.yaml
@@ -2,8 +2,8 @@ apiVersion: v2
name: karpenter
description: A Helm chart for Karpenter, an open-source node provisioning project built for Kubernetes.
type: application
-version: 1.1.0
-appVersion: 1.1.0
+version: 1.1.1
+appVersion: 1.1.1
keywords:
- cluster
- node
15 changes: 8 additions & 7 deletions charts/karpenter/README.md
@@ -2,7 +2,7 @@

A Helm chart for Karpenter, an open-source node provisioning project built for Kubernetes.

-![Version: 1.1.0](https://img.shields.io/badge/Version-1.1.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.1.0](https://img.shields.io/badge/AppVersion-1.1.0-informational?style=flat-square)
+![Version: 1.1.1](https://img.shields.io/badge/Version-1.1.1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.1.1](https://img.shields.io/badge/AppVersion-1.1.1-informational?style=flat-square)

## Documentation

@@ -15,7 +15,7 @@ You can follow the detailed installation instruction in the [documentation](http
```bash
helm upgrade --install --namespace karpenter --create-namespace \
karpenter oci://public.ecr.aws/karpenter/karpenter \
-  --version 1.1.0 \
+  --version 1.1.1 \
--set "serviceAccount.annotations.eks\.amazonaws\.com/role-arn=${KARPENTER_IAM_ROLE_ARN}" \
--set settings.clusterName=${CLUSTER_NAME} \
--set settings.interruptionQueue=${CLUSTER_NAME} \
@@ -27,13 +27,13 @@ helm upgrade --install --namespace karpenter --create-namespace \
As the OCI Helm chart is signed by [Cosign](https://github.com/sigstore/cosign) as part of the release process you can verify the chart before installing it by running the following command.

```shell
-cosign verify public.ecr.aws/karpenter/karpenter:1.1.0 \
+cosign verify public.ecr.aws/karpenter/karpenter:1.1.1 \
--certificate-oidc-issuer=https://token.actions.githubusercontent.com \
--certificate-identity-regexp='https://github\.com/aws/karpenter-provider-aws/\.github/workflows/release\.yaml@.+' \
--certificate-github-workflow-repository=aws/karpenter-provider-aws \
--certificate-github-workflow-name=Release \
-  --certificate-github-workflow-ref=refs/tags/v1.1.0 \
-  --annotations version=1.1.0
+  --certificate-github-workflow-ref=refs/tags/v1.1.1 \
+  --annotations version=1.1.1
```

## Values
@@ -49,9 +49,9 @@ cosign verify public.ecr.aws/karpenter/karpenter:1.1.0 \
| controller.envFrom | list | `[]` | |
| controller.extraVolumeMounts | list | `[]` | Additional volumeMounts for the controller pod. |
| controller.healthProbe.port | int | `8081` | The container port to use for http health probe. |
-| controller.image.digest | string | `"sha256:51bca600197c7c6e6e0838549664b2c12c3f8dd4b23744ab28202ae97ca5aed1"` | SHA256 digest of the controller image. |
+| controller.image.digest | string | `"sha256:fe383abf1dbc79f164d1cbcfd8edaaf7ce97a43fbd6cb70176011ff99ce57523"` | SHA256 digest of the controller image. |
| controller.image.repository | string | `"public.ecr.aws/karpenter/controller"` | Repository path to the controller image. |
-| controller.image.tag | string | `"1.1.0"` | Tag of the controller image. |
+| controller.image.tag | string | `"1.1.1"` | Tag of the controller image. |
| controller.metrics.port | int | `8080` | The container port to use for metrics. |
| controller.resources | object | `{}` | Resources for the controller pod. |
| controller.sidecarContainer | list | `[]` | Additional sidecarContainer config |
@@ -76,6 +76,7 @@ cosign verify public.ecr.aws/karpenter/karpenter:1.1.0 \
| priorityClassName | string | `"system-cluster-critical"` | PriorityClass name for the pod. |
| replicas | int | `2` | Number of replicas. |
| revisionHistoryLimit | int | `10` | The number of old ReplicaSets to retain to allow rollback. |
| schedulerName | string | `"default-scheduler"` | Specify which Kubernetes scheduler should dispatch the pod. |
| service.annotations | object | `{}` | Additional annotations for the Service. |
| serviceAccount.annotations | object | `{}` | Additional annotations for the ServiceAccount. |
| serviceAccount.create | bool | `true` | Specifies if a ServiceAccount should be created. |
2 changes: 1 addition & 1 deletion charts/karpenter/values.yaml
@@ -108,7 +108,7 @@ controller:
# -- Repository path to the controller image.
repository: public.ecr.aws/karpenter/controller
# -- Tag of the controller image.
-tag: 1.1.0
+tag: 1.1.1
# -- SHA256 digest of the controller image.
digest: sha256:51bca600197c7c6e6e0838549664b2c12c3f8dd4b23744ab28202ae97ca5aed1
# -- Additional environment variables for the controller pod.
9 changes: 9 additions & 0 deletions hack/mutation/crd_annotations.sh
@@ -0,0 +1,9 @@
#!/usr/bin/env bash

# Add additional annotations variable to the CRDS

CRDS="charts/karpenter-crd/templates/*.yaml"
for CRD in $CRDS
do
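# awk prints every line and, immediately after the first line matching " annotations:" (the CRD's metadata.annotations key), injects the additionalAnnotations templating block; the n counter ensures the block is injected only once per file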
awk '{print} / annotations:/ && !n {print " {{- with .Values.additionalAnnotations }}\n {{- toYaml . | nindent 4 }}\n {{- end }}"; n++}' "$CRD" > "$CRD.new" && mv "$CRD.new" "$CRD"
done
6 changes: 5 additions & 1 deletion pkg/providers/instanceprofile/instanceprofile.go
@@ -17,6 +17,7 @@ package instanceprofile
import (
"context"
"fmt"
"strings"

"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/iam"
@@ -99,9 +100,12 @@ func (p *DefaultProvider) Create(ctx context.Context, m ResourceOwner) (string,
return "", fmt.Errorf("removing role %q for instance profile %q, %w", aws.ToString(instanceProfile.Roles[0].RoleName), profileName, err)
}
}
// If the role has a path, ignore the path and take the role name only since AddRoleToInstanceProfile
// does not support paths in the role name.
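// For example, a role value of "CustomPath/NodeRole" (as exercised in the suite_test.go cases added below) resolves to "NodeRole".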
instanceProfileRoleName := lo.LastOr(strings.Split(m.InstanceProfileRole(), "/"), m.InstanceProfileRole())
if _, err = p.iamapi.AddRoleToInstanceProfile(ctx, &iam.AddRoleToInstanceProfileInput{
InstanceProfileName: aws.String(profileName),
-RoleName: aws.String(m.InstanceProfileRole()),
+RoleName: aws.String(instanceProfileRoleName),
}); err != nil {
return "", fmt.Errorf("adding role %q to instance profile %q, %w", m.InstanceProfileRole(), profileName, err)
}
21 changes: 21 additions & 0 deletions pkg/providers/instanceprofile/suite_test.go
@@ -16,8 +16,11 @@ package instanceprofile_test

import (
"context"
"fmt"
"testing"

"github.com/aws/aws-sdk-go-v2/aws"

v1 "github.com/aws/karpenter-provider-aws/pkg/apis/v1"

"sigs.k8s.io/karpenter/pkg/test/v1alpha1"
@@ -35,6 +38,8 @@ import (
. "sigs.k8s.io/karpenter/pkg/utils/testing"
)

const nodeRole = "NodeRole"

var ctx context.Context
var stop context.CancelFunc
var env *coretest.Environment
@@ -100,4 +105,20 @@ var _ = Describe("InstanceProfileProvider", func() {
Expect(instanceProfile).ToNot(BeNil())
Expect(awsEnv.IAMAPI.InstanceProfiles[instanceProfile].Tags).To(HaveLen(0))
})
It("should support IAM roles without custom paths", func() {
nodeClass.Spec.Role = nodeRole
instanceProfile, err := awsEnv.InstanceProfileProvider.Create(ctx, &nodeClass)
Expect(err).To(BeNil())
Expect(instanceProfile).ToNot(BeNil())
Expect(awsEnv.IAMAPI.InstanceProfiles[instanceProfile].Roles).To(HaveLen(1))
Expect(aws.ToString(awsEnv.IAMAPI.InstanceProfiles[instanceProfile].Roles[0].RoleName)).To(Equal(nodeRole))
})
It("should support IAM roles with custom paths", func() {
nodeClass.Spec.Role = fmt.Sprintf("CustomPath/%s", nodeRole)
instanceProfile, err := awsEnv.InstanceProfileProvider.Create(ctx, &nodeClass)
Expect(err).To(BeNil())
Expect(instanceProfile).ToNot(BeNil())
Expect(awsEnv.IAMAPI.InstanceProfiles[instanceProfile].Roles).To(HaveLen(1))
Expect(aws.ToString(awsEnv.IAMAPI.InstanceProfiles[instanceProfile].Roles[0].RoleName)).To(Equal(nodeRole))
})
})
7 changes: 4 additions & 3 deletions website/content/en/docs/concepts/nodeclasses.md
@@ -207,7 +207,7 @@ status:
status: "True"
type: Ready
```
-Refer to the [NodePool docs]({{<ref "./nodepools" >}}) for settings applicable to all providers. To explore various `EC2NodeClass` configurations, refer to the examples provided [in the Karpenter Github repository](https://github.com/aws/karpenter/blob/v1.1.0/examples/v1/).
+Refer to the [NodePool docs]({{<ref "./nodepools" >}}) for settings applicable to all providers. To explore various `EC2NodeClass` configurations, refer to the examples provided [in the Karpenter Github repository](https://github.com/aws/karpenter/blob/v1.1.1/examples/v1/).


## spec.kubelet
@@ -1041,7 +1041,7 @@ spec:
chown -R ec2-user ~ec2-user/.ssh
```

-For more examples on configuring fields for different AMI families, see the [examples here](https://github.com/aws/karpenter/blob/v1.1.0/examples/v1).
+For more examples on configuring fields for different AMI families, see the [examples here](https://github.com/aws/karpenter/blob/v1.1.1/examples/v1).

Karpenter will merge the userData you specify with the default userData for that AMIFamily. See the [AMIFamily]({{< ref "#specamifamily" >}}) section for more details on these defaults. View the sections below to understand the different merge strategies for each AMIFamily.

@@ -1464,7 +1464,7 @@ status:

## status.amis

-[`status.amis`]({{< ref "#statusamis" >}}) contains the resolved `id`, `name`, and `requirements` of either the default AMIs for the [`spec.amiFamily`]({{< ref "#specamifamily" >}}) or the AMIs selected by the [`spec.amiSelectorTerms`]({{< ref "#specamiselectorterms" >}}) if this field is specified.
+[`status.amis`]({{< ref "#statusamis" >}}) contains the resolved `id`, `name`, `requirements`, and the `deprecated` status of either the default AMIs for the [`spec.amiFamily`]({{< ref "#specamifamily" >}}) or the AMIs selected by the [`spec.amiSelectorTerms`]({{< ref "#specamiselectorterms" >}}) if this field is specified. The `deprecated` status will be shown for resolved AMIs that are deprecated.

#### Examples

@@ -1529,6 +1529,7 @@ status:
amis:
- id: ami-01234567890123456
name: custom-ami-amd64
deprecated: true
requirements:
- key: kubernetes.io/arch
operator: In
2 changes: 1 addition & 1 deletion website/content/en/docs/concepts/nodepools.md
@@ -27,7 +27,7 @@ Here are things you should know about NodePools:
Objects for setting Kubelet features have been moved from the NodePool spec to the EC2NodeClasses spec, to not require other Karpenter providers to support those features.
{{% /alert %}}
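As a hedged illustration of the note above (field names assumed from the EC2NodeClass documentation rather than taken from this diff), kubelet settings are now declared on the EC2NodeClass instead of the NodePool:

```yaml
# Illustrative sketch only: kubelet configuration on a v1 EC2NodeClass
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  kubelet:
    maxPods: 110
```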

-For some example `NodePool` configurations, see the [examples in the Karpenter GitHub repository](https://github.com/aws/karpenter/blob/v1.1.0/examples/v1/).
+For some example `NodePool` configurations, see the [examples in the Karpenter GitHub repository](https://github.com/aws/karpenter/blob/v1.1.1/examples/v1/).

```yaml
apiVersion: karpenter.sh/v1
4 changes: 2 additions & 2 deletions website/content/en/docs/faq.md
@@ -17,7 +17,7 @@ See [Configuring NodePools]({{< ref "./concepts/#configuring-nodepools" >}}) for
AWS is the first cloud provider supported by Karpenter, although it is designed to be used with other cloud providers as well.

### Can I write my own cloud provider for Karpenter?
-Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v1.1.0/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.
+Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v1.1.1/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.

### What operating system nodes does Karpenter deploy?
Karpenter uses the OS defined by the [AMI Family in your EC2NodeClass]({{< ref "./concepts/nodeclasses#specamifamily" >}}).
@@ -29,7 +29,7 @@ Karpenter has multiple mechanisms for configuring the [operating system]({{< ref
Karpenter is flexible to multi-architecture configurations using [well known labels]({{< ref "./concepts/scheduling/#supported-labels">}}).

### What RBAC access is required?
-All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.1.0/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.1.0/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.1.0/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v1.1.0/charts/karpenter/templates/role.yaml) files for details.
+All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.1.1/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.1.1/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.1.1/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v1.1.1/charts/karpenter/templates/role.yaml) files for details.

### Can I run Karpenter outside of a Kubernetes cluster?
Yes, as long as the controller has network and IAM/RBAC access to the Kubernetes API and your provider API.
@@ -48,7 +48,7 @@ After setting up the tools, set the Karpenter and Kubernetes version:

```bash
export KARPENTER_NAMESPACE="kube-system"
-export KARPENTER_VERSION="1.1.0"
+export KARPENTER_VERSION="1.1.1"
export K8S_VERSION="1.31"
```

@@ -115,13 +115,13 @@ See [Enabling Windows support](https://docs.aws.amazon.com/eks/latest/userguide/
As the OCI Helm chart is signed by [Cosign](https://github.com/sigstore/cosign) as part of the release process you can verify the chart before installing it by running the following command.

```bash
-cosign verify public.ecr.aws/karpenter/karpenter:1.1.0 \
+cosign verify public.ecr.aws/karpenter/karpenter:1.1.1 \
--certificate-oidc-issuer=https://token.actions.githubusercontent.com \
--certificate-identity-regexp='https://github\.com/aws/karpenter-provider-aws/\.github/workflows/release\.yaml@.+' \
--certificate-github-workflow-repository=aws/karpenter-provider-aws \
--certificate-github-workflow-name=Release \
-  --certificate-github-workflow-ref=refs/tags/v1.1.0 \
-  --annotations version=1.1.0
+  --certificate-github-workflow-ref=refs/tags/v1.1.1 \
+  --annotations version=1.1.1
```

{{% alert title="DNS Policy Notice" color="warning" %}}
@@ -1,6 +1,6 @@
TEMPOUT="$(mktemp)"

-curl -fsSL https://raw.githubusercontent.com/aws/karpenter-provider-aws/v1.1.0/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml > "${TEMPOUT}" \
+curl -fsSL https://raw.githubusercontent.com/aws/karpenter-provider-aws/v1.1.1/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml > "${TEMPOUT}" \
&& aws cloudformation deploy \
--stack-name "Karpenter-${CLUSTER_NAME}" \
--template-file "${TEMPOUT}" \
@@ -92,7 +92,7 @@ One for your Karpenter node role and one for your existing node group.
First set the Karpenter release you want to deploy.
```bash
-export KARPENTER_VERSION="1.1.0"
+export KARPENTER_VERSION="1.1.1"
```

We can now generate a full Karpenter deployment yaml from the Helm chart.
@@ -132,7 +132,7 @@ Now that our deployment is ready we can create the karpenter namespace, create t
## Create default NodePool
-We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePool](https://github.com/aws/karpenter/tree/v1.1.0/examples/v1) for specific needs.
+We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePool](https://github.com/aws/karpenter/tree/v1.1.1/examples/v1) for specific needs.
{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step10-create-nodepool.sh" language="bash" %}}
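To get a feel for what that default NodePool contains before running the script, here is a rough sketch (values are illustrative; the step10-create-nodepool.sh script referenced above remains the authoritative version):

```yaml
# Illustrative default NodePool; adjust requirements, limits, and disruption settings for your cluster
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
```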
2 changes: 1 addition & 1 deletion website/content/en/docs/reference/cloudformation.md
@@ -17,7 +17,7 @@ These descriptions should allow you to understand:
To download a particular version of `cloudformation.yaml`, set the version and use `curl` to pull the file to your local system:

```bash
-export KARPENTER_VERSION="1.1.0"
+export KARPENTER_VERSION="1.1.1"
curl https://raw.githubusercontent.com/aws/karpenter-provider-aws/v"${KARPENTER_VERSION}"/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml > cloudformation.yaml
```
