diff --git a/website/config.toml b/website/config.toml
index f66721968da7..2e693f73e244 100644
--- a/website/config.toml
+++ b/website/config.toml
@@ -136,9 +136,10 @@ pre = ""
url = "https://karpenter.sh/docs/"
[[params.versions]]
- version = "v0.4.3"
+ version = "v0.5.0"
url = "https://karpenter.sh/docs/"
[[params.versions]]
- version = "Pre-release"
- url = "https://karpenter.sh/pre-docs/"
+ version = "v0.4.3"
+ url = "https://karpenter.sh/v0.4.3-docs/"
+
diff --git a/website/content/en/docs/cloud-providers/AWS/_index.md b/website/content/en/docs/AWS/_index.md
similarity index 100%
rename from website/content/en/docs/cloud-providers/AWS/_index.md
rename to website/content/en/docs/AWS/_index.md
diff --git a/website/content/en/pre-docs/AWS/constraints.md b/website/content/en/docs/AWS/constraints.md
similarity index 100%
rename from website/content/en/pre-docs/AWS/constraints.md
rename to website/content/en/docs/AWS/constraints.md
diff --git a/website/content/en/docs/cloud-providers/AWS/launch-templates.md b/website/content/en/docs/AWS/launch-templates.md
similarity index 100%
rename from website/content/en/docs/cloud-providers/AWS/launch-templates.md
rename to website/content/en/docs/AWS/launch-templates.md
diff --git a/website/content/en/docs/concepts/_index.md b/website/content/en/docs/concepts/_index.md
index db3cf7fb652b..b3ea40f728b2 100644
--- a/website/content/en/docs/concepts/_index.md
+++ b/website/content/en/docs/concepts/_index.md
@@ -42,7 +42,7 @@ Here are some things to know about the Karpenter provisioner:
* **Provisioner CR**: Karpenter defines a Custom Resource called a Provisioner to specify provisioning configuration.
Each provisioner manages a distinct set of nodes, but pods can be scheduled to any provisioner that supports its scheduling constraints.
-A provisioner contains constraints that impact the nodes that can be provisioned and attributes of those nodes (such timers for removing nodes).
+A provisioner contains constraints that impact the nodes that can be provisioned and attributes of those nodes (such as timers for removing nodes).
-See [Provisioner](/docs/provisioner-crd/) for a description of settings and the [Provisioning](/docs/tasks/provisioner.md) task for of provisioner examples.
+See [Provisioner API](/docs/provisioner-crd/) for a description of settings and the [Provisioning](../tasks/provisioning-task) task for provisioner examples.
* **Well-known labels**: The provisioner can use well-known Kubernetes labels to allow pods to request only certain instance types, architectures, operating systems, or other attributes when creating nodes.
See [Well-Known Labels, Annotations and Taints](https://kubernetes.io/docs/reference/labels-annotations-taints/) for details.
@@ -67,16 +67,13 @@ Karpenter handles all clean-up work needed to properly delete the node.
* **Empty nodes**: When the last workload pod running on a Karpenter-managed node is gone, the node is annotated with an emptiness timestamp.
Once that "node empty" time-to-live (`ttlSecondsAfterEmpty`) is reached, finalization is triggered.
-For more details on how Karpenter deletes nodes, see [Deleting nodes with Karpenter](/docs/tasks/delete-nodes.md) for details.
+For details on how Karpenter deletes nodes, see [Deleting nodes with Karpenter](../tasks/deprov-nodes.md).
### Upgrading nodes
-A straight-forward way to upgrade nodes is to set `ttlSecondsUntilExpired`.
+A straightforward way to upgrade nodes is to set `ttlSecondsUntilExpired`.
Nodes will be terminated after a set period of time and will be replaced with newer nodes.
-For details on upgrading nodes with Karpenter, see [Upgrading nodes with Karpenter](/docs/tasks/upgrade-nodes.md) for details.
-
-
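+For example, a minimal sketch (the TTL value is illustrative) of a provisioner that recycles nodes after roughly 30 days:
+
+```yaml
+apiVersion: karpenter.sh/v1alpha5
+kind: Provisioner
+metadata:
+  name: default
+spec:
+  ttlSecondsUntilExpired: 2592000 # 30 days = 60 * 60 * 24 * 30 seconds
+```
+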
Understanding the following concepts will help you in carrying out the tasks just described.
### Constraints
@@ -109,7 +106,7 @@ So, for example, to include a certain instance type, you could use the Kubernete
### Kubernetes cluster autoscaler
Like Karpenter, [Kubernetes Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) is
designed to add nodes when requests come in to run pods that cannot be met by current capacity.
-Cluster autoscaler is part of the Kubenetes project, with implementations by most major Kubernetes cloud providers.
+Cluster autoscaler is part of the Kubernetes project, with implementations by most major Kubernetes cloud providers.
By taking a fresh look at provisioning, Karpenter offers the following improvements:
* **Designed to handle the full flexibility of the cloud**:
@@ -134,11 +131,11 @@ Karpenter's job is to efficiently assess and choose compute assets based on requ
These can include basic Kubernetes features or features that are specific to the cloud provider (such as AWS).
Layered *constraints* are applied when a pod makes requests for compute resources that cannot be met by current capacity.
-A pod can specify `nodeAffinity` (to run in a particular zone or instance type) or a `topologySpreadConstraints` spread (to cause a set of pods be balanced across multiple nodes).
+A pod can specify `nodeAffinity` (to run in a particular zone or instance type) or a `topologySpreadConstraints` spread (to cause a set of pods to be balanced across multiple nodes).
-The pod can specify a `nodeSelector` to run only on nodes with a particular label and `resource.requests` to ensure that the node has enough available memory.
+The pod can specify a `nodeSelector` to run only on nodes with a particular label and `resources.requests` to ensure that the node has enough available memory.
The Kubernetes scheduler tries to match those constraints with available nodes.
-If the pod is unschedulable, Karpenter created compute resources that match its needs.
+If the pod is unschedulable, Karpenter creates compute resources that match its needs.
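+
+As an illustration, here is a sketch of a pod that layers several of these constraints (the pod name, image, and values are hypothetical):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: myapp
+spec:
+  nodeSelector:
+    topology.kubernetes.io/zone: us-west-2a # run only in this zone
+  containers:
+    - name: app
+      image: public.ecr.aws/eks-distro/kubernetes/pause:3.2
+      resources:
+        requests:
+          cpu: "1" # the node must have this much allocatable CPU
+          memory: 256Mi
+```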
When Karpenter tries to provision a node, it analyzes scheduling constraints before choosing the node to create.
As long as the requests are not outside of the provisioner's constraints,
@@ -147,12 +144,12 @@ Note that if the constraints are such that a match is not possible, the pod will
So, what constraints can you use as an application developer deploying pods that could be managed by Karpenter?
-Kubernetes features that Karpenters supports for scheduling nodes include node affinity based on [persistant volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#node-affinity) and [nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector).
+Kubernetes features that Karpenter supports for scheduling nodes include nodeAffinity and [nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector).
It also supports [PodDisruptionBudget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) and [topologySpreadConstraints](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
From the Kubernetes [Well-Known Labels, Annotations and Taints](https://kubernetes.io/docs/reference/labels-annotations-taints/) page,
you can see a full list of Kubernetes labels, annotations and taints that determine scheduling.
-Only a small set of them are implemented in Karpenter, including:
+Those that are implemented in Karpenter include:
* **kubernetes.io/arch**: For example, kubernetes.io/arch=amd64
* **node.kubernetes.io/instance-type**: For example, node.kubernetes.io/instance-type=m3.medium
@@ -164,4 +161,4 @@ Kubernetes SIG scalability recommends against these features and Karpenter doesn
Instead, the Karpenter project recommends `topologySpreadConstraints` to reduce blast radius and `nodeSelectors` and `taints` to implement colocation.
{{% /alert %}}
-For more on how, as a developer, you can add constraints to your pod deployment, see [Running pods](/docs/tasks/running-pods.md) for details.
+For more on how, as a developer, you can add constraints to your pod deployment, see [Running pods](../tasks/running-pods.md) for details.
diff --git a/website/content/en/docs/getting-started/_index.md b/website/content/en/docs/getting-started/_index.md
index a6adf1fa4168..9038ba3bd6b0 100644
--- a/website/content/en/docs/getting-started/_index.md
+++ b/website/content/en/docs/getting-started/_index.md
@@ -3,9 +3,6 @@
title: "Getting Started with Karpenter on AWS"
linkTitle: "Getting Started"
weight: 10
-menu:
- main:
- weight: 10
---
Karpenter automatically provisions new nodes in response to unschedulable
@@ -157,10 +154,9 @@ eksctl. Thus, we don't need the helm chart to do that.
helm repo add karpenter https://charts.karpenter.sh
helm repo update
helm upgrade --install karpenter karpenter/karpenter --namespace karpenter \
- --create-namespace --set serviceAccount.create=false --version 0.4.3 \
+ --create-namespace --set serviceAccount.create=false --version 0.5.0 \
--set controller.clusterName=${CLUSTER_NAME} \
--set controller.clusterEndpoint=$(aws eks describe-cluster --name ${CLUSTER_NAME} --query "cluster.endpoint" --output json) \
- --set defaultProvisioner.create=false \
--wait # for the defaulting webhook to install before creating a Provisioner
```
@@ -169,40 +165,6 @@ helm upgrade --install karpenter karpenter/karpenter --namespace karpenter \
kubectl patch configmap config-logging -n karpenter --patch '{"data":{"loglevel.controller":"debug"}}'
```
-### Create Grafana dashboards (optional)
-
-The Karpenter repo contains multiple [importable dashboards](https://github.com/aws/karpenter/tree/main/grafana-dashboards) for an existing Grafana instance. See the Grafana documentation for [instructions](https://grafana.com/docs/grafana/latest/dashboards/export-import/#import-dashboard) to import a dashboard.
-
-#### Deploy a temporary Prometheus and Grafana stack (optional)
-
-The following commands will deploy a Prometheus and Grafana stack that is suitable for this guide but does not include persistent storage or other configurations that would be necessary for monitoring a production deployment of Karpenter.
-
-```sh
-helm repo add grafana-charts https://grafana.github.io/helm-charts
-helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
-helm repo update
-
-kubectl create namespace monitoring
-
-curl -fsSL https://karpenter.sh/docs/getting-started/prometheus-values.yaml
-helm install --namespace monitoring prometheus prometheus-community/prometheus --values prometheus-values.yaml
-
-curl -fsSL https://karpenter.sh/docs/getting-started/grafana-values.yaml
-helm install --namespace monitoring grafana grafana-charts/grafana --values grafana-values.yaml
-```
-
-The Grafana instance may be accessed using port forwarding.
-
-```sh
-kubectl port-forward --namespace monitoring svc/grafana 3000:80
-```
-
-The new stack has only one user, `admin`, and the password is stored in a secret. The following command will retrieve the password.
-
-```sh
-kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode
-```
-
### Provisioner
A single Karpenter provisioner is capable of handling many different pod
@@ -220,6 +182,8 @@ This behavior can be disabled by leaving the value undefined.
Review the [provisioner CRD](/docs/provisioner-crd) for more information. For example,
`ttlSecondsUntilExpired` configures Karpenter to terminate nodes when a maximum age is reached.
+Note: This provisioner will create capacity as long as the sum of all created capacity is less than the specified limit.
+
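+A sketch of how such a limit might look (assuming a `spec.limits.resources` field; the figure is illustrative):
+
+```yaml
+spec:
+  limits:
+    resources:
+      cpu: 1000 # no new nodes once the sum of provisioned CPU reaches 1000 cores
+```
+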
```bash
cat <
+ Provisioner API reference page
---
## Example Provisioner Resource
@@ -33,21 +35,113 @@ spec:
# These requirements are combined with pod.spec.affinity.nodeAffinity rules.
# Operators { In, NotIn } are supported to enable including or excluding values
requirements:
- - key: "node.kubernetes.io/instance-type" # If not included, all instance types are considered
+ - key: "node.kubernetes.io/instance-type"
operator: In
values: ["m5.large", "m5.2xlarge"]
- - key: "topology.kubernetes.io/zone" # If not included, all zones are considered
+ - key: "topology.kubernetes.io/zone"
operator: In
values: ["us-west-2a", "us-west-2b"]
- - key: "kubernetes.io/arch" # If not included, all architectures are considered
+ - key: "kubernetes.io/arch"
operator: In
values: ["arm64", "amd64"]
- - key: "kubernetes.io/os" # If not included, all operating systems are considered
- operator: In
- values: ["linux"]
- key: "karpenter.sh/capacity-type" # If not included, the webhook for the AWS cloud provider will default to on-demand
operator: In
values: ["spot", "on-demand"]
# These fields vary per cloud provider, see your cloud provider specific documentation
provider: {}
```
+
+## spec.requirements
+
+Kubernetes defines the following [Well-Known Labels](https://kubernetes.io/docs/reference/labels-annotations-taints/), and cloud providers (e.g., AWS) implement them. They are specified in the `spec.requirements` section of the Provisioner API.
+
+These well-known labels may be specified at the provisioner level, or in a workload definition (e.g., a nodeSelector on a pod spec). Nodes are chosen using both the provisioner's and the pod's requirements. If there is no overlap, no node is launched. In other words, a pod's requirements must fall within the provisioner's requirements. If a requirement is not defined for a well-known label, any value available to the cloud provider may be chosen.
+
+For example, an instance type may be specified using a nodeSelector in a pod spec. If the provisioner defines instance type requirements and the requested type is not among them, Karpenter will not create a node and the pod will not schedule.
+
+📝 None of these values are required.
+
+### Instance Types
+
+- key: `node.kubernetes.io/instance-type`
+
+Generally, instance types should be a list and not a single value. Leaving this field undefined is recommended, as it maximizes choices for efficiently placing pods.
+
+☁️ **AWS**
+
+Review [AWS instance types](https://aws.amazon.com/ec2/instance-types/).
+
+The default value includes all instance types with the exclusion of metal
+(non-virtualized),
+[non-HVM](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/virtualization_types.html),
+and GPU instances.
+
+View the full list of instance types with `aws ec2 describe-instance-types`.
+
+**Example**
+
+*Set Default with provisioner.yaml*
+
+```yaml
+spec:
+ requirements:
+ - key: node.kubernetes.io/instance-type
+ operator: In
+ values: ["m5.large", "m5.2xlarge"]
+```
+
+*Override with workload manifest (e.g., pod)*
+
+```yaml
+spec:
+ template:
+ spec:
+ nodeSelector:
+ node.kubernetes.io/instance-type: m5.large
+```
+
+### Availability Zones
+
+- key: `topology.kubernetes.io/zone`
+- value example: `us-east-1c`
+
+☁️ **AWS**
+
+- value list: `aws ec2 describe-availability-zones --region <region-name>`
+
+Karpenter can be configured to create nodes in a particular zone. Note that the Availability Zone `us-east-1a` for your AWS account might not have the same location as `us-east-1a` for another AWS account.
+
+[Learn more about Availability Zone
+IDs.](https://docs.aws.amazon.com/ram/latest/userguide/working-with-az-ids.html)
+
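+For example, a sketch of a requirement that restricts nodes to two zones (mirroring the example resource above):
+
+```yaml
+spec:
+  requirements:
+    - key: topology.kubernetes.io/zone
+      operator: In
+      values: ["us-west-2a", "us-west-2b"]
+```
+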
+### Architecture
+
+- key: `kubernetes.io/arch`
+- values
+ - `amd64` (default)
+ - `arm64`
+
+Karpenter supports `amd64` and `arm64` nodes.
+
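+For example, a sketch of a requirement that requests only Arm-based nodes:
+
+```yaml
+spec:
+  requirements:
+    - key: kubernetes.io/arch
+      operator: In
+      values: ["arm64"]
+```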
+
+### Capacity Type
+
+- key: `karpenter.sh/capacity-type`
+
+☁️ **AWS**
+
+- values
+ - `spot` (default)
+ - `on-demand`
+
+Karpenter supports specifying capacity type, which is analogous to [EC2 purchase options](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-options.html).
+
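+For example, a sketch of a requirement that permits both purchase options (mirroring the example resource above):
+
+```yaml
+spec:
+  requirements:
+    - key: karpenter.sh/capacity-type
+      operator: In
+      values: ["spot", "on-demand"]
+```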
+
+## spec.provider
+
+This section is cloud provider specific. Reference the appropriate documentation:
+
+- [AWS](../AWS/constraints.md)
+
+
+
diff --git a/website/content/en/pre-docs/reinvent.md b/website/content/en/docs/reinvent.md
similarity index 100%
rename from website/content/en/pre-docs/reinvent.md
rename to website/content/en/docs/reinvent.md
diff --git a/website/content/en/pre-docs/tasks/_index.md b/website/content/en/docs/tasks/_index.md
similarity index 100%
rename from website/content/en/pre-docs/tasks/_index.md
rename to website/content/en/docs/tasks/_index.md
diff --git a/website/content/en/pre-docs/tasks/deprov-nodes.md b/website/content/en/docs/tasks/deprov-nodes.md
similarity index 100%
rename from website/content/en/pre-docs/tasks/deprov-nodes.md
rename to website/content/en/docs/tasks/deprov-nodes.md
diff --git a/website/content/en/pre-docs/tasks/provisioning-task.md b/website/content/en/docs/tasks/provisioning-task.md
similarity index 86%
rename from website/content/en/pre-docs/tasks/provisioning-task.md
rename to website/content/en/docs/tasks/provisioning-task.md
index d2670463d7ac..e71e2f9b6f50 100644
--- a/website/content/en/pre-docs/tasks/provisioning-task.md
+++ b/website/content/en/docs/tasks/provisioning-task.md
@@ -24,14 +24,14 @@ If you want to modify or add provisioners to Karpenter, do the following:
1. Review the following Provisioner documents:
- * [Provisioner](../getting-started/#provisioner) in the Getting Started guide for a sample default Provisioner
- * [Provisioner API](../provisioner-crd) for descriptions of Provisioner API values
- * [Provisioning Configuration](../AWS/constraints) for cloud-specific settings
+ * [Provisioner](../../getting-started/#provisioner) in the Getting Started guide for a sample default Provisioner
+ * [Provisioner API](../../provisioner-crd) for descriptions of Provisioner API values
+ * [Provisioning Configuration](../../AWS/constraints) for cloud-specific settings
2. Apply the new or modified Provisioner to the cluster.
The following examples illustrate different aspects of Provisioners.
-Refer to [Running pods](running-pods) to see how the same features are used in Pod specs to determine where pods run.
+Refer to [Running pods](../running-pods) to see how the same features are used in Pod specs to determine where pods run.
## Example: Requirements
diff --git a/website/content/en/pre-docs/tasks/running-pods.md b/website/content/en/docs/tasks/running-pods.md
similarity index 99%
rename from website/content/en/pre-docs/tasks/running-pods.md
rename to website/content/en/docs/tasks/running-pods.md
index d348e42c6812..d17ba1f606e9 100755
--- a/website/content/en/pre-docs/tasks/running-pods.md
+++ b/website/content/en/docs/tasks/running-pods.md
@@ -60,7 +60,8 @@ Its limits are set to 256MiB of memory and 1 CPU.
Instance type selection math only uses `requests`, but `limits` may be configured to enable resource oversubscription.
-See [Managing Resources for Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) for details on resource types supported by Kubernetes, [Specify a memory request and a memory limit](https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#specify-a-memory-request-and-a-memory-limit) for examples of memory requests, and [Provisioning COnfiguration](../aws/constraints) for a list of supported resources.
+See [Managing Resources for Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) for details on resource types supported by Kubernetes, [Specify a memory request and a memory limit](https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#specify-a-memory-request-and-a-memory-limit) for examples of memory requests, and [Provisioning Configuration](../../AWS/constraints) for a list of supported resources.
+
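+A sketch of such a specification (the request figures are illustrative; the limits mirror the 256MiB and 1 CPU mentioned above):
+
+```yaml
+spec:
+  containers:
+    - name: app
+      image: nginx
+      resources:
+        requests: # used for instance type selection
+          memory: 128Mi
+          cpu: 500m
+        limits: # may exceed requests to enable oversubscription
+          memory: 256Mi
+          cpu: "1"
+```
+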
## Selecting nodes (`nodeSelector` and `nodeAffinity`)
diff --git a/website/content/en/pre-docs/provisioner-crd.md b/website/content/en/pre-docs/provisioner-crd.md
deleted file mode 100644
index dc22262559d3..000000000000
--- a/website/content/en/pre-docs/provisioner-crd.md
+++ /dev/null
@@ -1,147 +0,0 @@
----
-title: "Provisioner API"
-linkTitle: "Provisioner API"
-weight: 70
-date: 2017-01-05
-description: >
- Provisioner API reference page
----
-
-## Example Provisioner Resource
-
-```yaml
-apiVersion: karpenter.sh/v1alpha5
-kind: Provisioner
-metadata:
- name: default
-spec:
- # If nil, the feature is disabled, nodes will never expire
- ttlSecondsUntilExpired: 2592000 # 30 Days = 60 * 60 * 24 * 30 Seconds;
-
- # If nil, the feature is disabled, nodes will never scale down due to low utilization
- ttlSecondsAfterEmpty: 30
-
- # Provisioned nodes will have these taints
- # Taints may prevent pods from scheduling if they are not tolerated
- taints:
- - key: example.com/special-taint
- effect: NoSchedule
-
- # Labels are arbitrary key-values that are applied to all nodes
- labels:
- billing-team: my-team
-
- # Requirements that constrain the parameters of provisioned nodes.
- # These requirements are combined with pod.spec.affinity.nodeAffinity rules.
- # Operators { In, NotIn } are supported to enable including or excluding values
- requirements:
- - key: "node.kubernetes.io/instance-type"
- operator: In
- values: ["m5.large", "m5.2xlarge"]
- - key: "topology.kubernetes.io/zone"
- operator: In
- values: ["us-west-2a", "us-west-2b"]
- - key: "kubernetes.io/arch"
- operator: In
- values: ["arm64", "amd64"]
- - key: "karpenter.sh/capacity-type" # If not included, the webhook for the AWS cloud provider will default to on-demand
- operator: In
- values: ["spot", "on-demand"]
- # These fields vary per cloud provider, see your cloud provider specific documentation
- provider: {}
-```
-
-## spec.requirements
-
-Kubernetes defines the following [Well-Known Labels](https://kubernetes.io/docs/reference/labels-annotations-taints/), and cloud providers (e.g., AWS) implement them. They are defined at the "spec.requirements" section of the Provisioner API.
-
-These well known labels may be specified at the provisioner level, or in a workload definition (e.g., nodeSelector on a pod.spec). Nodes are chosen using the both the provisioner's and pod's requirements. If there is no overlap, nodes will not be launched. In other words, a pod's requirements must be within the provisioner's requirements. If a requirement is not defined for a well known label, any value available to the cloud provider may be chosen.
-
-For example, an instance type may be specified using a nodeSelector in a pod spec. If the instance type requested is not included in the provisioner list and the provisioner has instance type requirements, Karpenter will not create a node or schedule the pod.
-
-📝 None of these values are required.
-
-### Instance Types
-
-- key: `node.kubernetes.io/instance-type`
-
-Generally, instance types should be a list and not a single value. Leaving this field undefined is recommended, as it maximizes choices for efficiently placing pods.
-
-☁️ **AWS**
-
-Review [AWS instance types](https://aws.amazon.com/ec2/instance-types/).
-
-The default value includes all instance types with the exclusion of metal
-(non-virtualized),
-[non-HVM](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/virtualization_types.html),
-and GPU instances.
-
-View the full list of instance types with `aws ec2 describe-instance-types`.
-
-**Example**
-
-*Set Default with provisioner.yaml*
-
-```yaml
-spec:
- requirements:
- - key: node.kubernetes.io/instance-type
- operator: In
- values: ["m5.large", "m5.2xlarge"]
-```
-
-*Override with workload manifest (e.g., pod)*
-
-```yaml
-spec:
- template:
- spec:
- nodeSelector:
- node.kubernetes.io/instance-type: m5.large
-```
-
-### Availability Zones
-
-- key: `topology.kubernetes.io/zone`
-- value example: `us-east-1c`
-
-☁️ **AWS**
-
-- value list: `aws ec2 describe-availability-zones --region `
-
-Karpenter can be configured to create nodes in a particular zone. Note that the Availability Zone `us-east-1a` for your AWS account might not have the same location as `us-east-1a` for another AWS account.
-
-[Learn more about Availability Zone
-IDs.](https://docs.aws.amazon.com/ram/latest/userguide/working-with-az-ids.html)
-
-### Architecture
-
-- key: `kubernetes.io/arch`
-- values
- - `amd64` (default)
- - `arm64`
-
-Karpenter supports `amd64` nodes, and `arm64` nodes.
-
-
-### Capacity Type
-
-- key: `karpenter.sh/capacity-type`
-
-☁️ **AWS**
-
-- values
- - `spot` (default)
- - `on-demand`
-
-Karpenter supports specifying capacity type, which is analogous to [EC2 purchase options](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-options.html).
-
-
-## spec.provider
-
-This section is cloud provider specific. Reference the appropriate documentation:
-
-- [AWS](../AWS/constraints.md)
-
-
-
diff --git a/website/content/en/pre-docs/_index.md b/website/content/en/v0.4.3-docs/_index.md
similarity index 99%
rename from website/content/en/pre-docs/_index.md
rename to website/content/en/v0.4.3-docs/_index.md
index de7d21868fa9..ef402efac143 100755
--- a/website/content/en/pre-docs/_index.md
+++ b/website/content/en/v0.4.3-docs/_index.md
@@ -4,7 +4,8 @@ title: "Documentation"
linkTitle: "Docs"
weight: 20
cascade:
- type: "docs"
+ type: docs
+
---
Karpenter is an open-source node provisioning project built for Kubernetes.
Adding Karpenter to a Kubernetes cluster can dramatically improve the efficiency and cost of running workloads on that cluster.
diff --git a/website/content/en/pre-docs/AWS/_index.md b/website/content/en/v0.4.3-docs/cloud-providers/AWS/_index.md
similarity index 100%
rename from website/content/en/pre-docs/AWS/_index.md
rename to website/content/en/v0.4.3-docs/cloud-providers/AWS/_index.md
diff --git a/website/content/en/docs/cloud-providers/AWS/aws-spec-fields.md b/website/content/en/v0.4.3-docs/cloud-providers/AWS/aws-spec-fields.md
similarity index 100%
rename from website/content/en/docs/cloud-providers/AWS/aws-spec-fields.md
rename to website/content/en/v0.4.3-docs/cloud-providers/AWS/aws-spec-fields.md
diff --git a/website/content/en/pre-docs/AWS/launch-templates.md b/website/content/en/v0.4.3-docs/cloud-providers/AWS/launch-templates.md
similarity index 100%
rename from website/content/en/pre-docs/AWS/launch-templates.md
rename to website/content/en/v0.4.3-docs/cloud-providers/AWS/launch-templates.md
diff --git a/website/content/en/docs/cloud-providers/_index.md b/website/content/en/v0.4.3-docs/cloud-providers/_index.md
similarity index 100%
rename from website/content/en/docs/cloud-providers/_index.md
rename to website/content/en/v0.4.3-docs/cloud-providers/_index.md
diff --git a/website/content/en/pre-docs/concepts/_index.md b/website/content/en/v0.4.3-docs/concepts/_index.md
similarity index 91%
rename from website/content/en/pre-docs/concepts/_index.md
rename to website/content/en/v0.4.3-docs/concepts/_index.md
index b3ea40f728b2..db3cf7fb652b 100644
--- a/website/content/en/pre-docs/concepts/_index.md
+++ b/website/content/en/v0.4.3-docs/concepts/_index.md
@@ -42,7 +42,7 @@ Here are some things to know about the Karpenter provisioner:
* **Provisioner CR**: Karpenter defines a Custom Resource called a Provisioner to specify provisioning configuration.
Each provisioner manages a distinct set of nodes, but pods can be scheduled to any provisioner that supports its scheduling constraints.
A provisioner contains constraints that impact the nodes that can be provisioned and attributes of those nodes (such timers for removing nodes).
-See [Provisioner API](/docs/provisioner-crd/) for a description of settings and the [Provisioning](../tasks/provisioning-task) task for provisioner examples.
+See [Provisioner](/docs/provisioner-crd/) for a description of settings and the [Provisioning](/docs/tasks/provisioner.md) task for of provisioner examples.
* **Well-known labels**: The provisioner can use well-known Kubernetes labels to allow pods to request only certain instance types, architectures, operating systems, or other attributes when creating nodes.
See [Well-Known Labels, Annotations and Taints](https://kubernetes.io/docs/reference/labels-annotations-taints/) for details.
@@ -67,13 +67,16 @@ Karpenter handles all clean-up work needed to properly delete the node.
* **Empty nodes**: When the last workload pod running on a Karpenter-managed node is gone, the node is annotated with an emptiness timestamp.
Once that "node empty" time-to-live (`ttlSecondsAfterEmpty`) is reached, finalization is triggered.
-For more details on how Karpenter deletes nodes, see [Deleting nodes with Karpenter](../tasks/deprov-nodes.md) for details.
+For more details on how Karpenter deletes nodes, see [Deleting nodes with Karpenter](/docs/tasks/delete-nodes.md) for details.
### Upgrading nodes
A straight-forward way to upgrade nodes is to set `ttlSecondsUntilExpired`.
Nodes will be terminated after a set period of time and will be replaced with newer nodes.
+For details on upgrading nodes with Karpenter, see [Upgrading nodes with Karpenter](/docs/tasks/upgrade-nodes.md) for details.
+
+
Understanding the following concepts will help you in carrying out the tasks just described.
### Constraints
@@ -106,7 +109,7 @@ So, for example, to include a certain instance type, you could use the Kubernete
### Kubernetes cluster autoscaler
Like Karpenter, [Kubernetes Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) is
designed to add nodes when requests come in to run pods that cannot be met by current capacity.
-Cluster autoscaler is part of the Kubernetes project, with implementations by most major Kubernetes cloud providers.
+Cluster autoscaler is part of the Kubenetes project, with implementations by most major Kubernetes cloud providers.
By taking a fresh look at provisioning, Karpenter offers the following improvements:
* **Designed to handle the full flexibility of the cloud**:
@@ -131,11 +134,11 @@ Karpenter's job is to efficiently assess and choose compute assets based on requ
These can include basic Kubernetes features or features that are specific to the cloud provider (such as AWS).
Layered *constraints* are applied when a pod makes requests for compute resources that cannot be met by current capacity.
-A pod can specify `nodeAffinity` (to run in a particular zone or instance type) or a `topologySpreadConstraints` spread (to cause a set of pods to be balanced across multiple nodes).
+A pod can specify `nodeAffinity` (to run in a particular zone or instance type) or a `topologySpreadConstraints` spread (to cause a set of pods be balanced across multiple nodes).
The pod can specify a `nodeSelector` to run only on nodes with a particular label and `resource.requests` to ensure that the node has enough available memory.
The Kubernetes scheduler tries to match those constraints with available nodes.
-If the pod is unschedulable, Karpenter creates compute resources that match its needs.
+If the pod is unschedulable, Karpenter created compute resources that match its needs.
When Karpenter tries to provision a node, it analyzes scheduling constraints before choosing the node to create.
As long as the requests are not outside of the provisioner's constraints,
@@ -144,12 +147,12 @@ Note that if the constraints are such that a match is not possible, the pod will
So, what constraints can you use as an application developer deploying pods that could be managed by Karpenter?
-Kubernetes features that Karpenters supports for scheduling nodes include nodeAffinity and [nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector).
+Kubernetes features that Karpenters supports for scheduling nodes include node affinity based on [persistant volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#node-affinity) and [nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector).
It also supports [PodDisruptionBudget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) and [topologySpreadConstraints](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
From the Kubernetes [Well-Known Labels, Annotations and Taints](https://kubernetes.io/docs/reference/labels-annotations-taints/) page,
you can see a full list of Kubernetes labels, annotations and taints that determine scheduling.
-Those that are implemented in Karpenter include:
+Only a small set of them are implemented in Karpenter, including:
* **kubernetes.io/arch**: For example, kubernetes.io/arch=amd64
* **node.kubernetes.io/instance-type**: For example, node.kubernetes.io/instance-type=m3.medium
@@ -161,4 +164,4 @@ Kubernetes SIG scalability recommends against these features and Karpenter doesn
Instead, the Karpenter project recommends `topologySpreadConstraints` to reduce blast radius and `nodeSelectors` and `taints` to implement colocation.
{{% /alert %}}
-For more on how, as a developer, you can add constraints to your pod deployment, see [Running pods](../tasks/running-pods.md) for details.
+For more on how, as a developer, you can add constraints to your pod deployment, see [Running pods](/docs/tasks/running-pods.md) for details.
diff --git a/website/content/en/pre-docs/development-guide.md b/website/content/en/v0.4.3-docs/development-guide.md
similarity index 100%
rename from website/content/en/pre-docs/development-guide.md
rename to website/content/en/v0.4.3-docs/development-guide.md
diff --git a/website/content/en/docs/faqs.md b/website/content/en/v0.4.3-docs/faqs.md
similarity index 100%
rename from website/content/en/docs/faqs.md
rename to website/content/en/v0.4.3-docs/faqs.md
diff --git a/website/content/en/pre-docs/getting-started/_index.md b/website/content/en/v0.4.3-docs/getting-started/_index.md
similarity index 85%
rename from website/content/en/pre-docs/getting-started/_index.md
rename to website/content/en/v0.4.3-docs/getting-started/_index.md
index 9038ba3bd6b0..a6adf1fa4168 100644
--- a/website/content/en/pre-docs/getting-started/_index.md
+++ b/website/content/en/v0.4.3-docs/getting-started/_index.md
@@ -3,6 +3,9 @@
title: "Getting Started with Karpenter on AWS"
linkTitle: "Getting Started"
weight: 10
+menu:
+ main:
+ weight: 10
---
Karpenter automatically provisions new nodes in response to unschedulable
@@ -154,9 +157,10 @@ eksctl. Thus, we don't need the helm chart to do that.
helm repo add karpenter https://charts.karpenter.sh
helm repo update
helm upgrade --install karpenter karpenter/karpenter --namespace karpenter \
- --create-namespace --set serviceAccount.create=false --version 0.5.0 \
+ --create-namespace --set serviceAccount.create=false --version 0.4.3 \
--set controller.clusterName=${CLUSTER_NAME} \
--set controller.clusterEndpoint=$(aws eks describe-cluster --name ${CLUSTER_NAME} --query "cluster.endpoint" --output json) \
+ --set defaultProvisioner.create=false \
--wait # for the defaulting webhook to install before creating a Provisioner
```
@@ -165,6 +169,40 @@ helm upgrade --install karpenter karpenter/karpenter --namespace karpenter \
kubectl patch configmap config-logging -n karpenter --patch '{"data":{"loglevel.controller":"debug"}}'
```
+### Create Grafana dashboards (optional)
+
+The Karpenter repo contains multiple [importable dashboards](https://github.com/aws/karpenter/tree/main/grafana-dashboards) for an existing Grafana instance. See the Grafana documentation for [instructions](https://grafana.com/docs/grafana/latest/dashboards/export-import/#import-dashboard) to import a dashboard.
+
+#### Deploy a temporary Prometheus and Grafana stack (optional)
+
+The following commands will deploy a Prometheus and Grafana stack that is suitable for this guide but does not include persistent storage or other configurations that would be necessary for monitoring a production deployment of Karpenter.
+
+```sh
+helm repo add grafana-charts https://grafana.github.io/helm-charts
+helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+helm repo update
+
+kubectl create namespace monitoring
+
+curl -fsSL https://karpenter.sh/docs/getting-started/prometheus-values.yaml
+helm install --namespace monitoring prometheus prometheus-community/prometheus --values prometheus-values.yaml
+
+curl -fsSL https://karpenter.sh/docs/getting-started/grafana-values.yaml
+helm install --namespace monitoring grafana grafana-charts/grafana --values grafana-values.yaml
+```
+
+The Grafana instance may be accessed using port forwarding.
+
+```sh
+kubectl port-forward --namespace monitoring svc/grafana 3000:80
+```
+
+The new stack has only one user, `admin`, and the password is stored in a secret. The following command will retrieve the password.
+
+```sh
+kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode
+```
+
### Provisioner
A single Karpenter provisioner is capable of handling many different pod
@@ -182,8 +220,6 @@ This behavior can be disabled by leaving the value undefined.
Review the [provisioner CRD](/docs/provisioner-crd) for more information. For example,
`ttlSecondsUntilExpired` configures Karpenter to terminate nodes when a maximum age is reached.
-Note: This provisioner will create capacity as long as the sum of all created capacity is less than the specified limit.
-
```bash
cat <
const target = document.querySelector(".td-content > h1")
- if (Version = "pre"){
+ if (Version == "pre"){
target.parentNode.insertBefore(PrereleaseElement(), target)
}
else {