From 96f73a66233dffb59eecfc35db4260478eb8183d Mon Sep 17 00:00:00 2001
From: Geoffrey Cline
Date: Tue, 18 Jan 2022 18:37:22 -0600
Subject: [PATCH] fix broken links

---
 .../en/v0.4.3/cloud-providers/AWS/launch-templates.md | 2 +-
 website/content/en/v0.4.3/concepts/_index.md | 10 +++++-----
 website/content/en/v0.4.3/faqs.md | 2 +-
 website/content/en/v0.5.0/concepts/_index.md | 4 ++--
 website/content/en/v0.5.2/concepts/_index.md | 2 +-
 website/content/en/v0.5.3/concepts/_index.md | 2 +-
 6 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/website/content/en/v0.4.3/cloud-providers/AWS/launch-templates.md b/website/content/en/v0.4.3/cloud-providers/AWS/launch-templates.md
index aba0d1620deb..b3744bb2ba1e 100644
--- a/website/content/en/v0.4.3/cloud-providers/AWS/launch-templates.md
+++ b/website/content/en/v0.4.3/cloud-providers/AWS/launch-templates.md
@@ -219,7 +219,7 @@ aws cloudformation create-stack \
 ### Define LaunchTemplate for Provisioner
 
 The LaunchTemplate is ready to be used. Specify it by name in the [Provisioner
-CRD](../../provisioner-crd). Karpenter will use this template when creating new instances.
+CRD](../../../provisioner-crd). Karpenter will use this template when creating new instances.
 
 ```yaml
 apiVersion: karpenter.sh/v1alpha5
diff --git a/website/content/en/v0.4.3/concepts/_index.md b/website/content/en/v0.4.3/concepts/_index.md
index 38ab6e8d69ed..86f804658d67 100644
--- a/website/content/en/v0.4.3/concepts/_index.md
+++ b/website/content/en/v0.4.3/concepts/_index.md
@@ -24,7 +24,7 @@ Concepts associated with this role are described below.
 Karpenter is designed to run on a node in your Kubernetes cluster.
 As part of the installation process, you need credentials from the underlying cloud provider to allow nodes to be started up and added to the cluster as they are needed.
 
-[Getting Started with Karpenter on AWS](https://karpenter.sh/docs/getting-started/)
+[Getting Started with Karpenter on AWS](../getting-started/)
 describes the process of installing Karpenter on an AWS cloud provider.
 Because requests to add and delete nodes and schedule pods are made through Kubernetes, AWS IAM Roles for Service Accounts (IRSA) are needed by your Kubernetes cluster to make privileged requests to AWS.
 For example, Karpenter uses AWS IRSA roles to grant the permissions needed to describe EC2 instance types and create EC2 instances.
@@ -42,7 +42,7 @@ Here are some things to know about the Karpenter provisioner:
 * **Provisioner CR**: Karpenter defines a Custom Resource called a Provisioner to specify provisioning configuration.
 Each provisioner manages a distinct set of nodes, but pods can be scheduled to any provisioner that supports its scheduling constraints.
 A provisioner contains constraints that impact the nodes that can be provisioned and attributes of those nodes (such timers for removing nodes).
-See [Provisioner](../provisioner-crd/) for a description of settings and the [Provisioning](/docs/tasks/provisioning-task/) task for of provisioner examples.
+See [Provisioner](../provisioner-crd/) for a description of settings and the [Provisioning](../tasks/provisioning-task/) task for of provisioner examples.
 
 * **Well-known labels**: The provisioner can use well-known Kubernetes labels to allow pods to request only certain instance types, architectures, operating systems, or other attributes when creating nodes.
 See [Well-Known Labels, Annotations and Taints](https://kubernetes.io/docs/reference/labels-annotations-taints/) for details.
@@ -67,14 +67,14 @@ Karpenter handles all clean-up work needed to properly delete the node.
 * **Empty nodes**: When the last workload pod running on a Karpenter-managed node is gone, the node is annotated with an emptiness timestamp.
 Once that "node empty" time-to-live (`ttlSecondsAfterEmpty`) is reached, finalization is triggered.
 
-For more details on how Karpenter deletes nodes, see [Deleting nodes with Karpenter](/docs/tasks/deprov-nodes/) for details.
+For more details on how Karpenter deletes nodes, see [Deleting nodes with Karpenter](../tasks/deprov-nodes/) for details.
 
 ### Upgrading nodes
 
 A straight-forward way to upgrade nodes is to set `ttlSecondsUntilExpired`.
 Nodes will be terminated after a set period of time and will be replaced with newer nodes.
 
-For details on upgrading nodes with Karpenter, see [Upgrading nodes with Karpenter](/docs/tasks/deprov-nodes/#expiry) for details.
+For details on upgrading nodes with Karpenter, see [Upgrading nodes with Karpenter](../deprov-nodes/#expiry) for details.
 
 Understanding the following concepts will help you in carrying out the tasks just described.
 
@@ -164,4 +164,4 @@ Kubernetes SIG scalability recommends against these features and Karpenter doesn
 Instead, the Karpenter project recommends `topologySpreadConstraints` to reduce blast radius and `nodeSelectors` and `taints` to implement colocation.
 {{% /alert %}}
 
-For more on how, as a developer, you can add constraints to your pod deployment, see [Running pods](/docs/tasks/running-pods/) for details.
+For more on how, as a developer, you can add constraints to your pod deployment, see [Running pods](../tasks/running-pods/) for details.
diff --git a/website/content/en/v0.4.3/faqs.md b/website/content/en/v0.4.3/faqs.md
index 6f159b3445f8..32101f57f9c2 100644
--- a/website/content/en/v0.4.3/faqs.md
+++ b/website/content/en/v0.4.3/faqs.md
@@ -20,7 +20,7 @@ No. Provisioners work in tandem with the Kube Scheduler. When capacity is uncons
 ### How should I define scheduling constraints?
 Karpenter takes a layered approach to scheduling constraints. Karpenter comes with a set of global defaults, which may be overridden by Provisioner-level defaults. Further, these may be overridden by pod scheduling constraints. This model requires minimal configuration for most use cases, and supports diverse workloads using a single Provisioner.
 ### Does Karpenter support node selectors?
-Yes. Node selectors are an opt-in mechanism which allow users to specify the nodes on which a pod can scheduled. Karpenter recognizes [well-known node selectors](https://kubernetes.io/docs/reference/labels-annotations-taints/) on unschedulable pods and uses them to constrain the nodes it provisions. You can read more about the well-known node selectors supported by Karpenter in the [Concepts](/docs/concepts/#well-known-labels) documentation. For example, `node.kubernetes.io/instance-type`, `topology.kubernetes.io/zone`, `kubernetes.io/os`, `kubernetes.io/arch`, `karpenter.sh/capacity-type` are supported, and will ensure that provisioned nodes are constrained accordingly. Additionally, users may specify arbitrary labels, which will be automatically applied to every node launched by the Provisioner.
+Yes. Node selectors are an opt-in mechanism which allow users to specify the nodes on which a pod can scheduled. Karpenter recognizes [well-known node selectors](https://kubernetes.io/docs/reference/labels-annotations-taints/) on unschedulable pods and uses them to constrain the nodes it provisions. You can read more about the well-known node selectors supported by Karpenter in the [Concepts](../concepts/#well-known-labels) documentation. For example, `node.kubernetes.io/instance-type`, `topology.kubernetes.io/zone`, `kubernetes.io/os`, `kubernetes.io/arch`, `karpenter.sh/capacity-type` are supported, and will ensure that provisioned nodes are constrained accordingly. Additionally, users may specify arbitrary labels, which will be automatically applied to every node launched by the Provisioner.
 ### Does Karpenter support taints?
 Yes. Taints are an opt-out mechanism which allows users to specify the nodes on which a pod cannot be scheduled. Unlike node selectors, Karpenter does not automatically taint nodes in response to pod tolerations. Similar to node selectors, users may specify taints on their Provisioner, which will be automatically added to every node it provisions. This means that if a Provisioner is configured with taints, any incoming pods will not be scheduled unless the taints are tolerated.
diff --git a/website/content/en/v0.5.0/concepts/_index.md b/website/content/en/v0.5.0/concepts/_index.md
index 27b41f98da32..9b117d16eec0 100644
--- a/website/content/en/v0.5.0/concepts/_index.md
+++ b/website/content/en/v0.5.0/concepts/_index.md
@@ -24,7 +24,7 @@ Concepts associated with this role are described below.
 Karpenter is designed to run on a node in your Kubernetes cluster.
 As part of the installation process, you need credentials from the underlying cloud provider to allow nodes to be started up and added to the cluster as they are needed.
 
-[Getting Started with Karpenter on AWS](https://karpenter.sh/docs/getting-started/)
+[Getting Started with Karpenter on AWS](../getting-started/)
 describes the process of installing Karpenter on an AWS cloud provider.
 Because requests to add and delete nodes and schedule pods are made through Kubernetes, AWS IAM Roles for Service Accounts (IRSA) are needed by your Kubernetes cluster to make privileged requests to AWS.
 For example, Karpenter uses AWS IRSA roles to grant the permissions needed to describe EC2 instance types and create EC2 instances.
@@ -42,7 +42,7 @@ Here are some things to know about the Karpenter provisioner:
 * **Provisioner CR**: Karpenter defines a Custom Resource called a Provisioner to specify provisioning configuration.
 Each provisioner manages a distinct set of nodes, but pods can be scheduled to any provisioner that supports its scheduling constraints.
 A provisioner contains constraints that impact the nodes that can be provisioned and attributes of those nodes (such timers for removing nodes).
-See [Provisioner API](/docs/provisioner/) for a description of settings and the [Provisioning](../tasks/provisioning-task) task for provisioner examples.
+See [Provisioner API](../provisioner/) for a description of settings and the [Provisioning](../tasks/provisioning-task) task for provisioner examples.
 
 * **Well-known labels**: The provisioner can use well-known Kubernetes labels to allow pods to request only certain instance types, architectures, operating systems, or other attributes when creating nodes.
 See [Well-Known Labels, Annotations and Taints](https://kubernetes.io/docs/reference/labels-annotations-taints/) for details.
diff --git a/website/content/en/v0.5.2/concepts/_index.md b/website/content/en/v0.5.2/concepts/_index.md
index 74346a911721..9b117d16eec0 100644
--- a/website/content/en/v0.5.2/concepts/_index.md
+++ b/website/content/en/v0.5.2/concepts/_index.md
@@ -24,7 +24,7 @@ Concepts associated with this role are described below.
 Karpenter is designed to run on a node in your Kubernetes cluster.
 As part of the installation process, you need credentials from the underlying cloud provider to allow nodes to be started up and added to the cluster as they are needed.
 
-[Getting Started with Karpenter on AWS](https://karpenter.sh/docs/getting-started/)
+[Getting Started with Karpenter on AWS](../getting-started/)
 describes the process of installing Karpenter on an AWS cloud provider.
 Because requests to add and delete nodes and schedule pods are made through Kubernetes, AWS IAM Roles for Service Accounts (IRSA) are needed by your Kubernetes cluster to make privileged requests to AWS.
 For example, Karpenter uses AWS IRSA roles to grant the permissions needed to describe EC2 instance types and create EC2 instances.
diff --git a/website/content/en/v0.5.3/concepts/_index.md b/website/content/en/v0.5.3/concepts/_index.md
index 241791b1be31..7caaf4955413 100644
--- a/website/content/en/v0.5.3/concepts/_index.md
+++ b/website/content/en/v0.5.3/concepts/_index.md
@@ -24,7 +24,7 @@ Concepts associated with this role are described below.
 Karpenter is designed to run on a node in your Kubernetes cluster.
 As part of the installation process, you need credentials from the underlying cloud provider to allow nodes to be started up and added to the cluster as they are needed.
 
-[Getting Started with Karpenter on AWS](https://karpenter.sh/docs/getting-started/)
+[Getting Started with Karpenter on AWS](../getting-started/)
 describes the process of installing Karpenter on an AWS cloud provider.
 Because requests to add and delete nodes and schedule pods are made through Kubernetes, AWS IAM Roles for Service Accounts (IRSA) are needed by your Kubernetes cluster to make privileged requests to AWS.
 For example, Karpenter uses AWS IRSA roles to grant the permissions needed to describe EC2 instance types and create EC2 instances.
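The pages corrected above describe Provisioner behavior piecemeal: well-known node-selector labels, Provisioner taints, `ttlSecondsAfterEmpty`, and `ttlSecondsUntilExpired`. The sketch below shows how those settings sit together in the `karpenter.sh/v1alpha5` API that the launch-template page already references. It is illustrative only: the resource name, the label, the taint key, and all values are assumptions rather than content taken from the patched files.

```yaml
# Minimal Provisioner sketch for the settings discussed in the patched docs.
# The name, label, taint key, and all values below are illustrative assumptions.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  # Well-known labels constrain the nodes Karpenter may provision
  # (see the node-selector FAQ entry above).
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot", "on-demand"]
    - key: kubernetes.io/arch
      operator: In
      values: ["amd64"]
  # Arbitrary labels are applied to every node this Provisioner launches.
  labels:
    team: example-team
  # Taints are added to every node it provisions; only pods that tolerate them schedule there.
  taints:
    - key: example.com/dedicated
      value: "true"
      effect: NoSchedule
  # Reclaim empty nodes and expire aging nodes, per the deprovisioning and upgrade sections.
  ttlSecondsAfterEmpty: 30
  ttlSecondsUntilExpired: 2592000  # 30 days
```

Applying a manifest like this (for example, `kubectl apply -f provisioner.yaml`) is the kind of workflow covered by the Provisioning task that several of the corrected relative links point to.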