fix broken links #1179

Merged
merged 1 commit on Jan 19, 2022

@@ -219,7 +219,7 @@ aws cloudformation create-stack \
### Define LaunchTemplate for Provisioner

The LaunchTemplate is ready to be used. Specify it by name in the [Provisioner
-CRD](../../provisioner-crd). Karpenter will use this template when creating new instances.
+CRD](../../../provisioner-crd). Karpenter will use this template when creating new instances.

```yaml
apiVersion: karpenter.sh/v1alpha5
# ...
```
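
For context, a minimal sketch of how the full Provisioner might reference the template by name. The `launchTemplate` provider field and the template name below are assumptions for illustration, not values taken from this PR:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  provider:
    # Reference the LaunchTemplate created earlier by its name
    # (the name here is hypothetical for this sketch)
    launchTemplate: KarpenterCustomLaunchTemplate
  # Optional: remove nodes 30 seconds after they become empty
  ttlSecondsAfterEmpty: 30
```
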
10 changes: 5 additions & 5 deletions website/content/en/v0.4.3/concepts/_index.md
@@ -24,7 +24,7 @@ Concepts associated with this role are described below.
Karpenter is designed to run on a node in your Kubernetes cluster.
As part of the installation process, you need credentials from the underlying cloud provider to allow nodes to be started up and added to the cluster as they are needed.

-[Getting Started with Karpenter on AWS](https://karpenter.sh/docs/getting-started/)
+[Getting Started with Karpenter on AWS](../getting-started/)
describes the process of installing Karpenter on an AWS cloud provider.
Because requests to add and delete nodes and schedule pods are made through Kubernetes, AWS IAM Roles for Service Accounts (IRSA) are needed by your Kubernetes cluster to make privileged requests to AWS.
For example, Karpenter uses AWS IRSA roles to grant the permissions needed to describe EC2 instance types and create EC2 instances.
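
As a rough illustration of the IRSA wiring, the Karpenter controller's ServiceAccount is annotated with the IAM role it may assume. The role ARN and namespace below are placeholders, not values from this PR:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: karpenter
  namespace: karpenter
  annotations:
    # IAM role granting permissions such as describing instance types and creating EC2 instances
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/KarpenterControllerRole
```
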
@@ -42,7 +42,7 @@ Here are some things to know about the Karpenter provisioner:
* **Provisioner CR**: Karpenter defines a Custom Resource called a Provisioner to specify provisioning configuration.
Each provisioner manages a distinct set of nodes, but pods can be scheduled to any provisioner that supports its scheduling constraints.
A provisioner contains constraints that impact the nodes that can be provisioned and attributes of those nodes (such as timers for removing nodes).
-See [Provisioner](../provisioner-crd/) for a description of settings and the [Provisioning](/docs/tasks/provisioning-task/) task for provisioner examples.
+See [Provisioner](../provisioner-crd/) for a description of settings and the [Provisioning](../tasks/provisioning-task/) task for provisioner examples.

* **Well-known labels**: The provisioner can use well-known Kubernetes labels to allow pods to request only certain instance types, architectures, operating systems, or other attributes when creating nodes.
See [Well-Known Labels, Annotations and Taints](https://kubernetes.io/docs/reference/labels-annotations-taints/) for details.
@@ -67,14 +67,14 @@ Karpenter handles all clean-up work needed to properly delete the node.
* **Empty nodes**: When the last workload pod running on a Karpenter-managed node is gone, the node is annotated with an emptiness timestamp.
Once that "node empty" time-to-live (`ttlSecondsAfterEmpty`) is reached, finalization is triggered.

-For more details on how Karpenter deletes nodes, see [Deleting nodes with Karpenter](/docs/tasks/deprov-nodes/).
+For more details on how Karpenter deletes nodes, see [Deleting nodes with Karpenter](../tasks/deprov-nodes/).
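
A minimal sketch of the emptiness TTL on a Provisioner; the 30-second value is illustrative only:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  # Finalize and delete a node 30 seconds after its last workload pod is gone
  ttlSecondsAfterEmpty: 30
```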

### Upgrading nodes

A straightforward way to upgrade nodes is to set `ttlSecondsUntilExpired`.
Nodes will be terminated after a set period of time and will be replaced with newer nodes.

-For details on upgrading nodes with Karpenter, see [Upgrading nodes with Karpenter](/docs/tasks/deprov-nodes/#expiry).
+For details on upgrading nodes with Karpenter, see [Upgrading nodes with Karpenter](../deprov-nodes/#expiry).
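
Similarly, a sketch of expiry-driven node replacement; the 30-day value is illustrative only:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  # Terminate and replace nodes after roughly 30 days (2592000 seconds)
  ttlSecondsUntilExpired: 2592000
```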


Understanding the following concepts will help you in carrying out the tasks just described.
@@ -164,4 +164,4 @@ Kubernetes SIG scalability recommends against these features and Karpenter doesn't
Instead, the Karpenter project recommends `topologySpreadConstraints` to reduce blast radius and `nodeSelectors` and `taints` to implement colocation.
{{% /alert %}}
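
As a sketch of that recommendation, a pod spec fragment that spreads replicas across zones with `topologySpreadConstraints`; the `app: my-app` label is a placeholder:

```yaml
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: my-app
```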

-For more on how, as a developer, you can add constraints to your pod deployment, see [Running pods](/docs/tasks/running-pods/).
+For more on how, as a developer, you can add constraints to your pod deployment, see [Running pods](../tasks/running-pods/).
2 changes: 1 addition & 1 deletion website/content/en/v0.4.3/faqs.md
@@ -20,7 +20,7 @@ No. Provisioners work in tandem with the Kube Scheduler. When capacity is uncons
### How should I define scheduling constraints?
Karpenter takes a layered approach to scheduling constraints. Karpenter comes with a set of global defaults, which may be overridden by Provisioner-level defaults. Further, these may be overridden by pod scheduling constraints. This model requires minimal configuration for most use cases, and supports diverse workloads using a single Provisioner.
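
A hedged sketch of that layering: the Provisioner allows both capacity types by default, and an individual pod narrows the choice with a node selector. Names and values are illustrative:

```yaml
# Provisioner-level default: either spot or on-demand capacity is acceptable
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot", "on-demand"]
---
# Pod-level constraint: this workload overrides the default and requires on-demand capacity
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  nodeSelector:
    karpenter.sh/capacity-type: on-demand
  containers:
    - name: app
      image: nginx # any image; illustrative only
```
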
### Does Karpenter support node selectors?
-Yes. Node selectors are an opt-in mechanism which allows users to specify the nodes on which a pod can be scheduled. Karpenter recognizes [well-known node selectors](https://kubernetes.io/docs/reference/labels-annotations-taints/) on unschedulable pods and uses them to constrain the nodes it provisions. You can read more about the well-known node selectors supported by Karpenter in the [Concepts](/docs/concepts/#well-known-labels) documentation. For example, `node.kubernetes.io/instance-type`, `topology.kubernetes.io/zone`, `kubernetes.io/os`, `kubernetes.io/arch`, `karpenter.sh/capacity-type` are supported, and will ensure that provisioned nodes are constrained accordingly. Additionally, users may specify arbitrary labels, which will be automatically applied to every node launched by the Provisioner.
+Yes. Node selectors are an opt-in mechanism which allows users to specify the nodes on which a pod can be scheduled. Karpenter recognizes [well-known node selectors](https://kubernetes.io/docs/reference/labels-annotations-taints/) on unschedulable pods and uses them to constrain the nodes it provisions. You can read more about the well-known node selectors supported by Karpenter in the [Concepts](../concepts/#well-known-labels) documentation. For example, `node.kubernetes.io/instance-type`, `topology.kubernetes.io/zone`, `kubernetes.io/os`, `kubernetes.io/arch`, `karpenter.sh/capacity-type` are supported, and will ensure that provisioned nodes are constrained accordingly. Additionally, users may specify arbitrary labels, which will be automatically applied to every node launched by the Provisioner.
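
For instance, a pod that opts in to a particular instance type, zone, and architecture through the well-known labels above might look like the following sketch (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-workload
spec:
  nodeSelector:
    node.kubernetes.io/instance-type: m5.large
    topology.kubernetes.io/zone: us-west-2a
    kubernetes.io/arch: amd64
  containers:
    - name: app
      image: nginx # any image; illustrative only
```
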
<!-- todo defaults+overrides -->
### Does Karpenter support taints?
Yes. Taints are an opt-out mechanism which allows users to specify the nodes on which a pod cannot be scheduled. Unlike node selectors, Karpenter does not automatically taint nodes in response to pod tolerations. Similar to node selectors, users may specify taints on their Provisioner, which will be automatically added to every node it provisions. This means that if a Provisioner is configured with taints, any incoming pods will not be scheduled unless the taints are tolerated.
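
A sketch of that behavior under assumed values: the Provisioner taints every node it launches, and only pods that tolerate the taint can land there:

```yaml
# Provisioner: every node it provisions carries this taint
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: gpu
spec:
  taints:
    - key: example.com/gpu
      value: "true"
      effect: NoSchedule
---
# Pod: must tolerate the taint to be scheduled onto those nodes
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  tolerations:
    - key: example.com/gpu
      operator: Equal
      value: "true"
      effect: NoSchedule
  containers:
    - name: app
      image: nginx # any image; illustrative only
```
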
4 changes: 2 additions & 2 deletions website/content/en/v0.5.0/concepts/_index.md
@@ -24,7 +24,7 @@ Concepts associated with this role are described below.
Karpenter is designed to run on a node in your Kubernetes cluster.
As part of the installation process, you need credentials from the underlying cloud provider to allow nodes to be started up and added to the cluster as they are needed.

-[Getting Started with Karpenter on AWS](https://karpenter.sh/docs/getting-started/)
+[Getting Started with Karpenter on AWS](../getting-started/)
describes the process of installing Karpenter on an AWS cloud provider.
Because requests to add and delete nodes and schedule pods are made through Kubernetes, AWS IAM Roles for Service Accounts (IRSA) are needed by your Kubernetes cluster to make privileged requests to AWS.
For example, Karpenter uses AWS IRSA roles to grant the permissions needed to describe EC2 instance types and create EC2 instances.
@@ -42,7 +42,7 @@ Here are some things to know about the Karpenter provisioner:
* **Provisioner CR**: Karpenter defines a Custom Resource called a Provisioner to specify provisioning configuration.
Each provisioner manages a distinct set of nodes, but pods can be scheduled to any provisioner that supports its scheduling constraints.
A provisioner contains constraints that impact the nodes that can be provisioned and attributes of those nodes (such as timers for removing nodes).
-See [Provisioner API](/docs/provisioner/) for a description of settings and the [Provisioning](../tasks/provisioning-task) task for provisioner examples.
+See [Provisioner API](../provisioner/) for a description of settings and the [Provisioning](../tasks/provisioning-task) task for provisioner examples.

* **Well-known labels**: The provisioner can use well-known Kubernetes labels to allow pods to request only certain instance types, architectures, operating systems, or other attributes when creating nodes.
See [Well-Known Labels, Annotations and Taints](https://kubernetes.io/docs/reference/labels-annotations-taints/) for details.
2 changes: 1 addition & 1 deletion website/content/en/v0.5.2/concepts/_index.md
@@ -24,7 +24,7 @@ Concepts associated with this role are described below.
Karpenter is designed to run on a node in your Kubernetes cluster.
As part of the installation process, you need credentials from the underlying cloud provider to allow nodes to be started up and added to the cluster as they are needed.

-[Getting Started with Karpenter on AWS](https://karpenter.sh/docs/getting-started/)
+[Getting Started with Karpenter on AWS](../getting-started/)
describes the process of installing Karpenter on an AWS cloud provider.
Because requests to add and delete nodes and schedule pods are made through Kubernetes, AWS IAM Roles for Service Accounts (IRSA) are needed by your Kubernetes cluster to make privileged requests to AWS.
For example, Karpenter uses AWS IRSA roles to grant the permissions needed to describe EC2 instance types and create EC2 instances.
2 changes: 1 addition & 1 deletion website/content/en/v0.5.3/concepts/_index.md
@@ -24,7 +24,7 @@ Concepts associated with this role are described below.
Karpenter is designed to run on a node in your Kubernetes cluster.
As part of the installation process, you need credentials from the underlying cloud provider to allow nodes to be started up and added to the cluster as they are needed.

-[Getting Started with Karpenter on AWS](https://karpenter.sh/docs/getting-started/)
+[Getting Started with Karpenter on AWS](../getting-started/)
describes the process of installing Karpenter on an AWS cloud provider.
Because requests to add and delete nodes and schedule pods are made through Kubernetes, AWS IAM Roles for Service Accounts (IRSA) are needed by your Kubernetes cluster to make privileged requests to AWS.
For example, Karpenter uses AWS IRSA roles to grant the permissions needed to describe EC2 instance types and create EC2 instances.