diff --git a/website/content/en/docs/AWS/constraints.md b/website/content/en/docs/AWS/constraints.md
index 24e32a12b815..767fff6149ee 100644
--- a/website/content/en/docs/AWS/constraints.md
+++ b/website/content/en/docs/AWS/constraints.md
@@ -39,8 +39,7 @@ spec:
 
 Karpenter discovers subnets using [AWS tags](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html).
 
-Subnets may be specified by any AWS tag, including `Name`. Selecting tag values using wildcards ("*") is supported.
-
+Subnets may be specified by any AWS tag, including `Name`. Selecting tag values using wildcards ("\*") is supported.
 When launching nodes, Karpenter automatically chooses a subnet that matches the desired zone.
 If multiple subnets exist for a zone, one is chosen randomly.
 
diff --git a/website/content/en/docs/concepts/_index.md b/website/content/en/docs/concepts/_index.md
index 81f10c131cb1..4b9fb3e59e3e 100644
--- a/website/content/en/docs/concepts/_index.md
+++ b/website/content/en/docs/concepts/_index.md
@@ -67,7 +67,7 @@ Karpenter handles all clean-up work needed to properly delete the node.
 * **Empty nodes**: When the last workload pod running on a Karpenter-managed node is gone, the node is annotated with an emptiness timestamp.
 Once that "node empty" time-to-live (`ttlSecondsAfterEmpty`) is reached, finalization is triggered.
 
-For more details on how Karpenter deletes nodes, see [Deleting nodes with Karpenter](../tasks/deprov-nodes/) for details.
+For more details on how Karpenter deletes nodes, see [Deprovisioning nodes](../tasks/deprov-nodes/) for details.
 
 ### Upgrading nodes
 
@@ -161,4 +161,4 @@ Kubernetes SIG scalability recommends against these features and Karpenter doesn
 Instead, the Karpenter project recommends `topologySpreadConstraints` to reduce blast radius and `nodeSelectors` and `taints` to implement colocation.
 {{% /alert %}}
 
-For more on how, as a developer, you can add constraints to your pod deployment, see [Running pods](../tasks/running-pods.md) for details.
+For more on how, as a developer, you can add constraints to your pod deployment, see [Running pods](../tasks/running-pods/) for details.
diff --git a/website/content/en/docs/tasks/running-pods.md b/website/content/en/docs/tasks/running-pods.md
index d17ba1f606e9..012cb8e290cf 100755
--- a/website/content/en/docs/tasks/running-pods.md
+++ b/website/content/en/docs/tasks/running-pods.md
@@ -30,7 +30,7 @@ This allows you to define a single set of rules that apply to both existing and
 Pod affinity is a key exception to this rule.
 
 {{% alert title="Note" color="primary" %}}
-Karpenter supports specific [Well-Known Labels, Annotations and Taints](Well-Known Labels, Annotations and Taints) that are useful for scheduling.
+Karpenter supports specific [Well-Known Labels, Annotations and Taints](https://kubernetes.io/docs/reference/labels-annotations-taints/) that are useful for scheduling.
 {{% /alert %}}
 
 ## Resource requests (`resources`)
@@ -130,7 +130,7 @@ Changing the second operator to `NotIn` would allow the pod to run in `us-west-2
 ```
 
 Continuing to add to the example, `nodeAffinity` lets you define terms so if one term doesn't work it goes to the next one.
 
-Here, if `us-west-2a` is not available, the second term will cause the pod to run on a spot instance in us-west-2d.
+Here, if `us-west-2a` is not available, the second term will cause the pod to run on a spot instance in `us-west-2d`.
 
 ```
@@ -206,7 +206,7 @@ spec:
       operator: "Exists"
       effect: "NoSchedule"
 ```
 
-See Taints and Tolerations (https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) in the Kubernetes documentation for details.
+See [Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) in the Kubernetes documentation for details.
 
 ## Topology spread (`topologySpreadConstraints`)
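For reference, a minimal pod spec sketch illustrating the two patterns the updated `running-pods.md` text describes: a `nodeAffinity` with a fallback term (spot capacity in `us-west-2d` when `us-west-2a` is unavailable) and a toleration for a `NoSchedule` taint. The pod name, taint key, and image are hypothetical placeholders, and the `karpenter.sh/capacity-type` label is an assumption that has varied across Karpenter releases:

```yaml
# Sketch only: the pod name, taint key, and image below are hypothetical,
# and the capacity-type label name is an assumption, not taken from the docs above.
apiVersion: v1
kind: Pod
metadata:
  name: affinity-example              # hypothetical pod name
spec:
  tolerations:
    - key: "example.com/dedicated"    # hypothetical taint key
      operator: "Exists"
      effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          # First term: schedule into us-west-2a when possible.
          - matchExpressions:
              - key: "topology.kubernetes.io/zone"
                operator: "In"
                values: ["us-west-2a"]
          # Second term: otherwise fall back to a spot instance in us-west-2d.
          - matchExpressions:
              - key: "karpenter.sh/capacity-type"   # assumed label name
                operator: "In"
                values: ["spot"]
              - key: "topology.kubernetes.io/zone"
                operator: "In"
                values: ["us-west-2d"]
  containers:
    - name: app
      image: public.ecr.aws/docker/library/busybox:1.36   # hypothetical image
      command: ["sleep", "3600"]
```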