diff --git a/website/content/en/preview/tasks/deprovisioning.md b/website/content/en/preview/tasks/deprovisioning.md
index 2d458fa74871..845d0ee41caa 100644
--- a/website/content/en/preview/tasks/deprovisioning.md
+++ b/website/content/en/preview/tasks/deprovisioning.md
@@ -20,6 +20,12 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
 * **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
 * **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).
+ {{% alert title="Note" color="primary" %}}
+ Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
+ and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
+ default a value and will not terminate nodes in that condition.
+ {{% /alert %}}
+
 {{% alert title="Note" color="primary" %}}
 Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
diff --git a/website/content/en/v0.5.5/tasks/deprovisioning.md b/website/content/en/v0.5.5/tasks/deprovisioning.md
index b77dbcb3a2df..2c021826b4c3 100644
--- a/website/content/en/v0.5.5/tasks/deprovisioning.md
+++ b/website/content/en/v0.5.5/tasks/deprovisioning.md
@@ -20,20 +20,26 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
 * **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
 * **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).
+ {{% alert title="Note" color="primary" %}}
+ Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
+ and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
+ default a value and will not terminate nodes in that condition.
+ {{% /alert %}}
+
 {{% alert title="Note" color="primary" %}}
 Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
 {{% /alert %}}
- 
+
 * **Node deleted**: You could use `kubectl` to manually remove a single Karpenter node:
 ```bash
 # Delete a specific node
 kubectl delete node $NODE_NAME
- 
+
 # Delete all nodes owned any provisioner
 kubectl delete nodes -l karpenter.sh/provisioner-name
- 
+
 # Delete all nodes owned by a specific provisioner
 kubectl delete nodes -l karpenter.sh/provisioner-name=$PROVISIONER_NAME
 ```
@@ -44,7 +50,7 @@ If the Karpenter controller is removed or fails, the finalizers on the nodes are
 {{% alert title="Note" color="primary" %}}
 By adding the finalizer, Karpenter improves the default Kubernetes process of node deletion.
-When you run `kubectl delete node` on a node without a finalizer, the node is deleted without triggering the finalization logic. The instance will continue running in EC2, even though there is no longer a node object for it.
+When you run `kubectl delete node` on a node without a finalizer, the node is deleted without triggering the finalization logic. The instance will continue running in EC2, even though there is no longer a node object for it. The kubelet isn’t watching for its own existence, so if a node is deleted the kubelet doesn’t terminate itself. All the pod objects get deleted by a garbage collection process later, because the pods’ node is gone.
 {{% /alert %}}
@@ -56,7 +62,7 @@ There are a few cases where requesting to deprovision a Karpenter node will fail
 ### Disruption budgets
 Karpenter respects Pod Disruption Budgets (PDBs) by using a backoff retry eviction strategy. Pods will never be forcibly deleted, so pods that fail to shut down will prevent a node from deprovisioning.
-Kubernetes PDBs let you specify how much of a Deployment, ReplicationController, ReplicaSet, or StatefulSet must be protected from disruptions when pod eviction requests are made.
+Kubernetes PDBs let you specify how much of a Deployment, ReplicationController, ReplicaSet, or StatefulSet must be protected from disruptions when pod eviction requests are made. PDBs can be used to strike a balance by protecting the application's availability while still allowing a cluster administrator to manage the cluster.
 Here is an example where the pods matching the label `myapp` will block node termination if evicting the pod would reduce the number of available pods below 4.
diff --git a/website/content/en/v0.5.6/tasks/deprovisioning.md b/website/content/en/v0.5.6/tasks/deprovisioning.md
index b77dbcb3a2df..2c021826b4c3 100644
--- a/website/content/en/v0.5.6/tasks/deprovisioning.md
+++ b/website/content/en/v0.5.6/tasks/deprovisioning.md
@@ -20,20 +20,26 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
 * **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
 * **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).
+ {{% alert title="Note" color="primary" %}}
+ Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
+ and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
+ default a value and will not terminate nodes in that condition.
+ {{% /alert %}}
+
 {{% alert title="Note" color="primary" %}}
 Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
 {{% /alert %}}
- 
+
 * **Node deleted**: You could use `kubectl` to manually remove a single Karpenter node:
 ```bash
 # Delete a specific node
 kubectl delete node $NODE_NAME
- 
+
 # Delete all nodes owned any provisioner
 kubectl delete nodes -l karpenter.sh/provisioner-name
- 
+
 # Delete all nodes owned by a specific provisioner
 kubectl delete nodes -l karpenter.sh/provisioner-name=$PROVISIONER_NAME
 ```
@@ -44,7 +50,7 @@ If the Karpenter controller is removed or fails, the finalizers on the nodes are
 {{% alert title="Note" color="primary" %}}
 By adding the finalizer, Karpenter improves the default Kubernetes process of node deletion.
-When you run `kubectl delete node` on a node without a finalizer, the node is deleted without triggering the finalization logic. The instance will continue running in EC2, even though there is no longer a node object for it.
+When you run `kubectl delete node` on a node without a finalizer, the node is deleted without triggering the finalization logic. The instance will continue running in EC2, even though there is no longer a node object for it. The kubelet isn’t watching for its own existence, so if a node is deleted the kubelet doesn’t terminate itself. All the pod objects get deleted by a garbage collection process later, because the pods’ node is gone.
 {{% /alert %}}
@@ -56,7 +62,7 @@ There are a few cases where requesting to deprovision a Karpenter node will fail
 ### Disruption budgets
 Karpenter respects Pod Disruption Budgets (PDBs) by using a backoff retry eviction strategy. Pods will never be forcibly deleted, so pods that fail to shut down will prevent a node from deprovisioning.
-Kubernetes PDBs let you specify how much of a Deployment, ReplicationController, ReplicaSet, or StatefulSet must be protected from disruptions when pod eviction requests are made.
+Kubernetes PDBs let you specify how much of a Deployment, ReplicationController, ReplicaSet, or StatefulSet must be protected from disruptions when pod eviction requests are made. PDBs can be used to strike a balance by protecting the application's availability while still allowing a cluster administrator to manage the cluster.
 Here is an example where the pods matching the label `myapp` will block node termination if evicting the pod would reduce the number of available pods below 4.
diff --git a/website/content/en/v0.6.0/tasks/deprovisioning.md b/website/content/en/v0.6.0/tasks/deprovisioning.md
index b77dbcb3a2df..2c021826b4c3 100644
--- a/website/content/en/v0.6.0/tasks/deprovisioning.md
+++ b/website/content/en/v0.6.0/tasks/deprovisioning.md
@@ -20,20 +20,26 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
 * **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
 * **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).
+ {{% alert title="Note" color="primary" %}}
+ Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
+ and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
+ default a value and will not terminate nodes in that condition.
+ {{% /alert %}}
+
 {{% alert title="Note" color="primary" %}}
 Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
 {{% /alert %}}
- 
+
 * **Node deleted**: You could use `kubectl` to manually remove a single Karpenter node:
 ```bash
 # Delete a specific node
 kubectl delete node $NODE_NAME
- 
+
 # Delete all nodes owned any provisioner
 kubectl delete nodes -l karpenter.sh/provisioner-name
- 
+
 # Delete all nodes owned by a specific provisioner
 kubectl delete nodes -l karpenter.sh/provisioner-name=$PROVISIONER_NAME
 ```
@@ -44,7 +50,7 @@ If the Karpenter controller is removed or fails, the finalizers on the nodes are
 {{% alert title="Note" color="primary" %}}
 By adding the finalizer, Karpenter improves the default Kubernetes process of node deletion.
-When you run `kubectl delete node` on a node without a finalizer, the node is deleted without triggering the finalization logic. The instance will continue running in EC2, even though there is no longer a node object for it.
+When you run `kubectl delete node` on a node without a finalizer, the node is deleted without triggering the finalization logic. The instance will continue running in EC2, even though there is no longer a node object for it. The kubelet isn’t watching for its own existence, so if a node is deleted the kubelet doesn’t terminate itself. All the pod objects get deleted by a garbage collection process later, because the pods’ node is gone.
 {{% /alert %}}
@@ -56,7 +62,7 @@ There are a few cases where requesting to deprovision a Karpenter node will fail
 ### Disruption budgets
 Karpenter respects Pod Disruption Budgets (PDBs) by using a backoff retry eviction strategy. Pods will never be forcibly deleted, so pods that fail to shut down will prevent a node from deprovisioning.
-Kubernetes PDBs let you specify how much of a Deployment, ReplicationController, ReplicaSet, or StatefulSet must be protected from disruptions when pod eviction requests are made.
+Kubernetes PDBs let you specify how much of a Deployment, ReplicationController, ReplicaSet, or StatefulSet must be protected from disruptions when pod eviction requests are made. PDBs can be used to strike a balance by protecting the application's availability while still allowing a cluster administrator to manage the cluster.
 Here is an example where the pods matching the label `myapp` will block node termination if evicting the pod would reduce the number of available pods below 4.
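
The hunks above end on the sentence that introduces a PDB example for pods labeled `myapp`; the example itself falls outside the diff context. As a rough, illustrative sketch only (not part of this patch; the `app` label key, the object name, and the `policy/v1` API version are assumptions), such a PDB could look like:

```yaml
# Illustrative only: refuse evictions that would leave fewer than 4
# available pods carrying the (assumed) label app: myapp. While this
# budget would be violated, eviction requests are rejected and
# Karpenter cannot finish draining the node.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 4
  selector:
    matchLabels:
      app: myapp
```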
diff --git a/website/content/en/v0.6.1/tasks/deprovisioning.md b/website/content/en/v0.6.1/tasks/deprovisioning.md
index b77dbcb3a2df..2c021826b4c3 100644
--- a/website/content/en/v0.6.1/tasks/deprovisioning.md
+++ b/website/content/en/v0.6.1/tasks/deprovisioning.md
@@ -20,20 +20,26 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
 * **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
 * **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).
+ {{% alert title="Note" color="primary" %}}
+ Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
+ and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
+ default a value and will not terminate nodes in that condition.
+ {{% /alert %}}
+
 {{% alert title="Note" color="primary" %}}
 Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
 {{% /alert %}}
- 
+
 * **Node deleted**: You could use `kubectl` to manually remove a single Karpenter node:
 ```bash
 # Delete a specific node
 kubectl delete node $NODE_NAME
- 
+
 # Delete all nodes owned any provisioner
 kubectl delete nodes -l karpenter.sh/provisioner-name
- 
+
 # Delete all nodes owned by a specific provisioner
 kubectl delete nodes -l karpenter.sh/provisioner-name=$PROVISIONER_NAME
 ```
@@ -44,7 +50,7 @@ If the Karpenter controller is removed or fails, the finalizers on the nodes are
 {{% alert title="Note" color="primary" %}}
 By adding the finalizer, Karpenter improves the default Kubernetes process of node deletion.
-When you run `kubectl delete node` on a node without a finalizer, the node is deleted without triggering the finalization logic. The instance will continue running in EC2, even though there is no longer a node object for it.
+When you run `kubectl delete node` on a node without a finalizer, the node is deleted without triggering the finalization logic. The instance will continue running in EC2, even though there is no longer a node object for it. The kubelet isn’t watching for its own existence, so if a node is deleted the kubelet doesn’t terminate itself. All the pod objects get deleted by a garbage collection process later, because the pods’ node is gone.
 {{% /alert %}}
@@ -56,7 +62,7 @@ There are a few cases where requesting to deprovision a Karpenter node will fail
 ### Disruption budgets
 Karpenter respects Pod Disruption Budgets (PDBs) by using a backoff retry eviction strategy. Pods will never be forcibly deleted, so pods that fail to shut down will prevent a node from deprovisioning.
-Kubernetes PDBs let you specify how much of a Deployment, ReplicationController, ReplicaSet, or StatefulSet must be protected from disruptions when pod eviction requests are made.
+Kubernetes PDBs let you specify how much of a Deployment, ReplicationController, ReplicaSet, or StatefulSet must be protected from disruptions when pod eviction requests are made. PDBs can be used to strike a balance by protecting the application's availability while still allowing a cluster administrator to manage the cluster.
 Here is an example where the pods matching the label `myapp` will block node termination if evicting the pod would reduce the number of available pods below 4.
diff --git a/website/content/en/v0.6.2/tasks/deprovisioning.md b/website/content/en/v0.6.2/tasks/deprovisioning.md
index 2d458fa74871..845d0ee41caa 100644
--- a/website/content/en/v0.6.2/tasks/deprovisioning.md
+++ b/website/content/en/v0.6.2/tasks/deprovisioning.md
@@ -20,6 +20,12 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
 * **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
 * **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).
+ {{% alert title="Note" color="primary" %}}
+ Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
+ and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
+ default a value and will not terminate nodes in that condition.
+ {{% /alert %}}
+
 {{% alert title="Note" color="primary" %}}
 Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
diff --git a/website/content/en/v0.6.3/tasks/deprovisioning.md b/website/content/en/v0.6.3/tasks/deprovisioning.md
index 2d458fa74871..845d0ee41caa 100644
--- a/website/content/en/v0.6.3/tasks/deprovisioning.md
+++ b/website/content/en/v0.6.3/tasks/deprovisioning.md
@@ -20,6 +20,12 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
 * **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
 * **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).
+ {{% alert title="Note" color="primary" %}}
+ Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
+ and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
+ default a value and will not terminate nodes in that condition.
+ {{% /alert %}}
+
 {{% alert title="Note" color="primary" %}}
 Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
diff --git a/website/content/en/v0.6.4/tasks/deprovisioning.md b/website/content/en/v0.6.4/tasks/deprovisioning.md
index 2d458fa74871..845d0ee41caa 100644
--- a/website/content/en/v0.6.4/tasks/deprovisioning.md
+++ b/website/content/en/v0.6.4/tasks/deprovisioning.md
@@ -20,6 +20,12 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
 * **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
 * **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).
+ {{% alert title="Note" color="primary" %}}
+ Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
+ and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
+ default a value and will not terminate nodes in that condition.
+ {{% /alert %}}
+
 {{% alert title="Note" color="primary" %}}
 Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
diff --git a/website/content/en/v0.6.5/tasks/deprovisioning.md b/website/content/en/v0.6.5/tasks/deprovisioning.md
index 2d458fa74871..845d0ee41caa 100644
--- a/website/content/en/v0.6.5/tasks/deprovisioning.md
+++ b/website/content/en/v0.6.5/tasks/deprovisioning.md
@@ -20,6 +20,12 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
 * **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
 * **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).
+ {{% alert title="Note" color="primary" %}}
+ Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
+ and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
+ default a value and will not terminate nodes in that condition.
+ {{% /alert %}}
+
 {{% alert title="Note" color="primary" %}}
 Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
diff --git a/website/content/en/v0.7.0/tasks/deprovisioning.md b/website/content/en/v0.7.0/tasks/deprovisioning.md
index 2d458fa74871..845d0ee41caa 100644
--- a/website/content/en/v0.7.0/tasks/deprovisioning.md
+++ b/website/content/en/v0.7.0/tasks/deprovisioning.md
@@ -20,6 +20,12 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
 * **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
 * **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).
+ {{% alert title="Note" color="primary" %}}
+ Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
+ and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
+ default a value and will not terminate nodes in that condition.
+ {{% /alert %}}
+
 {{% alert title="Note" color="primary" %}}
 Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
diff --git a/website/content/en/v0.7.1/tasks/deprovisioning.md b/website/content/en/v0.7.1/tasks/deprovisioning.md
index 2d458fa74871..845d0ee41caa 100644
--- a/website/content/en/v0.7.1/tasks/deprovisioning.md
+++ b/website/content/en/v0.7.1/tasks/deprovisioning.md
@@ -20,6 +20,12 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
 * **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
 * **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).
+ {{% alert title="Note" color="primary" %}}
+ Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
+ and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
+ default a value and will not terminate nodes in that condition.
+ {{% /alert %}}
+
 {{% alert title="Note" color="primary" %}}
 Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
diff --git a/website/content/en/v0.7.2/tasks/deprovisioning.md b/website/content/en/v0.7.2/tasks/deprovisioning.md
index 2d458fa74871..845d0ee41caa 100644
--- a/website/content/en/v0.7.2/tasks/deprovisioning.md
+++ b/website/content/en/v0.7.2/tasks/deprovisioning.md
@@ -20,6 +20,12 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
 * **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
 * **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).
+ {{% alert title="Note" color="primary" %}}
+ Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
+ and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
+ default a value and will not terminate nodes in that condition.
+ {{% /alert %}}
+
 {{% alert title="Note" color="primary" %}}
 Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
diff --git a/website/content/en/v0.7.3/tasks/deprovisioning.md b/website/content/en/v0.7.3/tasks/deprovisioning.md
index 2d458fa74871..845d0ee41caa 100644
--- a/website/content/en/v0.7.3/tasks/deprovisioning.md
+++ b/website/content/en/v0.7.3/tasks/deprovisioning.md
@@ -20,6 +20,12 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
 * **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
 * **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).
+ {{% alert title="Note" color="primary" %}}
+ Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
+ and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
+ default a value and will not terminate nodes in that condition.
+ {{% /alert %}}
+
 {{% alert title="Note" color="primary" %}}
 Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
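
Every hunk above adds the same note about `.ttlSecondsAfterEmpty` and `.ttlSecondsUntilExpired`. As a minimal sketch of where those fields live (assuming the `karpenter.sh/v1alpha5` Provisioner API used by these releases; the TTL values are placeholders), a Provisioner that opts into both behaviors might look like:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  # Request deletion of nodes that have run no non-daemonset pods for 30 seconds.
  ttlSecondsAfterEmpty: 30
  # Request deletion (and replacement) of nodes 30 days (2592000 s) after provisioning.
  ttlSecondsUntilExpired: 2592000
```

Omitting either field leaves the corresponding behavior disabled, which is exactly what the added note calls out.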