Adding note for defaulting behavior of deprovisioning
njtran committed Mar 28, 2022
1 parent b141f99 commit c328250
Showing 13 changed files with 98 additions and 20 deletions.
6 changes: 6 additions & 0 deletions website/content/en/preview/tasks/deprovisioning.md
@@ -20,6 +20,12 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
* **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
* **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).

{{% alert title="Note" color="primary" %}}
Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
default a value and will not terminate nodes in that condition.
{{% /alert %}}
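
For reference, both fields sit directly on the Provisioner `spec`; a minimal sketch (the TTL values below are illustrative, not defaults) might look like this:

```bash
# Sketch: a Provisioner that opts into both automated deprovisioning
# behaviors. The TTL values are examples; Karpenter sets no defaults.
cat <<EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  ttlSecondsAfterEmpty: 30        # reclaim nodes 30s after the last workload pod stops
  ttlSecondsUntilExpired: 604800  # expire nodes 7 days after provisioning
EOF
```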

{{% alert title="Note" color="primary" %}}
Keep in mind that a short node expiry (a small `ttlSecondsUntilExpired`) results in higher churn in cluster activity. For example, if a cluster
brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
16 changes: 11 additions & 5 deletions website/content/en/v0.5.5/tasks/deprovisioning.md
@@ -20,20 +20,26 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
* **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
* **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).

{{% alert title="Note" color="primary" %}}
Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
default a value and will not terminate nodes in that condition.
{{% /alert %}}

{{% alert title="Note" color="primary" %}}
Keep in mind that a short node expiry (a small `ttlSecondsUntilExpired`) results in higher churn in cluster activity. For example, if a cluster
brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
{{% /alert %}}

* **Node deleted**: You can use `kubectl` to manually remove one or more Karpenter nodes:

```bash
# Delete a specific node
kubectl delete node $NODE_NAME

# Delete all nodes owned by any provisioner
kubectl delete nodes -l karpenter.sh/provisioner-name

# Delete all nodes owned by a specific provisioner
kubectl delete nodes -l karpenter.sh/provisioner-name=$PROVISIONER_NAME
```
@@ -44,7 +50,7 @@ If the Karpenter controller is removed or fails, the finalizers on the nodes are

{{% alert title="Note" color="primary" %}}
By adding the finalizer, Karpenter improves the default Kubernetes process of node deletion.
When you run `kubectl delete node` on a node without a finalizer, the node is deleted without triggering the finalization logic. The instance will continue running in EC2, even though there is no longer a node object for it.
The kubelet isn’t watching for its own existence, so if a node is deleted the kubelet doesn’t terminate itself.
All the pod objects get deleted by a garbage collection process later, because the pods’ node is gone.
{{% /alert %}}
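
To see whether a node still carries Karpenter's termination finalizer before deleting it, a quick `jsonpath` query works (reusing `$NODE_NAME` from the example above):

```bash
# Print a node's finalizers; a Karpenter-managed node lists the
# termination finalizer that triggers instance cleanup on delete.
kubectl get node $NODE_NAME -o jsonpath='{.metadata.finalizers}'
```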
@@ -56,7 +62,7 @@ There are a few cases where requesting to deprovision a Karpenter node will fail
### Disruption budgets

Karpenter respects Pod Disruption Budgets (PDBs) by using a backoff retry eviction strategy. Pods will never be forcibly deleted, so pods that fail to shut down will prevent a node from deprovisioning.
Kubernetes PDBs let you specify how much of a Deployment, ReplicationController, ReplicaSet, or StatefulSet must be protected from disruptions when pod eviction requests are made.

PDBs can be used to strike a balance by protecting the application's availability while still allowing a cluster administrator to manage the cluster.
Here is an example where the pods matching the label `myapp` will block node termination if evicting the pod would reduce the number of available pods below 4.
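
That example is collapsed in this diff view; a PDB matching the description (the `myapp-pdb` name and the exact label key are assumptions) would look roughly like:

```bash
# Sketch of the PDB described above: keep at least 4 pods carrying the
# label app: myapp available, so evictions that would drop below 4 are refused.
cat <<EOF | kubectl apply -f -
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 4
  selector:
    matchLabels:
      app: myapp
EOF
```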
16 changes: 11 additions & 5 deletions website/content/en/v0.5.6/tasks/deprovisioning.md
@@ -20,20 +20,26 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
* **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
* **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).

{{% alert title="Note" color="primary" %}}
Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
default a value and will not terminate nodes in that condition.
{{% /alert %}}

{{% alert title="Note" color="primary" %}}
Keep in mind that a short node expiry (a small `ttlSecondsUntilExpired`) results in higher churn in cluster activity. For example, if a cluster
brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
{{% /alert %}}

* **Node deleted**: You can use `kubectl` to manually remove one or more Karpenter nodes:

```bash
# Delete a specific node
kubectl delete node $NODE_NAME

# Delete all nodes owned by any provisioner
kubectl delete nodes -l karpenter.sh/provisioner-name

# Delete all nodes owned by a specific provisioner
kubectl delete nodes -l karpenter.sh/provisioner-name=$PROVISIONER_NAME
```
@@ -44,7 +50,7 @@ If the Karpenter controller is removed or fails, the finalizers on the nodes are

{{% alert title="Note" color="primary" %}}
By adding the finalizer, Karpenter improves the default Kubernetes process of node deletion.
When you run `kubectl delete node` on a node without a finalizer, the node is deleted without triggering the finalization logic. The instance will continue running in EC2, even though there is no longer a node object for it.
The kubelet isn’t watching for its own existence, so if a node is deleted the kubelet doesn’t terminate itself.
All the pod objects get deleted by a garbage collection process later, because the pods’ node is gone.
{{% /alert %}}
@@ -56,7 +62,7 @@ There are a few cases where requesting to deprovision a Karpenter node will fail
### Disruption budgets

Karpenter respects Pod Disruption Budgets (PDBs) by using a backoff retry eviction strategy. Pods will never be forcibly deleted, so pods that fail to shut down will prevent a node from deprovisioning.
Kubernetes PDBs let you specify how much of a Deployment, ReplicationController, ReplicaSet, or StatefulSet must be protected from disruptions when pod eviction requests are made.

PDBs can be used to strike a balance by protecting the application's availability while still allowing a cluster administrator to manage the cluster.
Here is an example where the pods matching the label `myapp` will block node termination if evicting the pod would reduce the number of available pods below 4.
16 changes: 11 additions & 5 deletions website/content/en/v0.6.0/tasks/deprovisioning.md
@@ -20,20 +20,26 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
* **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
* **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).

{{% alert title="Note" color="primary" %}}
Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
default a value and will not terminate nodes in that condition.
{{% /alert %}}

{{% alert title="Note" color="primary" %}}
Keep in mind that a short node expiry (a small `ttlSecondsUntilExpired`) results in higher churn in cluster activity. For example, if a cluster
brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
{{% /alert %}}

* **Node deleted**: You can use `kubectl` to manually remove one or more Karpenter nodes:

```bash
# Delete a specific node
kubectl delete node $NODE_NAME

# Delete all nodes owned by any provisioner
kubectl delete nodes -l karpenter.sh/provisioner-name

# Delete all nodes owned by a specific provisioner
kubectl delete nodes -l karpenter.sh/provisioner-name=$PROVISIONER_NAME
```
@@ -44,7 +50,7 @@ If the Karpenter controller is removed or fails, the finalizers on the nodes are

{{% alert title="Note" color="primary" %}}
By adding the finalizer, Karpenter improves the default Kubernetes process of node deletion.
When you run `kubectl delete node` on a node without a finalizer, the node is deleted without triggering the finalization logic. The instance will continue running in EC2, even though there is no longer a node object for it.
The kubelet isn’t watching for its own existence, so if a node is deleted the kubelet doesn’t terminate itself.
All the pod objects get deleted by a garbage collection process later, because the pods’ node is gone.
{{% /alert %}}
@@ -56,7 +62,7 @@ There are a few cases where requesting to deprovision a Karpenter node will fail
### Disruption budgets

Karpenter respects Pod Disruption Budgets (PDBs) by using a backoff retry eviction strategy. Pods will never be forcibly deleted, so pods that fail to shut down will prevent a node from deprovisioning.
Kubernetes PDBs let you specify how much of a Deployment, ReplicationController, ReplicaSet, or StatefulSet must be protected from disruptions when pod eviction requests are made.

PDBs can be used to strike a balance by protecting the application's availability while still allowing a cluster administrator to manage the cluster.
Here is an example where the pods matching the label `myapp` will block node termination if evicting the pod would reduce the number of available pods below 4.
16 changes: 11 additions & 5 deletions website/content/en/v0.6.1/tasks/deprovisioning.md
@@ -20,20 +20,26 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
* **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
* **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).

{{% alert title="Note" color="primary" %}}
Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
default a value and will not terminate nodes in that condition.
{{% /alert %}}

{{% alert title="Note" color="primary" %}}
Keep in mind that a short node expiry (a small `ttlSecondsUntilExpired`) results in higher churn in cluster activity. For example, if a cluster
brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
{{% /alert %}}

* **Node deleted**: You can use `kubectl` to manually remove one or more Karpenter nodes:

```bash
# Delete a specific node
kubectl delete node $NODE_NAME

# Delete all nodes owned by any provisioner
kubectl delete nodes -l karpenter.sh/provisioner-name

# Delete all nodes owned by a specific provisioner
kubectl delete nodes -l karpenter.sh/provisioner-name=$PROVISIONER_NAME
```
@@ -44,7 +50,7 @@ If the Karpenter controller is removed or fails, the finalizers on the nodes are

{{% alert title="Note" color="primary" %}}
By adding the finalizer, Karpenter improves the default Kubernetes process of node deletion.
When you run `kubectl delete node` on a node without a finalizer, the node is deleted without triggering the finalization logic. The instance will continue running in EC2, even though there is no longer a node object for it.
The kubelet isn’t watching for its own existence, so if a node is deleted the kubelet doesn’t terminate itself.
All the pod objects get deleted by a garbage collection process later, because the pods’ node is gone.
{{% /alert %}}
@@ -56,7 +62,7 @@ There are a few cases where requesting to deprovision a Karpenter node will fail
### Disruption budgets

Karpenter respects Pod Disruption Budgets (PDBs) by using a backoff retry eviction strategy. Pods will never be forcibly deleted, so pods that fail to shut down will prevent a node from deprovisioning.
Kubernetes PDBs let you specify how much of a Deployment, ReplicationController, ReplicaSet, or StatefulSet must be protected from disruptions when pod eviction requests are made.

PDBs can be used to strike a balance by protecting the application's availability while still allowing a cluster administrator to manage the cluster.
Here is an example where the pods matching the label `myapp` will block node termination if evicting the pod would reduce the number of available pods below 4.
6 changes: 6 additions & 0 deletions website/content/en/v0.6.2/tasks/deprovisioning.md
@@ -20,6 +20,12 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
* **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
* **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).

{{% alert title="Note" color="primary" %}}
Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
default a value and will not terminate nodes in that condition.
{{% /alert %}}

{{% alert title="Note" color="primary" %}}
Keep in mind that a short node expiry (a small `ttlSecondsUntilExpired`) results in higher churn in cluster activity. For example, if a cluster
brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
6 changes: 6 additions & 0 deletions website/content/en/v0.6.3/tasks/deprovisioning.md
@@ -20,6 +20,12 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
* **Node empty**: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by `ttlSecondsAfterEmpty` in the provisioner, then Karpenter requests to delete the node. This feature can keep costs down by removing nodes that are no longer being used for workloads.
* **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).

{{% alert title="Note" color="primary" %}}
Automated deprovisioning is configured through the `ProvisionerSpec` `.ttlSecondsAfterEmpty`
and `.ttlSecondsUntilExpired` fields. If either field is left empty, Karpenter will not
default a value and will not terminate nodes in that condition.
{{% /alert %}}

{{% alert title="Note" color="primary" %}}
Keep in mind that a short node expiry (a small `ttlSecondsUntilExpired`) results in higher churn in cluster activity. For example, if a cluster
brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.