Adding note for defaulting behavior of deprovisioning #1579

Merged (1 commit, Mar 28, 2022)
website/content/en/preview/tasks/deprovisioning.md (7 additions, 2 deletions)
@@ -21,8 +21,13 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
* **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).

{{% alert title="Note" color="primary" %}}
-Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster
-brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
+- Automated deprovisioning is configured through the ProvisionerSpec .ttlSecondsAfterEmpty
+and .ttlSecondsUntilExpired fields. If either field is left empty, Karpenter will not
+default a value and will not terminate nodes in that condition.
+
+- Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for
+example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into
+the same batching window on expiration.
{{% /alert %}}

* **Node deleted**: You could use `kubectl` to manually remove a single Karpenter node:
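To make the defaulting note concrete, here is a minimal Provisioner sketch, trimmed to the relevant fields (the name and TTL values are illustrative, not recommendations). Per the note above, omitting either TTL field disables that deprovisioning mechanism entirely rather than falling back to a default:

```bash
# Hypothetical example -- values are illustrative only
cat <<EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  # Deprovision a node once it has held no non-daemonset pods for 30 seconds.
  # Omit this field and Karpenter will never scale down empty nodes.
  ttlSecondsAfterEmpty: 30
  # Delete (and replace) a node 30 days after it was provisioned, e.g. to
  # roll nodes onto newer Kubernetes versions. Omit this field and nodes
  # will never expire.
  ttlSecondsUntilExpired: 2592000
EOF
```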
website/content/en/v0.5.5/tasks/deprovisioning.md (12 additions, 7 deletions)
@@ -21,19 +21,24 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
* **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).

{{% alert title="Note" color="primary" %}}
-Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster
-brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
+- Automated deprovisioning is configured through the ProvisionerSpec .ttlSecondsAfterEmpty
+and .ttlSecondsUntilExpired fields. If either field is left empty, Karpenter will not
+default a value and will not terminate nodes in that condition.
+
+- Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for
+example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into
+the same batching window on expiration.
{{% /alert %}}

* **Node deleted**: You could use `kubectl` to manually remove a single Karpenter node:

```bash
# Delete a specific node
kubectl delete node $NODE_NAME

# Delete all nodes owned by any provisioner
kubectl delete nodes -l karpenter.sh/provisioner-name

# Delete all nodes owned by a specific provisioner
kubectl delete nodes -l karpenter.sh/provisioner-name=$PROVISIONER_NAME
```
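Since the label-selector forms above can remove many nodes at once, it is prudent to preview what the selector matches before deleting; a quick check using standard kubectl:

```bash
# List every node carrying the karpenter.sh/provisioner-name label
kubectl get nodes -l karpenter.sh/provisioner-name

# List only the nodes owned by a specific provisioner
kubectl get nodes -l karpenter.sh/provisioner-name=$PROVISIONER_NAME
```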
@@ -44,7 +49,7 @@ If the Karpenter controller is removed or fails, the finalizers on the nodes are

{{% alert title="Note" color="primary" %}}
By adding the finalizer, Karpenter improves the default Kubernetes process of node deletion.
When you run `kubectl delete node` on a node without a finalizer, the node is deleted without triggering the finalization logic. The instance will continue running in EC2, even though there is no longer a node object for it.
The kubelet isn’t watching for its own existence, so if a node is deleted the kubelet doesn’t terminate itself.
All the pod objects get deleted by a garbage collection process later, because the pods’ node is gone.
{{% /alert %}}
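To see the finalizer the note refers to, you can inspect the node object directly. The finalizer key below is an assumption for this version range; verify it against your own cluster:

```bash
# Print a node's finalizers; a Karpenter-managed node should list a
# termination finalizer (assumed here to be karpenter.sh/termination)
# until the cordon/drain/terminate sequence completes.
kubectl get node $NODE_NAME -o jsonpath='{.metadata.finalizers}'
```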
@@ -56,7 +61,7 @@ There are a few cases where requesting to deprovision a Karpenter node will fail
### Disruption budgets

Karpenter respects Pod Disruption Budgets (PDBs) by using a backoff retry eviction strategy. Pods will never be forcibly deleted, so pods that fail to shut down will prevent a node from deprovisioning.
Kubernetes PDBs let you specify how much of a Deployment, ReplicationController, ReplicaSet, or StatefulSet must be protected from disruptions when pod eviction requests are made.

PDBs can be used to strike a balance by protecting the application's availability while still allowing a cluster administrator to manage the cluster.
Here is an example where the pods matching the label `myapp` will block node termination if evicting the pod would reduce the number of available pods below 4.
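A PodDisruptionBudget matching that description would look roughly like the sketch below, assuming the pods are labeled `app: myapp` (use `policy/v1beta1` instead on clusters older than Kubernetes 1.21):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  # Evictions are refused whenever they would leave fewer than 4 available pods
  minAvailable: 4
  selector:
    matchLabels:
      app: myapp   # assumed label key/value
EOF
```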
website/content/en/v0.5.6/tasks/deprovisioning.md (12 additions, 7 deletions)
@@ -21,19 +21,24 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
* **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).

{{% alert title="Note" color="primary" %}}
-Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster
-brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
+- Automated deprovisioning is configured through the ProvisionerSpec .ttlSecondsAfterEmpty
+and .ttlSecondsUntilExpired fields. If either field is left empty, Karpenter will not
+default a value and will not terminate nodes in that condition.
+
+- Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for
+example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into
+the same batching window on expiration.
{{% /alert %}}

* **Node deleted**: You could use `kubectl` to manually remove a single Karpenter node:

```bash
# Delete a specific node
kubectl delete node $NODE_NAME

# Delete all nodes owned by any provisioner
kubectl delete nodes -l karpenter.sh/provisioner-name

# Delete all nodes owned by a specific provisioner
kubectl delete nodes -l karpenter.sh/provisioner-name=$PROVISIONER_NAME
```
@@ -44,7 +49,7 @@ If the Karpenter controller is removed or fails, the finalizers on the nodes are

{{% alert title="Note" color="primary" %}}
By adding the finalizer, Karpenter improves the default Kubernetes process of node deletion.
When you run `kubectl delete node` on a node without a finalizer, the node is deleted without triggering the finalization logic. The instance will continue running in EC2, even though there is no longer a node object for it.
The kubelet isn’t watching for its own existence, so if a node is deleted the kubelet doesn’t terminate itself.
All the pod objects get deleted by a garbage collection process later, because the pods’ node is gone.
{{% /alert %}}
@@ -56,7 +61,7 @@ There are a few cases where requesting to deprovision a Karpenter node will fail
### Disruption budgets

Karpenter respects Pod Disruption Budgets (PDBs) by using a backoff retry eviction strategy. Pods will never be forcibly deleted, so pods that fail to shut down will prevent a node from deprovisioning.
Kubernetes PDBs let you specify how much of a Deployment, ReplicationController, ReplicaSet, or StatefulSet must be protected from disruptions when pod eviction requests are made.

PDBs can be used to strike a balance by protecting the application's availability while still allowing a cluster administrator to manage the cluster.
Here is an example where the pods matching the label `myapp` will block node termination if evicting the pod would reduce the number of available pods below 4.
website/content/en/v0.6.0/tasks/deprovisioning.md (12 additions, 7 deletions)
@@ -21,19 +21,24 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
* **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).

{{% alert title="Note" color="primary" %}}
-Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster
-brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
+- Automated deprovisioning is configured through the ProvisionerSpec .ttlSecondsAfterEmpty
+and .ttlSecondsUntilExpired fields. If either field is left empty, Karpenter will not
+default a value and will not terminate nodes in that condition.
+
+- Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for
+example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into
+the same batching window on expiration.
{{% /alert %}}

* **Node deleted**: You could use `kubectl` to manually remove a single Karpenter node:

```bash
# Delete a specific node
kubectl delete node $NODE_NAME

# Delete all nodes owned by any provisioner
kubectl delete nodes -l karpenter.sh/provisioner-name

# Delete all nodes owned by a specific provisioner
kubectl delete nodes -l karpenter.sh/provisioner-name=$PROVISIONER_NAME
```
@@ -44,7 +49,7 @@ If the Karpenter controller is removed or fails, the finalizers on the nodes are

{{% alert title="Note" color="primary" %}}
By adding the finalizer, Karpenter improves the default Kubernetes process of node deletion.
When you run `kubectl delete node` on a node without a finalizer, the node is deleted without triggering the finalization logic. The instance will continue running in EC2, even though there is no longer a node object for it.
The kubelet isn’t watching for its own existence, so if a node is deleted the kubelet doesn’t terminate itself.
All the pod objects get deleted by a garbage collection process later, because the pods’ node is gone.
{{% /alert %}}
@@ -56,7 +61,7 @@ There are a few cases where requesting to deprovision a Karpenter node will fail
### Disruption budgets

Karpenter respects Pod Disruption Budgets (PDBs) by using a backoff retry eviction strategy. Pods will never be forcibly deleted, so pods that fail to shut down will prevent a node from deprovisioning.
Kubernetes PDBs let you specify how much of a Deployment, ReplicationController, ReplicaSet, or StatefulSet must be protected from disruptions when pod eviction requests are made.

PDBs can be used to strike a balance by protecting the application's availability while still allowing a cluster administrator to manage the cluster.
Here is an example where the pods matching the label `myapp` will block node termination if evicting the pod would reduce the number of available pods below 4.
website/content/en/v0.6.1/tasks/deprovisioning.md (12 additions, 7 deletions)
@@ -21,19 +21,24 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
* **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).

{{% alert title="Note" color="primary" %}}
-Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster
-brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
+- Automated deprovisioning is configured through the ProvisionerSpec .ttlSecondsAfterEmpty
+and .ttlSecondsUntilExpired fields. If either field is left empty, Karpenter will not
+default a value and will not terminate nodes in that condition.
+
+- Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for
+example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into
+the same batching window on expiration.
{{% /alert %}}

* **Node deleted**: You could use `kubectl` to manually remove a single Karpenter node:

```bash
# Delete a specific node
kubectl delete node $NODE_NAME

# Delete all nodes owned by any provisioner
kubectl delete nodes -l karpenter.sh/provisioner-name

# Delete all nodes owned by a specific provisioner
kubectl delete nodes -l karpenter.sh/provisioner-name=$PROVISIONER_NAME
```
@@ -44,7 +49,7 @@ If the Karpenter controller is removed or fails, the finalizers on the nodes are

{{% alert title="Note" color="primary" %}}
By adding the finalizer, Karpenter improves the default Kubernetes process of node deletion.
When you run `kubectl delete node` on a node without a finalizer, the node is deleted without triggering the finalization logic. The instance will continue running in EC2, even though there is no longer a node object for it.
The kubelet isn’t watching for its own existence, so if a node is deleted the kubelet doesn’t terminate itself.
All the pod objects get deleted by a garbage collection process later, because the pods’ node is gone.
{{% /alert %}}
@@ -56,7 +61,7 @@ There are a few cases where requesting to deprovision a Karpenter node will fail
### Disruption budgets

Karpenter respects Pod Disruption Budgets (PDBs) by using a backoff retry eviction strategy. Pods will never be forcibly deleted, so pods that fail to shut down will prevent a node from deprovisioning.
Kubernetes PDBs let you specify how much of a Deployment, ReplicationController, ReplicaSet, or StatefulSet must be protected from disruptions when pod eviction requests are made.

PDBs can be used to strike a balance by protecting the application's availability while still allowing a cluster administrator to manage the cluster.
Here is an example where the pods matching the label `myapp` will block node termination if evicting the pod would reduce the number of available pods below 4.
website/content/en/v0.6.2/tasks/deprovisioning.md (7 additions, 2 deletions)
@@ -21,8 +21,13 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
* **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).

{{% alert title="Note" color="primary" %}}
-Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster
-brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
+- Automated deprovisioning is configured through the ProvisionerSpec .ttlSecondsAfterEmpty
+and .ttlSecondsUntilExpired fields. If either field is left empty, Karpenter will not
+default a value and will not terminate nodes in that condition.
+
+- Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for
+example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into
+the same batching window on expiration.
{{% /alert %}}

* **Node deleted**: You could use `kubectl` to manually remove a single Karpenter node:
website/content/en/v0.6.3/tasks/deprovisioning.md (7 additions, 2 deletions)
@@ -21,8 +21,13 @@ There are both automated and manual ways of deprovisioning nodes provisioned by
* **Node expired**: Karpenter requests to delete the node after a set number of seconds, based on the provisioner `ttlSecondsUntilExpired` value, from the time the node was provisioned. One use case for node expiry is to handle node upgrades. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).

{{% alert title="Note" color="primary" %}}
-Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for example, if a cluster
-brings up all nodes at once, all the pods on those nodes would fall into the same batching window on expiration.
+- Automated deprovisioning is configured through the ProvisionerSpec .ttlSecondsAfterEmpty
+and .ttlSecondsUntilExpired fields. If either field is left empty, Karpenter will not
+default a value and will not terminate nodes in that condition.
+
+- Keep in mind that a small NodeExpiry results in a higher churn in cluster activity. So, for
+example, if a cluster brings up all nodes at once, all the pods on those nodes would fall into
+the same batching window on expiration.
{{% /alert %}}

* **Node deleted**: You could use `kubectl` to manually remove a single Karpenter node: