When draining nodes from a Kubespray cluster, it is possible to hit a scenario where the drain does not complete: the drain_grace_period and drain_timeout are exceeded and the task fails. Since ignore_errors is set to true, the play moves on as if no error occurred, and ultimately the remove-node/post-remove role runs and deletes the node before it has been drained.
In my case the pod being drained was waiting for another pod to come up to satisfy a disruption budget, and ideally the drain would have failed and the node would not have been deleted. To further complicate the issue, the pod whose node was deleted out from under it had a persistent volume, so the pod fails to start on another node because the volume can't be mounted twice.
I'd like to propose the following improvements to handle node draining with more care (see the sketch after this list):
- Add retries to the drain task to give it more than one chance to connect and drain.
- Remove ignore_errors from the drain tasks.
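As a rough sketch of both proposals combined — this is not the exact Kubespray task; the kubectl flags and the delegate_to/bin_dir plumbing only approximate the real task at 5517e62, and drain_retries / drain_retry_delay are hypothetical new variables introduced here for illustration:

```yaml
- name: remove-node | Drain node except daemonsets resource
  command: >-
    {{ bin_dir }}/kubectl drain
      --force
      --ignore-daemonsets
      --grace-period {{ drain_grace_period }}
      --timeout {{ drain_timeout }}
      --delete-local-data {{ kube_override_hostname | default(inventory_hostname) }}
  delegate_to: "{{ groups['kube-master'] | first }}"
  register: drain_result
  # Proposal 1: retry the drain instead of giving up after a single attempt,
  # so a transient connection failure doesn't doom the drain.
  until: drain_result.rc == 0
  retries: "{{ drain_retries | default(3) }}"
  delay: "{{ drain_retry_delay | default(10) }}"
  # Proposal 2: no ignore_errors. If every retry fails, the play stops here,
  # before remove-node/post-remove can delete the undrained node.
```

With until/retries, Ansible reruns the command until it succeeds or the retries are exhausted, and only then marks the task failed; combined with dropping ignore_errors, a genuinely stuck drain (e.g. one blocked by a disruption budget) aborts the play instead of silently proceeding to node deletion.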
Environment:
- Cloud provider or hardware configuration: vSphere
- OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"): CentOS 7
- Version of Ansible (ansible --version): 2.9.16
- Version of Python (python --version): 2.7.5
- Kubespray version (commit) (git rev-parse --short HEAD): 5517e62
- Network plugin used: Calico
Command:
ansible-playbook remove-node.yml -i hosts.ini -e node=NODE_NAME -e reset_nodes=false
Task with bug:
kubespray/roles/remove-node/pre-remove/tasks/main.yml, line 13 at commit 5517e62
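For reference, the shape of that task — paraphrased from memory, not a verbatim copy of the file at 5517e62, with the drain flags elided — ends with the error suppression described above:

```yaml
- name: remove-node | Drain node except daemonsets resource
  # kubectl drain invocation with --grace-period/--timeout flags elided
  command: "{{ bin_dir }}/kubectl drain ..."
  ignore_errors: true  # a failed or timed-out drain is swallowed; the play continues
```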