Daemonset Eviction during Scale down #4337
Hi. If I'm right, then I believe setting the new […] flag would cover this.

I'm adding to this case because it is similar enough to the request I was about to make. We have application deployments with a preStop lifecycle hook to make sure they complete their work before they shut down. To prove that the daemonsets were being terminated too early, I set up a test.
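The test looked roughly like the following deployment (a sketch; names, image, and sleep durations are illustrative, not the original manifest):

```yaml
# Illustrative test deployment: the preStop sleep simulates draining
# in-flight work before shutdown. All names and durations are made up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prestop-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: prestop-test
  template:
    metadata:
      labels:
        app: prestop-test
    spec:
      terminationGracePeriodSeconds: 120   # must exceed the preStop sleep
      containers:
        - name: app
          image: busybox
          command: ["sh", "-c", "while true; do sleep 3600; done"]
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 60"]  # "finish in-flight work"
```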
After deploying the above, it looks like so: […]

I then edit the […] to trigger a scale-in.

I then waited for cluster autoscaler to start a scale-in and caught it at this point: […]

Here you can see a new […]. Then a bit later I see this: […]

The old […]. The above was tested with cluster autoscaler version […].
Is it possible to provide a way for the daemonset evictions to wait until all other pods are gone or in the Terminating state?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: […]

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

(The bot later applied /lifecycle rotten and then /close after continued inactivity.)

@k8s-triage-robot: Closing this issue. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Encountered the same issue. We have pods that have preStop hooks with sleep commands. In our case, we have statefulsets that depend on the daemonsets. When CA scales down nodes, all the pods are evicted, including the daemonset pods. From the previous comment I see mention of the […] flag. If anyone has solved this issue, feel free to comment here; I would greatly appreciate it.
This can now be solved using the `--daemonset-eviction-for-occupied-nodes` flag.
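For anyone finding this later, that flag goes on the cluster-autoscaler container's command line; a minimal sketch (the image tag and the other args shown are illustrative, not from this thread):

```yaml
# Fragment of a cluster-autoscaler Deployment spec.
# --daemonset-eviction-for-occupied-nodes is available from CA 1.22;
# everything else here is illustrative.
containers:
  - name: cluster-autoscaler
    image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.22.2
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      - --daemonset-eviction-for-occupied-nodes=false  # leave daemonset pods running until the node terminates
```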
Thank you for pointing this out.
Hello,

I want to understand how to handle the below scenario with CA.

When the EKS CA decides to scale down a node (part of a managed node group) that runs daemonsets like fluent-bit (shipping logs from apps) and SignalFx (tracing and metrics), what configuration do I need on CA to make sure those daemonsets are not evicted, since apps may still be using them during the scale-down window (within their graceful termination timeouts)?

Is there a config in the CA setup to skip daemonset eviction and let the daemonsets run until the node is terminated?

I am fine even if these daemonsets are not gracefully stopped, as the apps using them shut down gracefully (with their own graceful shutdown timeouts).
My current CA configuration (EKS 1.21):

Image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.0
Thank you
Balaji
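A hypothetical way to apply the flag discussed above to this setup (the deployment name, namespace, and image tag are assumptions; the flag requires a CA 1.22+ image, so v1.21.0 would need upgrading first):

```sh
# Assumes the conventional cluster-autoscaler install in kube-system.
# Upgrade the image to a release that has the flag (1.22+):
kubectl -n kube-system set image deployment/cluster-autoscaler \
  cluster-autoscaler=k8s.gcr.io/autoscaling/cluster-autoscaler:v1.22.2

# Then edit the deployment and append to the container command:
#   --daemonset-eviction-for-occupied-nodes=false
kubectl -n kube-system edit deployment/cluster-autoscaler
```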