It is unclear from the documentation what happens when a pod with a `preferredDuringSchedulingIgnoredDuringExecution` affinity has a choice between:

- evicting a lower-priority pod on the preferred node type and taking its place, or
- not doing that, and scheduling onto a node that does not match the preferred criteria.
I believe the latter is the default behavior. It would be nice to call this out explicitly in the documentation, or better yet, make it configurable. I am trying to run a buffer pod to reserve space on preferred nodes as a hedge against scale-up delays, and would prefer to always evict such a pod rather than schedule the deploying workload onto non-preferred nodes.
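For reference, the buffer-pod pattern I have in mind looks roughly like this. It is only a sketch: the `node-type: preferred` label, the names, and the resource numbers are placeholders, not values from a real cluster.

```yaml
# Negative-priority class so buffer pods are always the cheapest thing
# for the scheduler to preempt when a real workload needs the room.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: buffer-priority
value: -10
globalDefault: false
description: "Placeholder pods that reserve headroom on preferred nodes."
---
apiVersion: v1
kind: Pod
metadata:
  name: capacity-buffer
spec:
  priorityClassName: buffer-priority
  # Pin the buffer to the preferred node type so the reserved headroom
  # sits where the real workload would prefer to land. The label is a
  # placeholder for whatever actually identifies those nodes.
  nodeSelector:
    node-type: preferred
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: "1"
        memory: 2Gi
```

The negative priority makes the buffer the first candidate for preemption, but preemption only kicks in when the incoming pod is unschedulable on every node, which is exactly why a free non-preferred node wins today.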
Thanks! :)
This seems to be an issue related to the scheduler rather than Cluster Autoscaler. As such, it should probably be opened in the kubernetes/kubernetes repo and tagged sig-scheduling.
Cluster Autoscaler only cares about making each pod schedulable; it completely ignores the scheduler's scoring (priority) functions (i.e. it respects requiredDuringScheduling, but it doesn't care whatsoever about preferredDuringScheduling).
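To illustrate the distinction, here is a minimal sketch of a pod spec with both kinds of node affinity (the `node-type` label key and its values are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity-example
spec:
  affinity:
    nodeAffinity:
      # Hard constraint: Cluster Autoscaler simulates this when deciding
      # whether a scale-up would make the pod schedulable.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-type
            operator: In
            values: ["general"]
      # Soft preference: only affects the scheduler's node scoring;
      # Cluster Autoscaler ignores it entirely.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: node-type
            operator: In
            values: ["preferred"]
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```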