
clarify behavior of node affinity in the context of evictable pods #2804

Closed

irl-segfault opened this issue Feb 6, 2020 · 1 comment

@irl-segfault

Hi

It is unclear from the documentation what happens when a pod with a preferredDuringSchedulingIgnoredDuringExecution node affinity has a choice between the following (a minimal sketch of such an affinity follows the list):

  • evicting a lower priority pod on the preferred node type, and taking its place
  • not doing that, and scheduling onto a node that does not match the preferred criteria
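
For concreteness, this is roughly the kind of affinity I mean; the node-type label key, its value, and the pod/image names are placeholders for whatever actually distinguishes the preferred nodes in my cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: workload-pod               # placeholder name, for illustration only
spec:
  affinity:
    nodeAffinity:
      # "preferred" is a soft constraint: the scheduler scores nodes by it,
      # but will still place the pod on a non-matching node if needed.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: node-type         # placeholder label key
            operator: In
            values:
            - preferred            # placeholder label value
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
```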

I believe the latter is the default behavior. It would be nice to call this out explicitly in the documentation or, better yet, make it configurable. I am trying to run a buffer pod to reserve space on preferred nodes as a hedge against scale-up delays, and I would prefer to always evict such a pod rather than schedule the deploying workload onto non-preferred nodes. A rough sketch of what I mean by a buffer pod is below.
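
The buffer pod follows the usual overprovisioning pattern: a low-priority placeholder whose only job is to hold a resource request until a real workload preempts it. All names and sizes here are illustrative:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: buffer-priority            # hypothetical name
value: -10                         # lower than any real workload, so buffer pods are preempted first
globalDefault: false
description: "Placeholder pods that reserve capacity on preferred nodes."
---
apiVersion: v1
kind: Pod
metadata:
  name: buffer-pod                 # hypothetical name
spec:
  priorityClassName: buffer-priority
  containers:
  - name: reserve
    image: registry.k8s.io/pause:3.9   # pause container just holds the resource request
    resources:
      requests:
        cpu: "500m"                # size of the reservation, adjust as needed
        memory: 512Mi
```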

Thanks! :)

@MaciekPytel
Contributor

This seems to be an issue related to the scheduler rather than to Cluster Autoscaler. As such, it should probably be opened in the kubernetes/kubernetes repo and tagged for sig-scheduling.

Cluster Autoscaler only cares about making each pod schedulable; it completely ignores scheduler priority functions (i.e. it respects requiredDuringScheduling but does not take preferredDuringScheduling into account at all).
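
In other words, only a hard constraint along the lines of the sketch below would influence Cluster Autoscaler's scheduling simulation (label key and value are placeholders):

```yaml
affinity:
  nodeAffinity:
    # "required" is a hard constraint, so both the scheduler and
    # Cluster Autoscaler's simulation have to honor it.
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-type         # placeholder label key
          operator: In
          values:
          - preferred            # placeholder label value
```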
