Move away from encouraging Deployment for additional schedulers #12785
Comments
If this guide were to recommend enabling leader election, and explained why, would that make using a Deployment suitable / acceptable?
In my opinion, leader election being false with a single replica is the right approach, as the other way round would only mean additional overhead without any gain. In this case I don't see a Deployment fitting. On the other hand, if multiple replicas of the scheduler were to be deployed, leader election should be set to true. Leader election would mean the […] So in an HA scheduler with multiple replicas both StatefulSet and Deployment would be valid, while in single-instance schedulers without leader election a StatefulSet should be preferred. I would also like to mention that for an HA replicated scheduler, a podAntiAffinity to itself should be set in order to avoid co-locating multiple replicas.
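To make the HA variant concrete, a minimal sketch of a replicated second scheduler with leader election enabled and a podAntiAffinity against its own pods could look like this. All names (`my-scheduler`), the namespace, the image tag, and the exact flags are illustrative assumptions, not taken from this thread:

```yaml
# Hypothetical sketch: an HA second scheduler.
# - leader election is enabled, so only one replica schedules at a time
# - requiredDuringSchedulingIgnoredDuringExecution podAntiAffinity keeps
#   the replicas on different nodes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-scheduler          # placeholder name
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      component: my-scheduler
  template:
    metadata:
      labels:
        component: my-scheduler
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                component: my-scheduler
            topologyKey: kubernetes.io/hostname
      containers:
      - name: kube-scheduler
        image: k8s.gcr.io/kube-scheduler:v1.17.0   # illustrative tag
        command:
        - kube-scheduler
        - --leader-elect=true
```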
It sounds like we could decide between:
and then tweak the document based on which approach we favor. Does that sound reasonable?
I was sticking to the simple approach, as that document is usually the starting point for extra-scheduler setups. In the simplest case a single instance with leader election disabled, as it is now, is enough. But with leader election turned off a Deployment's default rolling updates are dangerous, and that's why I suggested changing to a StatefulSet. We could actually also add a comment at the end mentioning that to achieve scheduler resilience, multiple replicas with leader election turned on and a pod anti-affinity to itself should be used. In this complex case StatefulSets do not offer a lot over Deployments, but they are also a nice fit, so I would not mention switching to Deployment in this case (KISS).
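For reference, the alternative mentioned earlier in the thread (keeping a Deployment but avoiding overlapping replicas during updates) is a one-field change to the spec. A hedged fragment; the field names are from the `apps/v1` API, the rest is illustrative:

```yaml
# Fragment of a Deployment spec: the Recreate strategy terminates the old
# scheduler pod before creating the new one, so two instances never run
# concurrently even with leader election disabled.
spec:
  replicas: 1
  strategy:
    type: Recreate   # default is RollingUpdate, which can overlap pods
```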
Issues go stale after 90d of inactivity. Mark the issue as fresh with `/remove-lifecycle stale`. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. Mark the issue as fresh with `/remove-lifecycle rotten`. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Reopen the issue with `/reopen`. Mark the issue as fresh with `/remove-lifecycle rotten`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This is a...
/sig scheduling
/kind feature
Problem:
Currently it is suggested in Tasks/Administer a Cluster/Configure multiple schedulers (source) to deploy schedulers with a `Deployment` and leader election disabled. Rolling updates of `Deployment`s may imply that multiple replicas of the scheduler are running concurrently, and thus they can interfere with each other, as leader election was turned off.

Proposed Solution:
`StatefulSet`s, or `Deployment`s with the `Recreate` deployment strategy type, would solve this issue. I personally prefer a `StatefulSet`, among other reasons because it does not create an intermediate object (`ReplicaSet`).

Page to Update:
https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/#define-a-kubernetes-deployment-for-the-scheduler
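A sketch of the proposed `StatefulSet` variant might look like the following. Only the choice of a StatefulSet with a single replica and `--leader-elect=false` comes from the issue; the name, namespace, labels, and image tag are assumptions for illustration:

```yaml
# Hypothetical sketch of the proposed StatefulSet-based second scheduler.
# A StatefulSet deletes the old pod before creating its replacement during
# updates, so two non-leader-elected instances never run concurrently.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-scheduler          # placeholder name
  namespace: kube-system
spec:
  serviceName: my-scheduler   # required field; a matching headless Service is assumed
  replicas: 1
  selector:
    matchLabels:
      component: my-scheduler
  template:
    metadata:
      labels:
        component: my-scheduler
    spec:
      containers:
      - name: kube-scheduler
        image: k8s.gcr.io/kube-scheduler:v1.17.0   # illustrative tag
        command:
        - kube-scheduler
        - --leader-elect=false
```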