support overprovisioning without pending pods #5377
Comments
@grosser would the following not work for you?
That does not work.
I am not sure about the behavior of descheduler with cluster-autoscaler so I will let someone else answer this.
This should work. I wonder if the pods you want to schedule have a higher PriorityClass than the overprovisioning pods (the default priority is 0).
Re scheduler: it does not preempt. Setup: workloads on the full node were not preempted,
so that's why "imaginary pods" would allow the scheduler to use the right AZ for ScheduleAnyway.
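For reference, the soft spread being discussed looks roughly like the sketch below; the Deployment name, image, and replica count are illustrative assumptions, not taken from this issue. With `whenUnsatisfiable: ScheduleAnyway` the constraint only influences scoring, so the scheduler will place the pod in the already-open AZ rather than preempt anything to balance zones.

```yaml
# Illustrative sketch of a soft (ScheduleAnyway) zone spread; all names and
# values are assumptions, not the reporter's actual workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: az-spread-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: az-spread-example
  template:
    metadata:
      labels:
        app: az-spread-example
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway   # soft: scheduler prefers balance but never preempts for it
          labelSelector:
            matchLabels:
              app: az-spread-example
      containers:
        - name: app
          image: registry.k8s.io/pause:3.9
```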
@grosser when you say
You might want to use
I'm using
and with the scheduler not pre-empting other workloads it does not matter if we have a buffer.
I found a hit for tests around TopologySpreadConstraint in the code (https://github.com/kubernetes/autoscaler/blob/18f2e67c4f1bed6bbb1e3273bfcd387505a090aa/cluster-autoscaler/estimator/binpacking_estimator_test.go#L63-L77) and a related issue which seems to suggest CA does support TopologySpreadConstraint (#3879). This might need a deeper look.
It supports it, but only for DoNotSchedule.
Interesting.
I am not sure if cluster-autoscaler should be supporting this, since it seems like a soft constraint (related). I wonder what problems DoNotSchedule creates.
problems that DoNotSchedule creates:
I would expect
That does work, the scheduler will kick out the overprovisioned pods.
here is a PR to support this: #5611
thx, but they don't really help me ... I want to make ScheduleAnyway behave nicer to reduce our skew.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@grosser you might find https://github.com/kubernetes/autoscaler/pull/5848/files interesting.
thx, looks interesting
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
I tried ProvisionRequest, but it's not what I want:
... what I want is a "ProvisionRequest" that is global + never expires.
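For reference, a minimal ProvisionRequest (the `ProvisioningRequest` CRD with the check-capacity class) might look like the sketch below; the object names, pod template, and count are assumptions for illustration. As noted above, such a request is namespaced and only reserves capacity for a limited time, which is why it does not cover the "global + never expires" case.

```yaml
# Rough sketch of a check-capacity ProvisioningRequest; names, counts, and
# requests are illustrative assumptions.
apiVersion: v1
kind: PodTemplate
metadata:
  name: capacity-buffer-template
template:
  spec:
    containers:
      - name: placeholder
        image: registry.k8s.io/pause:3.9
        resources:
          requests:
            cpu: "1"
            memory: 1Gi
---
apiVersion: autoscaling.x-k8s.io/v1beta1
kind: ProvisioningRequest
metadata:
  name: capacity-buffer
spec:
  provisioningClassName: check-capacity.autoscaling.x-k8s.io
  podSets:
    - count: 5
      podTemplateRef:
        name: capacity-buffer-template
```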
@grosser: Reopened this issue.
any thoughts on reading a CRD that we then convert to an in-memory ProvisionRequest, so CA makes room but the actual scheduler ignores it?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Which component are you using?:
cluster-autoscaler
Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:
We use pending pods for overprovisioning at the moment, but that results in there not being "real" gaps,
so when an AZ-preferred workload wants to get scheduled it always goes to the open AZ instead of kicking out an overprovisioned pod.
Describe the solution you'd like.:
make CA pretend there are pending pods when doing the math and scale based on that
Describe any alternative solutions you've considered.:
the current recommended approach (see the sketch below)
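For context, the commonly documented overprovisioning pattern referred to here is a low-priority placeholder Deployment: CA scales up when the placeholders go pending, and the scheduler preempts them when real workloads need the space. A minimal sketch follows; the priority value, replica count, and resource requests are illustrative assumptions, not values from this issue.

```yaml
# Minimal sketch of overprovisioning via low-priority placeholder pods.
# Priority value, replica count, and requests are illustrative assumptions.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -1          # lower than real workloads, but above CA's expendable-pods-priority-cutoff (default -10)
globalDefault: false
description: "Placeholder pods that reserve headroom for cluster-autoscaler."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 2
  selector:
    matchLabels:
      app: overprovisioning
  template:
    metadata:
      labels:
        app: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: "1"
              memory: 1Gi
```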
Additional context.:
#4384 describes the same issue but lacks context
/cc @mattyev87 @MaciekPytel