Cluster Autoscaler: Having resource "slack" without overprovisioning #4384
Hi dear community,

I humbly ask for your help 🙏 with a puzzle I cannot solve myself:

Can Cluster Autoscaler keep some resource "slack" so that excess nodes are always available, without overprovisioning? Or does it have a hard dependency on pods in "Pending" status as the scale-up trigger? 🤔

Just curious whether we can provide some resource slack without running a "ballast" load on the cluster? 😇

Thank you very much!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
CA only supports scaling up based on Pending pods, and it calculates how many nodes are needed by using embedded scheduler code to simulate scheduling of those pods. There is no formula there, so the result can't easily be tweaked using some metric.
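For completeness: the workaround the Cluster Autoscaler FAQ documents for keeping headroom is to run preemptible placeholder ("pause") pods at negative priority, so slack capacity is reserved but real workloads can displace it immediately. A minimal sketch, where the `overprovisioning` name, replica count, and resource requests are illustrative placeholders to tune:

```yaml
# Sketch of the low-priority placeholder pattern from the Cluster Autoscaler
# FAQ; names and sizes below are illustrative assumptions, not fixed values.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning          # hypothetical name
value: -10                        # below the default (0), so any real pod preempts these
globalDefault: false
description: "Placeholder pods that reserve headroom for Cluster Autoscaler."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning          # hypothetical name
  namespace: default
spec:
  replicas: 2                     # amount of slack; tune to the headroom you want
  selector:
    matchLabels:
      run: overprovisioning
  template:
    metadata:
      labels:
        run: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
      - name: reserve-resources
        image: registry.k8s.io/pause:3.9   # does nothing; only holds the requests
        resources:
          requests:
            cpu: "1"              # each replica reserves 1 CPU / 500Mi of slack
            memory: 500Mi
```

When real pods arrive, the scheduler preempts the placeholders; the evicted placeholders go Pending, which is exactly the trigger CA scales up on, so the headroom is replenished ahead of demand. In other words, the slack still takes the form of a ballast load, but one that yields instantly to real workloads.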
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.