Autoscaling to zero #1328

/kind feature

Describe the solution you'd like
Auto-scaling to and from zero nodes has been implemented upstream (kubernetes/autoscaler#4840). We should make sure that we implement the provider side of this contract.
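For context, the upstream contract has the cluster-autoscaler read a status.capacity field from the infrastructure machine template referenced by an empty node group. A minimal sketch of what the provider side could look like here, assuming the CAPO controller fills the field in from the configured flavor (the apiVersion, template name, and numbers are illustrative):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
metadata:
  name: example-worker-template   # hypothetical template name
spec:
  template:
    spec:
      flavor: m1.large            # the controller would look this flavor up
status:
  # Populated by the provider from the flavor's vCPU and RAM so the
  # autoscaler can simulate a node for a node group at zero replicas.
  capacity:
    cpu: "4"
    memory: "8Gi"
```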
Comments
Adding a +1 because it would be a great feature!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
IIRC autoscaling from 0 already works, but it requires extra labels/annotations for the cluster-autoscaler. Implementing this would remove the need for these extra labels/annotations, so it would be a bit easier for the user. For me, this wouldn't be very high priority.
This is correct, I think. However, I don't really want to have to query the flavor and add custom annotations to my MachineDeployment when a mechanism exists for the infrastructure provider to do this for me. Agree it isn't high, high priority, but it would certainly be good to implement the support since the hooks already exist in CAPI.
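For anyone needing this today, a minimal sketch of the manual workaround described above, using the size and capacity annotations that the cluster-autoscaler's Cluster API provider recognizes; the MachineDeployment name, bounds, and flavor-derived numbers are illustrative:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: example-md-0   # hypothetical node group
  annotations:
    # Let the autoscaler scale this node group between 0 and 5 replicas.
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
    # Capacity hints used to simulate a node while the group is empty;
    # today the user copies these from the OpenStack flavor by hand.
    capacity.cluster-autoscaler.kubernetes.io/cpu: "4"
    capacity.cluster-autoscaler.kubernetes.io/memory: "8Gi"
```

Implementing the provider side of the contract would let the autoscaler read the same capacity information from the infrastructure machine template instead of these hand-maintained annotations.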
/lifecycle stale
/remove-lifecycle stale
@nikParasyr @mkjpryor Do either of you want to work on this, btw? Seems like a good feature.
@mdbooth I don't have any time to work on this (unfortunately).
/remove-lifecycle stale
@mdbooth We think a customer might be about to ask for this. If they do, then we will work on it.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/lifecycle rotten