--balance-similar-node-groups and scale from zero don't work together for EBS volumes #4305
Comments
I think you are using the aws-ebs-csi driver: see #3845
I think it's similar, but I'm using the following label instead: https://kubernetes.io/docs/reference/labels-annotations-taints/#topologykubernetesiozone. I just happened to include the CSINode log in my log snippet. What's interesting is that while doing some more testing, scale from 0 worked if the StatefulSet was brand new. So the first time the StatefulSet was deployed, the scale-up would work. But if I already had a working node in the EBS volume's region and then manually did something like a …
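For context, scale-from-zero node templates on AWS only know about labels that are advertised on the Auto Scaling group itself. Below is a minimal sketch of ASG tags one might add so the template carries the zone labels a pending pod selects on; the `k8s.io/cluster-autoscaler/node-template/label/` tag prefix is the documented Cluster Autoscaler convention, while the zone value and the EBS CSI topology key shown are illustrative assumptions:

```yaml
# Sketch: Auto Scaling group tags (CloudFormation-style) so the node template
# used while the group is scaled to zero advertises the zone labels.
# us-east-1a is a placeholder; use your node group's actual AZ.
Tags:
  - Key: k8s.io/cluster-autoscaler/node-template/label/topology.kubernetes.io/zone
    Value: us-east-1a
    PropagateAtLaunch: true
  - Key: k8s.io/cluster-autoscaler/node-template/label/topology.ebs.csi.aws.com/zone
    Value: us-east-1a
    PropagateAtLaunch: true
```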
Stable topology labels were recently added to cloudprovider/aws' template node builder. 1.22 should work, but older versions would only infer the legacy (beta) topology label.
Yes, I understand what you mean. I noticed the same issue, with cluster-autoscaler complaining about a missing CSINode. My understanding is that it tries to read the CSINode object for the template node via the API, but that object does not exist because the group is scaled to zero and the node template is not a real node. I haven't had time to do more debugging; if you have more information, I'd be happy to hear it.
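To make the mismatch concrete: a CSINode object is only created for a real node once the kubelet registers the CSI driver, so a node group scaled to zero has no such object for the autoscaler to consult. A hedged sketch of what one looks like on a node running the EBS CSI driver (the node name and instance ID are made up):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  name: ip-10-0-1-23.ec2.internal        # hypothetical node name
spec:
  drivers:
    - name: ebs.csi.aws.com
      nodeID: i-0123456789abcdef0        # hypothetical EC2 instance ID
      topologyKeys:
        - topology.ebs.csi.aws.com/zone  # topology key the driver reports for EBS volumes
```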
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its standard lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Any updates?
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its standard lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its standard lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its standard lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Which component are you using?:
cluster-autoscaler
What version of the component are you using?:
k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.0
Helm chart version: 9.10.6
What k8s version are you using (`kubectl version`)?:
What environment is this in?:
AWS EKS
What did you expect to happen?:
If I have a StatefulSet that requires a pod in the same AZ as the EBS volume, and there are zero available pods in that AZ, I'd expect `--scale-from-zero` and `--balance-similar-node-groups` to work together.
What happened instead?:
`cluster-autoscaler` just keeps repeating that no node is available that matches the `nodeSelector` I have applied to my StatefulSet.
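For reproduction purposes, here is a minimal sketch of the kind of StatefulSet described above, pinning pods to the volume's AZ with a nodeSelector; the names, zone value, image, and storage class are assumptions for illustration, not taken from the original report:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zonal-app                                 # hypothetical name
spec:
  serviceName: zonal-app
  replicas: 1
  selector:
    matchLabels:
      app: zonal-app
  template:
    metadata:
      labels:
        app: zonal-app
    spec:
      nodeSelector:
        topology.kubernetes.io/zone: us-east-1a   # must match the EBS volume's AZ
      containers:
        - name: app
          image: nginx                            # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2                     # assumed EBS-backed storage class
        resources:
          requests:
            storage: 10Gi
```

With this shape, the pending pod can only be scheduled in the volume's AZ; the report is that when the matching node group is at zero, the autoscaler's synthetic node template does not satisfy the selector, so no scale-up happens.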