Cluster Autoscaler doesn't scale up empty ASGs with podTopologySpread #4362
Comments
We are experiencing the same problem. @evansheng I didn't try your workaround yet, but the explanation seems rather confusing to me. Even without adding the zone labels as tags to the ASG, I can make cluster-autoscaler scale up ASGs that have 0 instances when my pods contain a label selector for an availability zone. This indicates that cluster-autoscaler is actually aware of the zones of ASGs that have 0 instances and should also be able to use this information to correctly scale according to topology spread constraints. (Obviously it's not, that's the confusing part for me 🙂)
Haven't looked into this too deeply, but I'm unsure whether a node label selector is handled in exactly the same way as a pod topology spread constraint. When we were investigating this, we saw pod topology spread failures come up as execution was handed off to the scheduler predicate-checking code. Are you using AWS? Are you selecting on the well-known annotations & taints from kubelet?
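For illustration, here is a minimal sketch of the two mechanisms being compared in this thread; the zone value, pod names, app label, and image are placeholders rather than details from the original report:

```yaml
# Sketch: node label selector vs. pod topology spread constraint.
# Zone value (us-east-1a) and app label (app: web) are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: zone-selector-example
  labels:
    app: web
spec:
  # Hard zone pin via nodeSelector (reported above to trigger scale-up of empty ASGs)
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  containers:
    - name: web
      image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: topology-spread-example
  labels:
    app: web
spec:
  # Zone spreading via topologySpreadConstraints (reported NOT to trigger scale-up)
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: web
  containers:
    - name: web
      image: nginx
```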
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/remove-lifecycle rotten
We did some investigating and believe this is fixed in v1.22+. Prior to this version, only the deprecated topology labels (such as failure-domain.beta.kubernetes.io/zone) were applied to the node templates built for empty node groups.
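For reference, the deprecated and GA zone labels mentioned here look like this on a node object; the node name and zone value are placeholders:

```yaml
# Deprecated vs. GA zone label for the same zone (values are placeholders).
apiVersion: v1
kind: Node
metadata:
  name: example-node
  labels:
    failure-domain.beta.kubernetes.io/zone: us-east-1a   # deprecated topology label
    topology.kubernetes.io/zone: us-east-1a              # GA label used by the topology spread constraints in this issue
```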
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closing this issue in response to the /close command above.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Which component are you using?:
Cluster-autoscaler
What version of the component are you using?:
Component version: 1.18.3
What k8s version are you using (kubectl version)?:
What environment is this in?:
AWS
What did you expect to happen?:
We expected all ASGs in the cluster to scale normally, according to the priority expander. (Though this bug would happen with any expander type chosen.)
What happened instead?:
We found that empty ASGs weren't scaling up, even when they were the highest tier in the priority expander configmap ladder.
The Cluster Autoscaler logs cited that the empty ASG's nodes weren't matching the PodTopologySpread predicate constraints. However, the ASG chosen to scale up instead is in the same availability zone (which we specified in our pod topology spread spec) as the one cited as failing. An example log line is shown below.
How to reproduce it (as minimally and precisely as possible):
Add an empty ASG and node type to the cluster while using pod topology spread constraints across availability zones on all workloads. Add the new node type to the top of the priority expander configmap ladder, and observe the logs.
Every scale-up will trigger log lines saying the new node type does not fulfill the pod topology spread constraints, and Cluster Autoscaler will fall back to a lower-priority ASG that is not empty.
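To make the reproduction concrete, here is a sketch of the kind of workload spec involved; the deployment name, app label, replica count, and image are illustrative placeholders, not taken from the original report:

```yaml
# Illustrative Deployment spreading pods across availability zones.
# All names, labels, and counts are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spread-example
spec:
  replicas: 6
  selector:
    matchLabels:
      app: spread-example
  template:
    metadata:
      labels:
        app: spread-example
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone   # the label missing from empty-ASG node templates
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: spread-example
      containers:
        - name: app
          image: nginx
```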
Anything else we need to know?:
I traced through the code and found the underlying issue.
When Cluster Autoscaler looks at node groups to scale, it does one of two things to fill out a nodeInfo object to use in the clusterSnapshot: if the node group already has a running node, it copies that node's info; if the node group is empty, it builds a template nodeInfo from the node group's (ASG's) configuration.
However, certain labels (including topology.kubernetes.io/zone) are applied by kubelet after node start. This means that the empty ASG node templates don't have the zone label applied, and will never fulfill the pod topology spread constraint. We have worked around this by adding the zone label manually to all of our ASG templates, but it is not very intuitive that this must be done in order to use Cluster Autoscaler with pod topology spread.
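For reference, a sketch of that workaround as we understand it: tag each ASG so Cluster Autoscaler can attach the zone label to the node template it builds for an empty group. The tag keys follow the node-template convention from the AWS cloud provider documentation; the ASG names and zone values are placeholders, and how the tags are applied (console, Terraform, CloudFormation, etc.) is up to you:

```yaml
# ASG tags shown as key: value pairs, one ASG per zone; values are placeholders.
# The k8s.io/cluster-autoscaler/node-template/label/<label> convention lets CA
# add the label to node templates for groups with zero running instances.
asg-zone-a:
  k8s.io/cluster-autoscaler/node-template/label/topology.kubernetes.io/zone: us-east-1a
asg-zone-b:
  k8s.io/cluster-autoscaler/node-template/label/topology.kubernetes.io/zone: us-east-1b
```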
I believe a fix for this would be to add all of the kubelet well-known labels, annotations, and taints to the nodeInfo structs when populating them from the ASG, as they should be static and determinable from each ASG.