
Cluster Autoscaler doesn't scale up empty ASGs with podTopologySpread #4362

Closed

Labels: area/cluster-autoscaler · kind/bug · lifecycle/rotten

Comments

@evansheng (Contributor)

Which component are you using?:

Cluster-autoscaler

What version of the component are you using?:

Component version: 1.18.3

What k8s version are you using (kubectl version)?:

kubectl version output: 1.17

What environment is this in?:

AWS

What did you expect to happen?:
We expected all ASGs in the cluster to scale up normally, according to the priority expander. (This bug would happen with any expander type, though.)

What happened instead?:
We found that empty ASGs weren't scaling up, even when they were the highest tier in the priority expander configmap ladder.

The Cluster Autoscaler logs reported that the empty ASG's nodes didn't match the PodTopologySpread predicate constraints. However, the ASG chosen to scale up instead is in the same availability zone (which we specified in our pod topology spread spec) as the one reported as failing. An example log excerpt is shown below.

scale_up.go:284] Pod overprovisioned-pause-pod-general-57f645b766-2lk6x can't be scheduled on k8s-node-mesh-b-ea1-us-m5d12xl-asg-us-east-1b, predicate checking error: node(s) didn't match pod topology spread constraints; predicateName=PodTopologySpread; reasons: node(s) didn't match pod topology spread constraints; debugInfo=
…
priority.go:166] priority expander: k8s-node-mesh-b-ea1-us-c5ad12xl-asg-us-east-1b chosen as the highest available
priority.go:166] priority expander: k8s-node-mesh-b-ea1-us-c5ad16xl-asg-us-east-1b chosen as the highest available
scale_up.go:452] Best option to resize: k8s-node-mesh-b-ea1-us-c5ad12xl-asg-us-east-1b

How to reproduce it (as minimally and precisely as possible):

Add an empty ASG and node type to the cluster, while using pod topology spread constraints across availability zones on all workloads. Add the new node type to the top of the priority expander configmap ladder, and observe the logs.
Every scale-up triggers log lines saying the new node type does not fulfill the pod topology spread constraints, then falls back to a lower-priority ASG that is not empty.
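For context, the kind of constraint involved looks roughly like this. This is an illustrative manifest, not taken from the original report; the deployment name echoes the pod name in the log excerpt, and the `app` label is hypothetical:

```yaml
# Illustrative workload spreading pods across availability zones,
# matching the setup described above. Names are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioned-pause-pod-general
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pause-pod
  template:
    metadata:
      labels:
        app: pause-pod
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          # The zone label at the heart of this issue:
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: pause-pod
      containers:
        - name: pause
          image: k8s.gcr.io/pause:3.2
```

With `whenUnsatisfiable: DoNotSchedule`, a simulated node that lacks the `topology.kubernetes.io/zone` label can never satisfy the constraint, which is what the predicate error in the logs reflects.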

Anything else we need to know?:

I traced through the code and found the underlying issue.
When Cluster Autoscaler evaluates node groups to scale, it does one of two things to build the nodeInfo object used in the clusterSnapshot:

Reference

  1. If an existing node from the nodeGroup is present in the cluster, it copies that node's nodeInfo, with its existing labels, annotations, and daemonsets.
  2. If there is none (an empty ASG), it populates the nodeInfo object from the cloud provider's ASG template.

However, certain labels (including topology.kubernetes.io/zone) are applied by the kubelet after node startup. This means the nodeInfo built from an empty ASG's template never has the zone label, so it can never fulfill the pod topology spread constraint.

We have worked around this by manually adding the zone label to all of our ASG templates, but it is not at all intuitive that this is required in order to use Cluster Autoscaler with pod topology spread.
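On AWS, a workaround along these lines can be expressed as ASG tags: the autoscaler's AWS cloud provider reads node labels for empty groups from tags with the `k8s.io/cluster-autoscaler/node-template/label/` prefix. A sketch, reusing an ASG name and zone from the log excerpt above (the exact tag values for any real cluster would differ):

```yaml
# Sketch of an ASG tag telling Cluster Autoscaler which label
# nodes from this (currently empty) group will carry once they
# register. Key follows the documented node-template convention;
# the ASG name and zone are taken from the logs above.
AutoScalingGroupName: k8s-node-mesh-b-ea1-us-m5d12xl-asg-us-east-1b
Tags:
  - Key: k8s.io/cluster-autoscaler/node-template/label/topology.kubernetes.io/zone
    Value: us-east-1b
    PropagateAtLaunch: false
```

The tag only informs the autoscaler's simulation; the real label is still applied at registration as usual.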

I believe a fix for this would be to add all of the kubelet's well-known labels and taints to the nodeInfo structs when populating them from an ASG template, as they should be static and determinable for each ASG.

@evansheng evansheng added the kind/bug Categorizes issue or PR as related to a bug. label Sep 29, 2021
@evansheng evansheng changed the title Cluster Autoscaler doesn't scale up empty ASGs with nodeTopologySpread Cluster Autoscaler doesn't scale up empty ASGs with podTopologySpread Sep 29, 2021
@alfredkrohmer (Contributor)

alfredkrohmer commented Oct 14, 2021

We are experiencing the same problem.

@evansheng I didn't try your workaround yet, but the explanation seems rather confusing to me. Even without adding the zone labels as tags to the ASG, I can make cluster-autoscaler scale up ASGs that have 0 instances when my pods contain a label selector for an availability zone. This indicates that cluster-autoscaler is actually aware of the zones of ASGs that have 0 instances and should also be able to use this information to correctly scale according to topology spread constraints. (Obviously it's not, that's the confusing part for me 🙂)
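For comparison, the selector case described as working looks roughly like this (illustrative pod spec fragment, zone value hypothetical):

```yaml
# Illustrative fragment: a plain node selector on the zone label
# reportedly does trigger scale-up of an empty ASG, unlike the
# topology spread constraint discussed above.
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1b
```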

@evansheng (Contributor, Author)

I haven't looked into this too deeply, but I'm not sure a node label selector works in exactly the same way as a pod topology spread constraint.

When we were investigating this, we saw the pod topology spread failures come up when execution called out to the scheduler's predicate-checking code.

Are you using AWS? And are you selecting on the well-known labels and taints from the kubelet?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 17, 2022
@k8s-triage-robot

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 17, 2022
@jfoy
jfoy commented Feb 24, 2022

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Feb 24, 2022
@jutley
jutley commented Feb 24, 2022

We did some investigating and believe this is fixed in v1.22+. Prior to that version, only the deprecated topology labels (such as failure-domain.beta.kubernetes.io/zone) were supported; #4053 adds support for the new labels. Anyone affected by this issue can either upgrade to v1.22 (which should involve upgrading the Kubernetes cluster to v1.22 as well) or use the deprecated labels until an upgrade is possible.
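Until such an upgrade, the interim option mentioned here would be to spread on the deprecated key instead (illustrative fragment; the selector label is hypothetical):

```yaml
# Interim workaround for pre-v1.22 Cluster Autoscaler: spread on the
# deprecated zone label, which older versions do populate in the
# nodeInfo built from ASG templates.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: failure-domain.beta.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: my-app   # hypothetical selector
```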

@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 25, 2022
@k8s-triage-robot

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 24, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue.

In response to the /close command in the triage bot's comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

7 participants