Priority Expander sometimes picks all groups as equal #3956
Comments
For anyone reading this: it seems to only be an issue if an ASG/Nodegroup name matches two priorities. In my configuration, getting rid of the `|` alternation, so that each name matches only one priority, fixed it.
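As a hypothetical illustration of that kind of overlap (the names, patterns, and priorities below are assumptions, not taken from this issue), a `|` alternation can make a single ASG name match two priority buckets:

```yaml
# Hypothetical overlapping priority-expander config: the alternation in the
# 30 entry also matches on-demand group names, so an ASG named
# "eks-on-demand-1" falls under both the 40 and 30 buckets.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-priority-expander
  namespace: kube-system
data:
  priorities: |-
    40:
      - .*on-demand.*
    30:
      - .*spot.*|.*on-demand.*
```

Dropping the alternation and listing each pattern under exactly one priority removes the overlap.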
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Which component are you using?:
cluster-autoscaler with the Helm Chart
What version of the component are you using?:
Component version: v1.19.1
What k8s version are you using (`kubectl version`)?:
v1.19.6-eks-49a6c0
What environment is this in?:
AWS EKS using Managed Node Groups
What did you expect to happen?:
With this configuration for priority expander:
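For reference, a priority-expander ConfigMap of the shape described might look like the following sketch; the name patterns are illustrative assumptions, while the 40 = ARM / 30 = non-ARM split follows the description below:

```yaml
# Sketch only: the patterns are assumed, not the reporter's actual config.
apiVersion: v1
kind: ConfigMap
metadata:
  # the priority expander reads its config from this specific ConfigMap name
  name: cluster-autoscaler-priority-expander
  namespace: kube-system
data:
  priorities: |-
    40:
      - .*arm.*         # ARM groups: tainted and fixed-size
    30:
      - .*spot.*        # the six non-ARM groups
      - .*on-demand.*
```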
And these flags:
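The flag that actually selects this expander is `--expander=priority`; with the Helm chart it would typically be passed through `extraArgs`, roughly as in this sketch (everything beyond the expander flag itself is an assumption):

```yaml
# Helm values sketch (assumed): the chart renders each extraArgs entry as a
# --key=value flag on the cluster-autoscaler container.
extraArgs:
  expander: priority
```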
I expected the groups with priority 30 to end up as the highest-priority choice, because the ones marked 40 are ARM nodes that are tainted (so not usable by most of my workloads) and also have a fixed size, as you'll see in the logs.
What happened instead?:
Sometimes, though I can't figure out what triggers it, all 6 of the non-ARM ASGs are marked as highest priority.
How to reproduce it (as minimally and precisely as possible):
It seems to be intermittent, but essentially: have 3 sets of ASGs for ARM, Spot, and On-Demand (in this case managed by managed node groups, but that shouldn't matter here) and use my configuration. Sometimes the Spot and On-Demand groups will be considered the same priority for some reason.
Anything else we need to know?:
Here are the logs of the problem happening:
Mostly it picks the 3 I expect, but the configuration is identical between the times it picks all 6 and the times it correctly picks 3 and rules out the other 3 as lower priority.