Expander priority uses AWS auto-scaling groups instead of node group names #3871
Comments
I am also experiencing this issue. Happy to provide any additional diagnostic info as needed.
I've also got this issue. Since the cluster autoscaler has no knowledge of managed node groups, and the priority expander works on the node group name for all cloud providers (which on AWS means the auto-scaling group name), I don't think this is likely to be implemented in the autoscaler itself. The conclusion I've come to is that I need to write a small application that generates the priority expander ConfigMap by looking up the ASG details from the managed node group API. When I get something working I'll post it here.
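For illustration, a minimal sketch of that approach in Python (the cluster name, the node-group-to-priority mapping, and the regex style are assumptions; the boto3 `describe_nodegroup` call does expose the backing ASG names under `resources.autoScalingGroups`):

```python
# Sketch: build the priority-expander "priorities" data from EKS managed node groups.
# Requires boto3 and PyYAML; assumes AWS credentials/region are already configured.
import boto3
import yaml

CLUSTER = "my-cluster"                       # assumption: your EKS cluster name
NODEGROUP_PRIORITIES = {                     # assumption: your own name-prefix -> priority map
    "app-standard-spot-a-qa": 50,
    "app-standard-ondemand-a-qa": 10,
}

eks = boto3.client("eks")
priorities = {}                              # priority -> list of ASG-name regexes

for ng in eks.list_nodegroups(clusterName=CLUSTER)["nodegroups"]:
    prio = next((p for prefix, p in NODEGROUP_PRIORITIES.items()
                 if ng.startswith(prefix)), None)
    if prio is None:
        continue
    resources = eks.describe_nodegroup(
        clusterName=CLUSTER, nodegroupName=ng)["nodegroup"]["resources"]
    for asg in resources.get("autoScalingGroups", []):
        # The priority expander matches on these ASG names, not the node group name.
        priorities.setdefault(prio, []).append("^" + asg["name"] + "$")

# This YAML is what would go under the "priorities" key of the
# cluster-autoscaler-priority-expander ConfigMap in the autoscaler's namespace.
print(yaml.safe_dump(priorities, default_flow_style=False))
```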
Just ran into this as well. I opened an issue on the AWS containers roadmap to allow setting the name, or at least the name prefix, of the underlying ASGs for managed node groups.
Built a tool to work around this issue: https://github.com/cablespaghetti/priority-expander-eks-managed-nodegroup-configurer. It's very much an MVP at the moment and could use some more testing/error handling. However, it works! Edit: I'm having an intermittent issue. See #3956.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
For anyone else landing here, this was fixed in EKS 1.21.
Which component are you using?:
cluster-autoscaler (priority expander)
chart version: 9.4.0
image tag: v1.18.1
Component version:
kubectl version output: (not shown)
What environment is this in?:
AWS EKS v1.17 (managed node groups)
What did you expect to happen?:
CA updates the auto-scaling group according to the configured priorities.
What happened instead?:
CA does not recognize the node groups by their names; instead it matches against the AWS auto-scaling group names, so the priority regexes never match and it falls back to picking a random node group.
How to reproduce it (as minimally and precisely as possible):
Deploy the latest CA helm chart in an AWS environment with EKS clusters and set expander to priority
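For reference, with the cluster-autoscaler Helm chart the expander is usually selected via `extraArgs`; a minimal values sketch (only the `--expander` flag is taken from the autoscaler's documented options, everything else here is assumed):

```yaml
# values.yaml sketch for the cluster-autoscaler Helm chart:
# entries under extraArgs become --key=value flags on the autoscaler binary,
# so this passes --expander=priority.
extraArgs:
  expander: priority
```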
We use AWS EKS managed node groups, but CA seems to match against the AWS auto-scaling group name (example: `eks-40bbb26b-8679-eb64-d33a-4ba475413529`) instead of the node group name (example: `app-standard-spot-a-qa-5bkjm`).

Expander config:

Logs:

`eks-40bbb26b-8679-eb64-d33a-4ba475413529` is actually an auto-scaling group and NOT a node group. Node groups are named something like `app-standard-spot-a-qa-5bkjmt`.
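For context, a priority-expander ConfigMap only works in this setup if its regexes match the generated ASG names rather than the managed node group names; a sketch of such a ConfigMap (the priorities and patterns here are placeholders, not the reporter's actual config):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-priority-expander
  namespace: kube-system
data:
  priorities: |-
    # Matching on the managed node group name (app-standard-spot-a-qa-*) does NOT
    # take effect, because CA only sees the underlying ASG names.
    10:
      - .*                                            # fallback for everything else
    50:
      - ^eks-40bbb26b-8679-eb64-d33a-4ba475413529$    # the ASG backing the spot node group
```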