Cluster Autoscaler for AWS failing to get availability zone for ASG #5002
Comments
I worked around this issue by opting for autoDiscovery instead of static autoscalingGroups. However, now I am getting a list of all the nodes in my cluster. So now my policy looks like this:
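As a rough sketch (not the exact policy attached here), an autodiscovery-capable policy for the cluster-autoscaler typically grants read/describe plus scaling permissions along these lines; the broad `"Resource": "*"` scoping is illustrative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeScalingActivities",
        "autoscaling:DescribeTags",
        "ec2:DescribeInstanceTypes",
        "ec2:DescribeLaunchTemplateVersions"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup"
      ],
      "Resource": "*"
    }
  ]
}
```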
My helm values:
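For illustration, autodiscovery mode in the chart is typically configured with values along these lines; the cluster name, region, and role ARN below are placeholders, not the values from this deployment:

```yaml
# Sketch only: autodiscovery-mode chart values; names and ARN are placeholders.
awsRegion: us-east-1
cloudProvider: aws
autoDiscovery:
  clusterName: my-eks-cluster
rbac:
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/cluster-autoscaler
```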
Cluster autoscaler status configmap:
(contents omitted)
Which component are you using?:
cluster-autoscaler for aws
What version of the component are you using?:
Component version:
Helm chart version 9.19.2
App/Image version 1.23.0
What k8s version are you using (kubectl version)?:
kubectl version output: (omitted)
What environment is this in?:
AWS EKS
What did you expect to happen?:
I deployed the cluster-autoscaler expecting to see it scale my nodes up and down.
What happened instead?:
No scaling happened, and I saw the following logs: (log output omitted)
How to reproduce it (as minimally and precisely as possible):
I used eksctl to create a cluster with a managed node group specifying availability zones like so:
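A minimal sketch of an eksctl config of that shape; the cluster and nodegroup names, region, zones, instance type, and sizes below are placeholders rather than the exact values used:

```yaml
# Sketch only: names, region, zones, and sizes are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-east-1
managedNodeGroups:
  - name: managed-ng-1
    instanceType: m5.large
    minSize: 1
    maxSize: 4
    desiredCapacity: 2
    availabilityZones: ["us-east-1a", "us-east-1b"]
```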
Then I installed with Helm values set like the following:
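A sketch of the static autoscalingGroups style of values referred to here; the ASG name, sizes, region, and role ARN are placeholders, not the actual ones:

```yaml
# Sketch only: ASG name, sizes, region, and role ARN are placeholders.
awsRegion: us-east-1
cloudProvider: aws
autoscalingGroups:
  - name: eks-managed-ng-1-xxxxxxxx
    minSize: 1
    maxSize: 4
rbac:
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/cluster-autoscaler
```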
I attached a policy to the role used in the role annotation. I opened up the policy for testing purposes:
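For illustration, an "opened up" test policy of this kind generally looks something like the following; the wildcard actions are an assumption made for testing, not necessarily the exact statement attached:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:*",
        "ec2:Describe*"
      ],
      "Resource": "*"
    }
  ]
}
```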
Anything else we need to know?:
The Launch Template for the ASG does not have an availability zone set, but I am not sure whether this is because the node group has subnets set, which constrain it to availability zones. If this is the case, I would imagine this should be a supported configuration.
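One way to compare what the ASG and the launch template each report for availability zones (the ASG name and launch template ID below are placeholders):

```sh
# Placeholders: substitute the real ASG name and launch template ID.
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names eks-managed-ng-1-xxxxxxxx \
  --query 'AutoScalingGroups[0].AvailabilityZones'

aws ec2 describe-launch-template-versions \
  --launch-template-id lt-0123456789abcdef0 \
  --versions '$Latest' \
  --query 'LaunchTemplateVersions[0].LaunchTemplateData.Placement.AvailabilityZone'
```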
Thanks in advance.