1.19.2 Does not start without env AWS_REGION #4389
Comments
Hey, thanks for raising this. With a bit of testing it appears this only affects the […]. I'll see if we can get a PR in to update the SDK enough to resolve this for the 1.19 release branch. /area provider/aws
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Hi, is there a tagged 1.19 release with this fix?
Bumping as well, hitting this crash while upgrading a cluster to 1.19.
Getting a similar issue as well, but I am getting a 401, not a 404.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closing this issue. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Which component are you using?:
eu.gcr.io/k8s-artifacts-prod/autoscaling/cluster-autoscaler:v1.19.2
What version of the component are you using?:
v1.19.2
What k8s version are you using (kubectl version)?:
(kubectl version output omitted)
What environment is this in?:
AWS
What did you expect to happen?:
Upgrade from 1.19.1 to 1.19.2 to work with existing configuration.
What happened instead?:
Cluster-autoscaler 1.19.2 crashes on startup.
How to reproduce it (as minimally and precisely as possible):
Upgrade from 1.19.1 to 1.19.2 and do not have the AWS_REGION environment variable set.
Anything else we need to know?:
I think this is due to the change introduced in #4127, which was cherry-picked to 1.19.2 (and other releases) in PR #4265.
It seems that there is a bug in the version of github.com/aws/aws-sdk-go pinned in 1.19.2 (v1.28.2). I tested GetCurrentAwsRegion() with that version and it indeed fails with the same 404, whereas a newer version such as github.com/aws/aws-sdk-go v1.40.57 works as it should. This probably affects other releases of cluster-autoscaler as well.
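For anyone who wants to check the SDK behaviour in isolation, here is a minimal Go sketch. It is not the autoscaler's actual GetCurrentAwsRegion() code; it only assumes the same general flow (use AWS_REGION if set, otherwise ask the EC2 instance metadata service via the SDK's ec2metadata package) and that it runs on an EC2 instance with access to the metadata endpoint.

```go
package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws/ec2metadata"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// If AWS_REGION is set, no metadata lookup is needed, which is why
	// exporting it works around the crash.
	if r := os.Getenv("AWS_REGION"); r != "" {
		fmt.Println("region from env:", r)
		return
	}

	sess, err := session.NewSession()
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to create session:", err)
		os.Exit(1)
	}

	// Fall back to the EC2 instance metadata service. In my testing this
	// lookup fails with the pinned aws-sdk-go v1.28.2 (the 404 above) and
	// succeeds with a newer SDK such as v1.40.57.
	region, err := ec2metadata.New(sess).Region()
	if err != nil {
		fmt.Fprintln(os.Stderr, "metadata region lookup failed:", err)
		os.Exit(1)
	}
	fmt.Println("region from instance metadata:", region)
}
```

Running this on an affected node with go.mod pinned to v1.28.2 versus v1.40.57 should show the difference in behaviour.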