Token not being set in provider when trying to upgrade the cluster #1095
Comments
@gappan are you using the default token that's coming from the EKS resource? It may be expiring before Terraform makes the request.
@aareet Sorry for the late response. It is not about the token expiring: if you look at the logs, the token is not getting set in the first place when the API call is made. Also, when I am not trying to upgrade the cluster, it works as expected.
I'm curious about the cluster upgrade: is this replacing the underlying EKS cluster, or replacing the cluster's authentication credentials (host, certs, token)? If so, the Kubernetes provider will be initialized before these credentials exist. I think the initialization order can even cause errors like this, where the token is omitted completely from the API call (though I'd have to test that to know for sure). There is a known limitation with using a single apply to configure a cluster with Kubernetes resources. I wrote this short example guide that walks you through updating or replacing an EKS cluster as needed. It also contains example code which uses the exec block to fetch a fresh token during each apply, which can help mitigate issues with the data source token expiring. Here's the exec block example:
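(The snippet itself did not survive the copy into this extract. Below is a minimal reconstruction of the exec pattern from the Kubernetes provider documentation; the data source name `cluster` and the variable `var.cluster_name` are illustrative, not taken from the reporter's configuration.)

```hcl
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # "aws eks get-token" fetches a fresh token on every Terraform run,
    # so the credential cannot expire between plan and apply.
    args = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }
}
```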
EDIT: I forgot to mention one other option: if you apply the EKS changes in a separate apply from the Kubernetes resources, it should work every time, and then you won't have to worry about expired credentials and work-arounds.
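(One common way to do the separate apply, not spelled out in the thread: run `terraform apply -target=aws_eks_cluster.cluster-0` — substituting whatever address your EKS resources actually live at — so the cluster change completes first, then run a plain `terraform apply` for the Kubernetes resources once the new credentials exist.)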
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Terraform Version, Provider Version and Kubernetes Version
Affected Resource(s)
Terraform Configuration Files
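(The configuration was not captured in this extract. The sketch below shows the shape of provider wiring the debug output implies; the data source name is taken from the reporter's `data.aws_eks_cluster_auth.cluster-0_cluster-0.token` reference, and everything else is assumed.)

```hcl
data "aws_eks_cluster" "cluster-0_cluster-0" {
  name = "cluster-0"
}

data "aws_eks_cluster_auth" "cluster-0_cluster-0" {
  name = "cluster-0"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster-0_cluster-0.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster-0_cluster-0.certificate_authority[0].data)
  # This token is the credential the debug output shows being omitted
  # from the Authorization header during the cluster upgrade.
  token = data.aws_eks_cluster_auth.cluster-0_cluster-0.token
}
```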
Debug Output
Plan outputs of both cases.
As you can see, the Authorization header with the bearer token is not set in the request when the auth token comes from data.aws_eks_cluster_auth.cluster-0_cluster-0.token, whereas the header is present when I ran the plan with a hardcoded placeholder token.
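(Not stated in the issue, but request-level logs like these are typically captured by setting `TF_LOG=DEBUG` in the environment before running `terraform plan`.)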
Steps to Reproduce
Expected Behavior
We hit this issue while trying to upgrade the EKS cluster: the provider fails to instantiate with the auth token, and hence the plan fails. These resources should not be affected during the plan.
Actual Behavior
The Terraform plan fails because the Kubernetes provider fails to authenticate to the cluster.
Community Note