Destroy fails with Error: Unauthorized when removing kubernetes resources and access token is used. #27741
Comments
Hi @jaceq Since the error mentioned here is coming from the provider, and terraform can only safely remove resources from the state when the provider reports them as being removed, there's not much that can be done from within terraform itself. I believe the ability to remove all state related to a particular provider would be covered by the request in #27728.

Attempting to create and destroy resources when a provider itself depends on those resources is not recommended and can be quite difficult to achieve with the design of terraform. There are probably more users in the community forum familiar with these multi-layered setups for EKS, which may be a better source of information. We use GitHub issues for tracking bugs and enhancements, rather than for questions.

With that out of the way, it's not clear exactly what is failing in this case, other than the provider returning an error. If this is a case of the provider configuration being stale and needing a new token, does running a refresh beforehand resolve it?

As mentioned above, destroying the infrastructure that the provider itself depends on is often likely to fail, and in these cases the usual recommendation is to have multiple configurations: one to set up the base infrastructure, and one to deploy the additional layer upon that infrastructure.
Hi @jbardin Thanks for the quick response.
Just to be clearer (I get why you got the idea this is provider-based): currently it seems that if a data-source-based token is used for authorization (with any provider), any deletion of resources belonging to such a provider will fail (if done after the TTL of the previous token).
This commit is scheduled to go into Terraform 0.15 according to the changelog. I believe this will resolve the issue with authentication during destroys (in most cases), since it will refresh the data source containing the Kubernetes credentials prior to the destroy. However, we will still hit this case during long-running applies/destroys, since I believe the data source is only refreshed once during an apply/destroy.

An example of this failing is when you're using EKS with a long-running apply or destroy. An EKS token is only valid for 15 minutes, so if the apply or destroy runs for longer than that, we'll still hit this issue until progressive apply is solved.

For these reasons, it is easiest to keep the Kubernetes resources in a separate state from the underlying cluster, and use two applies. However, if you really need a single-apply configuration, we have some examples in the Kubernetes provider repo that demonstrate working configurations for AKS, EKS, and GKE. If you have the option of using an exec block like the one sketched below, it can ensure your token is always up-to-date, but this only works if you're able to install the binary on the system running Terraform:
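A minimal sketch of such an exec block for EKS (the `data.aws_eks_cluster` references and `var.cluster_name` are assumptions for illustration, and the exact `client.authentication.k8s.io` API version depends on your AWS CLI version):

```hcl
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  # The exec plugin fetches a fresh token every time the provider needs to
  # authenticate, so it never goes stale during long applies/destroys.
  # Requires the aws CLI to be installed on the machine running Terraform.
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }
}
```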
I still need to add the equivalent of this to the GKE and AKS examples though. There is more work to be done there.
Wow @dak1n1, indeed it seems the commit you mentioned will solve this issue.
Thanks for the additional info @dak1n1! That is the PR I was about to mention. I'm going to close this as the initial issue reported is a duplicate of #27172. @dak1n1 did a great job of summing up the other considerations, and we have open proposals already for improving the workflow in general. Since any major change to the workflow is unlikely in the near term due to the large architectural changes required, we still suggest using separate configurations as shown in the linked documentation. Thanks!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Terraform Version
But this seems to affect other versions as well: 0.14.x
Terraform Configuration Files
Crash Output
Expected Behavior
A destroy operation should succeed.
Actual Behavior
During the destroy operation, an `Error: Unauthorized` error shows up; after that, the resources from the kubernetes provider remain in state (and aren't deleted).
Things get more complicated when the same state also contains the GKE / EKS cluster itself, as ours does: we end up in a situation where the cluster gets deleted but the kubernetes resources remain in state. Given that at that stage the cluster isn't there anymore, the kubernetes provider fails and this renders the state unusable.
Steps to Reproduce
Configure a state with a GKE / EKS cluster and a couple of kubernetes resources, and build it.
Use an 'access token' to configure the kubernetes provider.
Wait for at least one hour!
Try to destroy.
Additional Context
Within the discussion here: terraform-aws-modules/terraform-aws-eks#1162, someone figured this out.
Long story short, it seems that as of Terraform 0.14 there is NO refresh operation done before destroy.
This leads to a situation where the token data source isn't refreshed, and hence no new token for the kubernetes provider is fetched. (Given that the token seems to be valid for 1 hour, an apply and a destroy will succeed if they are done within that timeframe.)
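For illustration, a data-source-based token configuration of the kind described might look roughly like this for EKS (a sketch only; the data source labels and `var.cluster_name` are assumptions):

```hcl
data "aws_eks_cluster" "cluster" {
  name = var.cluster_name
}

# This token is read once when the data source is refreshed. If a destroy
# runs later than the token's TTL without a refresh, the kubernetes provider
# authenticates with the stale value and returns "Unauthorized".
data "aws_eks_cluster_auth" "cluster" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}
```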
A workaround is to run a refresh manually before the destroy operation.
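In practice that boils down to something like the following (assuming Terraform 0.14, where the standalone refresh command is still available):

```sh
# Re-read data sources (including the token) so the kubernetes provider gets
# fresh credentials, then run the destroy within the token's validity window.
terraform refresh
terraform destroy
```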
References
terraform-aws-modules/terraform-aws-eks#1162