Error: Delete "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp 127.0.0.1:80: connect: connection refused #978
Comments
This is an error that sometimes comes up but is hard to reproduce. The kubernetes provider, as configured, does not know anything about the kubeconfig file generated by the module; there is no relationship between them. You don't even need to write the kubeconfig file. The provider is supposed to get all of its configuration from the data sources that you are passing in. I'm guessing you are doing a straight `terraform destroy`. The easiest solution is to drop the kubeconfig-based configuration entirely and let the provider read from the data sources.
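For reference, a minimal sketch of that data-source-driven setup (assuming the module exposes the cluster name as `module.eks.cluster_id`; the `load_config_file` argument only exists in kubernetes provider versions before 2.0):

```hcl
# Look up connection details for the cluster created by the module.
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

# Short-lived bearer token for authenticating to the cluster.
data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false # provider < 2.0 only; removed in 2.x
}
```

When `host` is unset or becomes unknown (for example, once the cluster data source is gone mid-destroy), the provider falls back to its localhost default, which is exactly where the `dial tcp 127.0.0.1:80: connect: connection refused` in the title comes from.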
This happens every time I try to delete the EKS cluster. Manually removing the config map from the state file (`terraform state rm`) resolves it, but that makes CI/CD automation painful.
When you get this kind of error, it is generally because your kubernetes provider is misconfigured (due to a bug or human error). Can you please try with the latest version of the kubernetes provider, and also remove `load_config_file` from your provider configuration?
Thanks, this works.
Closing this since you resolved your issue. Feel free to reopen it if needed.
@barryib We are running into pretty much the same exact issue: the kubeconfig is being deleted before the `aws-auth` config map. Can you all provide some guidance on how to actually mitigate this? That is, per this comment, what could we have misconfigured in the `kubernetes` provider?
We are configuring our `kubernetes` provider from the cluster data sources, as recommended. The suggestion to run `terraform state rm` before destroying works, but it is a workaround, not a fix.
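For anyone comparing setups, an exec-based configuration is another commonly used pattern (a minimal sketch, assuming the AWS CLI is on the PATH and the cluster name is exposed as `module.eks.cluster_id`). It avoids storing a token in state by fetching one on every provider call:

```hcl
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  # Fetch a fresh token via the AWS CLI instead of reading a kubeconfig.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
  }
}
```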
This bug hit me on the latest Terraform, version 0.14.
Yes, it has started happening again after upgrading Terraform to 0.14 and the module to v14.0.
Yes, I can confirm this is still an issue. Please re-open.
YES!!!
Same issue here. Is there any option to fix it? Removing the resource from state before destroying is hacky.
When destroying an EKS cluster, why does Terraform delete the `aws-auth` config map of a cluster that is going to be destroyed by AWS API calls milliseconds later?
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
As soon as I try to delete the EKS cluster, it fails when deleting the `aws-auth` config map:
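The error output did not survive the page extraction, but it is the one from the issue title:

```
Error: Delete "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp 127.0.0.1:80: connect: connection refused
```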
Provider Config:
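The actual block was lost in extraction; given the description below (`load_config_file = false`, token-based auth), it presumably looked roughly like this sketch:

```hcl
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}
```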
Although I have set `load_config_file = false` in the kubernetes provider, I thought this could be related to the kubeconfig (the one created with `write_kubeconfig = true`) being deleted before this config map, so I added this:
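The snippet itself was also lost in extraction. A purely hypothetical stand-in (not necessarily what the author used) for this kind of attempt is to pin the ordering in the dependency graph so the credentials outlive the config map:

```hcl
data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id

  # Hypothetical: force this token lookup to sit between the module and
  # the aws-auth config map in the dependency graph.
  depends_on = [module.eks]
}
```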
But even now I get the same error. I can confirm my kubeconfig file exists, yet after the node groups are deleted I get this error. Any help would be highly appreciated. Many thanks in advance.