Terraform destroy of helm_release resources. #593
@Ragib95 This is expected behaviour due to a limitation in Terraform that causes it to not recognise the implicit dependency between the Helm release resource and the EKS cluster resource. Terraform tries to parallelise destroy operations when no dependency is known between resources, which can lead to the EKS cluster being destroyed before the Helm release itself. I'd suggest setting an explicit dependency on the EKS cluster resource in the `helm_release` configuration.
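A minimal sketch of such an explicit dependency (the resource names `aws_eks_cluster.this` and `helm_release.ingress` are hypothetical placeholders, not from this thread):

```hcl
resource "helm_release" "ingress" {
  name       = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"

  # Explicit dependency: Terraform will destroy this release
  # before it tears down the EKS cluster it runs on.
  depends_on = [aws_eks_cluster.this]
}
```

With `depends_on` set, Terraform's destroy graph orders the release before the cluster instead of destroying them in parallel.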
We currently don't have a way to know what resources are created. We will have to wait for helm/helm#2378 to be implemented.
I am unable to
Is this another manifestation of this issue, or should I open a separate one?
Issue closed, but not fixed.
I got the same error when I tried to destroy resources with Terraform. The Helm release got deleted, but the pods were stuck in "Terminating" status, and I found that all of the Helm chart's resources had this issue. My Terraform structure: Any solutions or ideas?
It seems the referenced Helm issue has been fixed by helm/helm#9702.
@alexsomesan As mentioned earlier in this thread, helm/helm#9702 seems to solve this issue from within Helm. I think it can then be solved in the Terraform Helm provider by adding a new option. I don't exactly know how to do it, but if you could point me in the right direction I could give it a try.
Any update on the Terraform side for helm/helm#9702? |
I believe this was resolved by #786. After upgrading the Helm provider to 2.4, the `wait` attribute of the `helm_release` resource also applies on destroy.
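A hedged sketch of that configuration, assuming the behaviour described above (`wait` applying to uninstall in provider 2.4+); the release name and chart path are hypothetical:

```hcl
terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.0"
    }
  }
}

resource "helm_release" "example" {
  name  = "example"
  chart = "./charts/example"

  # With provider 2.4+, wait = true (the default) is also honoured on
  # destroy, so Terraform blocks until the release's resources are gone.
  wait = true
}
```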
I don't think so. I used
Hi, #786 is an impressive MR (to say the least)! I'm not brave enough to go dig into it. Do we need a test scenario for the wait on destroy?
Our current workaround, which isn't great, but... yeah...
Putting in a fixed sleep timer does the job. It waits longer than necessary, but it works for now. :/
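The fixed-sleep workaround can be expressed with the `time_sleep` resource from the hashicorp/time provider. A sketch, assuming hypothetical names (`aws_eks_cluster.this`, `helm_release.app`) and a duration you would tune to your chart:

```hcl
resource "time_sleep" "drain" {
  # Hypothetical duration; tune to how long your chart's resources
  # (pods, services, ELBs) take to terminate.
  destroy_duration = "90s"

  depends_on = [aws_eks_cluster.this]
}

resource "helm_release" "app" {
  name  = "app"
  chart = "./charts/app"

  # On destroy, Terraform removes the release first, then waits
  # destroy_duration before it may touch the cluster resources.
  depends_on = [time_sleep.drain]
}
```

Because destroy order is the reverse of the dependency order, the sleep runs after the release is deleted and before the cluster is torn down.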
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
Terraform Version and Provider Version
Provider Version
Affected Resource(s)
Terraform Configuration Files
Debug Output
Panic Output
Expected Behavior
helm_release destruction should wait for all of the release's resources (pods, services, and ingress) to reach a destroyed state before reporting "Destruction complete".
Actual Behavior
It goes into the "Destruction complete" state within 7-8 seconds, before pods and services are fully destroyed. This causes EKS node destruction to start and leaves the ELB attached to the service.
Reason: before Helm has released the pods and services, Terraform starts deleting the node and cluster, leaving the pods in a Terminating state.
Steps to Reproduce
terraform destroy
Important Factoids
References
Community Note