Helm resources are not recreated if the underlying cluster is destroyed #591
Comments
For me it's the same root cause as in #593. I'd suggest setting an explicit dependency on the cluster resource in the helm_release resource, like this:
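A minimal sketch of that suggestion, assuming Terraform 0.12+ syntax and a GKE cluster resource named google_container_cluster.primary (the resource names and chart are assumptions, not taken from the original comment):

```hcl
# Sketch only: resource names and chart are illustrative assumptions.
resource "helm_release" "redis" {
  name  = "redis"
  chart = "stable/redis"

  # Explicit dependency so the release is planned after the cluster,
  # with the intent that replacing the cluster also replaces the release.
  depends_on = [google_container_cluster.primary]
}
```

As the next comment notes, this alone does not reliably fix the problem.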
I experience the same as n-oden (on Azure), and adding a depends_on for the cluster doesn't fix it.
Belatedly @sebglon using
Yep, same issue here. If the cluster is recreated or the Helm release is deleted directly on the cluster, the helm_release resource is not recreated.
Same issue here on Azure. Are there any workarounds for this? Currently I remove the helm_release from the state file manually, but I really hate doing this.
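For reference, that manual removal comes down to a single CLI call; the resource address here is illustrative:

terraform state rm helm_release.redis

This drops the stale release from state so the next apply plans it as a new resource, but it has to be repeated every time the cluster is replaced.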
@michelefa1988 so far I haven't found any particularly good workarounds. You can potentially mitigate the issue by creating a random_id string, interpolating it into the name of all of your helm releases, and setting its keepers to the cluster's id so the release names change whenever the cluster is replaced.
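A hypothetical sketch of that mitigation; the resource names, byte length, and chart are all assumptions:

```hcl
resource "random_id" "release_suffix" {
  byte_length = 4

  # Replacing the cluster changes its id, which forces a new random_id
  # and therefore new release names, so Terraform re-creates the releases.
  keepers = {
    cluster_id = google_container_cluster.primary.id
  }
}

resource "helm_release" "redis" {
  name  = "redis-${random_id.release_suffix.hex}"
  chart = "stable/redis"
}
```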
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
pretty bad, still happening 😕
Terraform Version and Provider Version
Provider Version
Affected Resource(s)
helm_release
Terraform Configuration Files
Debug Output
https://gist.github.com/n-oden/eb86c620c0c6c47866f2be2746491fe9
Expected Behavior
The Helm release should be re-created in the new cluster.
Actual Behavior
The old cluster is destroyed and the new cluster is created in the new region, but the helm_release.redis resource is unchanged: Terraform believes that it still exists, but of course it does not.
Steps to Reproduce
Create the cluster and the helm release, and then force re-creation of the cluster by changing the region variable:
terraform apply -var region=us-east1
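Roughly, a configuration along these lines would exhibit the behavior; every name, chart, and value below is an illustrative assumption rather than the reporter's actual configuration:

```hcl
variable "region" {
  default = "us-central1"
}

# Changing var.region forces the cluster to be destroyed and re-created.
resource "google_container_cluster" "primary" {
  name               = "example"
  location           = var.region
  initial_node_count = 1
}

# (helm provider configuration pointing at the cluster omitted)
resource "helm_release" "redis" {
  name  = "redis"
  chart = "stable/redis"
}
```

After the cluster is replaced, the plan still reports helm_release.redis as unchanged, which is the behavior described above.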
Important Factoids
I tested this running in Google Cloud Platform using GKE, but I suspect this is a more basic issue with the provider and the behavior would be the same against EKS etc.
References