Kubernetes provider does not re-create resources on new GKE cluster. #688
Comments
I am having the same issue with Azure AKS and the kubernetes provider. In the graph I can see that the kubernetes provider depends on the AKS cluster, and that the kubernetes namespace depends on the kubernetes provider. I also tried an explicit depends_on to force the dependency from the namespace to the AKS cluster, but that did not help. I am not sure whether this is an issue with the kubernetes provider or a general issue in Terraform itself. I am fairly sure this used to work with Terraform 0.11, but I have not tried to reproduce it since my config is quite large. I am currently using Terraform 0.12.18.
Hi @twendt, in my case that did not work with TF 0.11.
We now have a guide that shows how to re-create the Kubernetes resources on the GKE cluster. This is more of a work-around, since this behavior is not yet supported in upstream Terraform. https://github.com/hashicorp/terraform-provider-kubernetes/tree/master/_examples/gke#replacing-the-gke-cluster-and-re-creating-the-kubernetes--helm-resources Alternatively, use a separate …
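For readers who prefer not to open the linked guide, the general shape of this kind of work-around is sketched below. The commands are standard Terraform CLI; the resource addresses (`google_container_cluster.gke`, `kubernetes_namespace.example`) are hypothetical placeholders, not taken from the guide, so adjust them to your own configuration:

```shell
# Sketch of a cluster-replacement work-around (addresses are illustrative).

# 1. Replace only the cluster, scoping the plan to it:
terraform apply -target=google_container_cluster.gke

# 2. Drop the now-orphaned Kubernetes resources from state so the next
#    apply re-creates them on the new cluster (repeat per resource):
terraform state rm kubernetes_namespace.example

# 3. Run a full apply to re-create the Kubernetes resources:
terraform apply
```

The `terraform state rm` step is what forces re-creation: Terraform otherwise still believes those resources exist, because they are tracked in state even though the cluster that held them is gone.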
Thanks for the info, @dak1n1. To be honest, I'd rather run apply twice than fiddle with state, given I have tens of resources on GKE.
Yeah, it's unfortunately a pretty complex problem upstream that is unlikely to be resolved soon. Good call using two applies; that's the most reliable approach. I'll go ahead and close this issue, since we won't be able to fix it on the provider side.
Hi there,
Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.
Terraform Version
0.12.12
Affected Resource(s)
kubernetes_service
and all other kubernetes resources
Expected Behavior
This is a specific issue when combining google provider changes with the kubernetes provider.
I have a state in which I use:
-> the google provider to build a GKE cluster
-> the kubernetes provider to create workloads, services, etc.
When I make a change to the cluster that forces it to be rebuilt, the kubernetes provider should notice that and re-create its resources.
Actual Behavior
In the situation above, when I make a change to the cluster itself, the cluster gets rebuilt, but the kubernetes provider (in the same run) does not re-create the configured resources.
Steps to Reproduce
Important Factoids
I use GKE on Google Cloud, with the google and kubernetes providers in the same state.
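A minimal sketch of this setup is below. The names, project, and region are illustrative assumptions, not taken from the reporter's configuration; the provider wiring follows the common pattern of feeding the cluster's connection details into the kubernetes provider:

```hcl
# Minimal sketch (resource names and attribute values are illustrative).
provider "google" {
  project = "my-project"
  region  = "us-central1"
}

resource "google_container_cluster" "gke" {
  name               = "example"
  location           = "us-central1"
  initial_node_count = 1
}

data "google_client_config" "default" {}

# The kubernetes provider reads its connection details from the cluster
# resource, so it depends on the cluster in the graph. Even so, when the
# cluster is replaced, Terraform does not plan re-creation of the
# kubernetes_* resources in the same run.
provider "kubernetes" {
  host                   = "https://${google_container_cluster.gke.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(google_container_cluster.gke.master_auth[0].cluster_ca_certificate)
}

resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }
}
```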
WORKAROUND
The workaround is to run another apply of the same state once the cluster is replaced. This, however, is annoying and breaks pipelines.
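In a pipeline, this double apply can be sketched as two consecutive CLI invocations (standard Terraform flags; whether the first apply errors on the Kubernetes resources depends on your configuration):

```shell
# First pass: replaces the cluster; kubernetes resources are left stale in state.
terraform apply -auto-approve

# Second pass: picks up the stale kubernetes resources and re-creates them
# against the new cluster.
terraform apply -auto-approve
```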