
Kubernetes provider does not re-create resources on new GKE cluster. #688

Closed
jaceq opened this issue Nov 14, 2019 · 6 comments
Labels
acknowledged (Issue has undergone initial review and is in our work queue.) · enhancement · needs investigation

Comments

jaceq commented Nov 14, 2019

Hi there,

Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.

Terraform Version

0.12.12

Affected Resource(s)

kubernetes_service (and all other Kubernetes resources)

Expected Behavior

This is a specific issue that occurs when combining google provider changes with the kubernetes provider.
I have a state in which I use:
-> the google provider to build a GKE cluster
-> the kubernetes provider to create workloads, services, etc.
When I make a change to the cluster that forces it to be rebuilt, the kubernetes provider should notice that and re-create its resources.
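Roughly, the layout looks like this (a minimal sketch; the names, variables, and service definition are illustrative, not my actual config):

```hcl
provider "google" {
  project = var.project
  region  = var.region
}

# The GKE cluster itself.
resource "google_container_cluster" "primary" {
  name               = "example-gke"
  location           = var.region
  initial_node_count = 1
}

data "google_client_config" "default" {}

# The kubernetes provider is configured from the cluster's outputs,
# so it implicitly depends on the cluster resource.
provider "kubernetes" {
  host                   = "https://${google_container_cluster.primary.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)
}

# A workload that lives on the cluster.
resource "kubernetes_service" "example" {
  metadata {
    name = "example"
  }
  spec {
    selector = { app = "example" }
    port {
      port        = 80
      target_port = 8080
    }
  }
}
```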

Actual Behavior

In the situation above, when I make a change to the cluster itself, the cluster gets rebuilt, but the kubernetes provider (in the same run) does not re-create the configured resources.

Steps to Reproduce

  • Create a simple state that builds a GKE cluster and creates some resources on Kubernetes (e.g. a service)
  • Make a change to the cluster that forces a new cluster resource (for example, as sketched below)
  • The cluster will get replaced but will be missing the Kubernetes resources.
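For example (the resource address is illustrative):

```sh
# Initial build: cluster plus Kubernetes resources.
terraform apply

# Force cluster replacement, e.g. by tainting it (changing an
# immutable attribute such as the cluster name has the same effect).
terraform taint google_container_cluster.primary

# The cluster is replaced, but kubernetes_service is NOT re-created.
terraform apply
```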

Important Factoids

I use GKE on Google Cloud, with the google and kubernetes providers in the same state.

WORKAROUND

The workaround is to do another apply of the same state once the cluster is replaced... this, however, is annoying and breaks pipelines.
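In a pipeline this boils down to something like (a sketch):

```sh
# First apply replaces the cluster; the Kubernetes resources are lost with it.
terraform apply -auto-approve

# A second apply of the same state re-creates the missing Kubernetes resources.
terraform apply -auto-approve
```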

twendt commented Dec 13, 2019

I am having the same issue with Azure AKS and the kubernetes provider. In the graph I do see that the kubernetes provider depends on the AKS cluster and that the kubernetes namespace depends on the kubernetes provider.
I would expect terraform to first destroy the namespace and the AKS cluster and then rebuild them, but that is not the case. Instead it only replaces the AKS cluster and completely ignores the dependencies.

I also tried to explicitly use depends_on to force the dependency from the namespace to the AKS cluster, but that did not help.
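Roughly like this (names are illustrative):

```hcl
resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }

  # Explicit dependency on the AKS cluster; this did not change the behaviour.
  depends_on = [azurerm_kubernetes_cluster.example]
}
```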

I am not sure though if this is an issue with the kubernetes provider or if it is a general issue in terraform itself.

I am pretty sure that this used to work in the past with terraform 0.11, but I have not tried to reproduce it since my config is quite large.

I am currently using terraform 0.12.18


jaceq commented Dec 13, 2019

Hi @twendt, in my case that did not work with TF 0.11 either.
I also agree that it's not 100% certain this is something the kubernetes provider can do, but it would be nice to at least get a word from you guys about that (that would point us in the right direction if this is a dead end)...
btw. please vote on this issue, maybe it will get a bit more attention :)

@jrhouston jrhouston added acknowledged Issue has undergone initial review and is in our work queue. needs investigation labels May 20, 2020

dak1n1 commented Feb 10, 2021

We now have a guide that shows how to re-create the Kubernetes resources on the GKE cluster. This is more of a work-around, since this behavior is not yet supported in upstream Terraform. https://github.com/hashicorp/terraform-provider-kubernetes/tree/master/_examples/gke#replacing-the-gke-cluster-and-re-creating-the-kubernetes--helm-resources

Alternatively, use a separate terraform apply for the GKE cluster and the Kubernetes resources.
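For example, with targeted applies (a sketch; the resource address is illustrative, and splitting the cluster and the Kubernetes resources into two separate states works as well):

```sh
# Apply (and, when needed, replace) the cluster on its own first...
terraform apply -target=google_container_cluster.primary

# ...then a full apply refreshes the Kubernetes resources against the
# new cluster and re-creates the ones that are missing.
terraform apply
```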


jaceq commented Feb 11, 2021

Thx for the info @dak1n1 - to be honest I'd rather run apply twice than fiddle with the state, given I have 10s of resources on GKE.
Also, given the ticket you quoted has been open since 2015... ;) my hopes aren't all that high for a 'proper' solution.


dak1n1 commented Mar 10, 2021

Yeah, it's unfortunately a pretty complex problem upstream which is unlikely to be resolved soon. Good call using two applies - that's the most reliable approach. I'll go ahead and close this issue since we won't be able to fix it on the provider side.

@dak1n1 dak1n1 closed this as completed Mar 10, 2021

ghost commented Apr 9, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Apr 9, 2021