When kubernetes_manifest is used, kubernetes provider config is invalid #1453
Comments
@cuttingedge1109 Can you please share a bit more detail about how the variables used in the provider configuration are set? Also, please include the variable declarations. The end goal here is to determine how the values for those variables are produced.
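For context, a minimal sketch of the kind of configuration being asked about: a kubernetes provider block fed from input variables. The variable names here are illustrative, not taken from the OP's configuration.

```hcl
# Hypothetical shape of the provider configuration under discussion.
# The question is how the values for these variables are produced
# (static values vs. outputs of other resources).
variable "cluster_endpoint" {
  type = string
}

variable "cluster_ca_certificate" {
  type = string
}

variable "cluster_token" {
  type      = string
  sensitive = true
}

provider "kubernetes" {
  host                   = var.cluster_endpoint
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
  token                  = var.cluster_token
}
```

If those variables are derived from resources that don't exist yet at plan time, the kubernetes_manifest resource cannot plan, because it needs a working provider configuration during planning.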
I stumbled upon this issue while looking for an issue similar to hashicorp/terraform-provider-kubernetes-alpha#217 in this repository. I'm encountering the same error when using the output of another resource in the provider configuration. I can't be sure that's exactly the same usage as the OP, but it seemed similar to this issue. EDIT: Maybe my issue is actually closer to #1391.
Did you guys manage to get it working?
same here … the workaround we are currently working with is to use the kubectl_manifest resource from the gavinbunney/kubectl provider. It needs a bit of a rewrite, though; using the yamlencode function, it would look something like this:
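The original comment's code block was lost in extraction; below is a sketch of what the described workaround typically looks like. The PodMonitor spec and resource names are illustrative, not the commenter's actual manifest.

```hcl
# Workaround sketch: kubectl_manifest from the gavinbunney/kubectl provider,
# with the manifest rendered via yamlencode().
terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14"
    }
  }
}

resource "kubectl_manifest" "pod_monitor" {
  yaml_body = yamlencode({
    apiVersion = "monitoring.coreos.com/v1"
    kind       = "PodMonitor"
    metadata = {
      name      = "example"      # illustrative
      namespace = "monitoring"   # illustrative
    }
    spec = {
      selector = {
        matchLabels = { app = "example" }
      }
      podMetricsEndpoints = [{ port = "metrics" }]
    }
  })
}
```

Unlike kubernetes_manifest, kubectl_manifest does not need to reach the cluster at plan time to compute its plan, which is why it sidesteps the error described in this issue.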
Same problem here
same issue
same issue, but I have a running EKS cluster that it's failing against. Is this a known bug?
same issue as well...
thanks @Dniwdeus, it worked for me!
@alexsomesan why was this closed (as completed)? Version 2.13.1 still has the issue, and it doesn't look like the changes on the main branch since then contain a fix for it either: v2.13.1...48d1f35.
Hey guys, we stumbled into the same kind of issue with Terraform and Kubernetes for the 4th time. The realisation we came to is that Terraform is not suitable for systems that rely on eventual consistency. So the approach we are taking is to use a tool that can be configured with Terraform, and then that tool deals with Kubernetes' eventual consistency. Even if you manage to cobble together a working deployment using sleeps and other tricks, as soon as it comes to decommissioning the resources you are in real trouble.

So what we have done is leverage the approach of https://github.com/aws-ia/terraform-aws-eks-blueprints. This is basically Terraform configuring ArgoCD, and then ArgoCD configuring Kubernetes resources. In our case we wrap the resources up in a Helm chart and deploy them using the ArgoCD Application resource. We then leverage an app-of-apps Helm chart to orchestrate all the ArgoCD Application resources and call that chart from Terraform.
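The pattern described above can be sketched in Terraform roughly as follows. Chart paths, names, and versions are illustrative assumptions, not taken from the commenter's setup.

```hcl
# Sketch: Terraform installs ArgoCD via Helm, then deploys a single
# "app-of-apps" chart; ArgoCD reconciles all other Kubernetes resources,
# keeping Terraform out of the eventually-consistent parts.
resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argocd"
  create_namespace = true
}

resource "helm_release" "app_of_apps" {
  name      = "app-of-apps"
  chart     = "./charts/app-of-apps" # local chart containing ArgoCD Application resources
  namespace = "argocd"

  depends_on = [helm_release.argocd]
}
```

The design point is that Terraform only manages two Helm releases; everything downstream (including teardown ordering) is handled by ArgoCD's reconciliation loop rather than by Terraform's plan/apply cycle.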
hi all - We closed this issue as this is something that has become a catch-all for a multitude of different related issues. We ask that if you run into further related problems, please open a new issue outlining the specifics and we will review them individually. |
For this case I'd say all the related issues mentioned stem from the same root cause. That said, this issue was a duplicate of #1391 anyway.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. |
Terraform Version, Provider Version and Kubernetes Version
Affected Resource(s)
All resources created by kubernetes_manifest
Terraform Configuration Files
Debug Output
Panic Output
Steps to Reproduce
terraform plan
Expected Behavior
Plan the PodMonitor without error
Actual Behavior
Important Factoids
If I remove the kubernetes_manifest resource, it works.
References
Community Note