Any good solutions for 'The "for_each" value depends on resource attributes that cannot be determined until apply'? #141
At least the docs should be changed?
Getting the same error with this use case. This worked a couple of days ago with another module; not sure why it's not working this way.

For now I removed all of …
The workaround I found involves using the … Not perfect, but it seems to do the trick.
It would really help to convert/clone these …
I have the same issue with the following code: …
It's happy to plan it until you change something, like adding or removing files from the folder... this is insanely frustrating. :-) As @reubenavery said, I have seen some providers use resources instead of data sources to work around this issue in Terraform.
Does anyone know how to unblock Terraform when you get into this state? It was working before; then I removed a few files from the manifests folder and now it's angry.
The workaround I found: the pattern

```hcl
data "kubectl_filename_list" "this" {
  pattern = "${path.module}/manifests/*.yaml"
}

resource "kubectl_manifest" "this" {
  for_each  = { for k in data.kubectl_filename_list.this.matches : k => k }
  yaml_body = templatefile(each.value, {
    foo = "bar"
  })
}
```

can be completely replaced by:

```hcl
resource "kubectl_manifest" "this" {
  for_each  = fileset(path.module, "manifests/*.yaml")
  yaml_body = templatefile("${path.module}/${each.value}", {
    foo = "bar"
  })
}
```

Sadly this does not work for `file_documents`, so you need to have every k8s resource in a separate file.
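A possible extension (my sketch, not from the thread): because `fileset()` and `file()` are pure functions evaluated at plan time, the same idea can be stretched to multi-document files by splitting on `---` and keying on file name plus document index. The `manifests/` directory name is an assumption.

```hcl
# Sketch only: extends the fileset() workaround above to
# multi-document YAML files.
locals {
  manifest_files = fileset(path.module, "manifests/*.yaml")

  # Flatten every document of every file into one map whose keys
  # ("<file>-<index>") are fully known at plan time.
  manifest_docs = merge([
    for f in local.manifest_files : {
      for idx, doc in split("---", file("${path.module}/${f}")) :
      "${f}-${idx}" => doc if trimspace(doc) != ""
    }
  ]...)
}

resource "kubectl_manifest" "split" {
  for_each  = local.manifest_docs
  yaml_body = each.value
}
```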
Have the same issue with …
This is literally the recommended method for using `kubectl_manifest`. Is there a timeframe for fixing this bug?
Here's a workaround I came up with:

```hcl
locals {
  crds_split_doc  = split("---", file("${path.module}/crds.yaml"))
  crds_valid_yaml = [for doc in local.crds_split_doc : doc if try(yamldecode(doc).metadata.name, "") != ""]
  crds_dict       = { for doc in toset(local.crds_valid_yaml) : yamldecode(doc).metadata.name => doc }
}

resource "kubectl_manifest" "crds" {
  for_each  = local.crds_dict
  yaml_body = each.value
}
```
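A note on this design (mine, not the commenter's): keying the map on `metadata.name` keeps plan addresses stable when documents are reordered, but two documents that share a name (say, a Service and a Deployment both called `foo`) will collide. A hedged variant that widens the key:

```hcl
# Sketch only: same crds_valid_yaml local as above, but the map key
# includes kind and (when present) namespace so same-named documents
# don't collide.
locals {
  crds_keyed = {
    for doc in local.crds_valid_yaml :
    join("/", compact([
      try(yamldecode(doc).kind, ""),
      try(yamldecode(doc).metadata.namespace, ""),
      yamldecode(doc).metadata.name,
    ])) => doc
  }
}
```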
As `template_file` cannot apply multiple manifests, use `fileset` instead: gavinbunney/terraform-provider-kubectl#141 (comment)
Super interested to see this fixed as well. Terraform completely stops working at random.
Here are some interesting notes on this: …
Thanks to eytanhanig, his solution worked for me.

```hcl
resource "kubectl_manifest" "k8s_kube-dashboard" {
  for_each = {
    for i in toset([
      for index, i in split("---", templatefile("${path.module}/scripts/kube-dashboard.yml.tpl", {
        kube-dashboard_nodePort = "${var.kube-dashboard_nodePort}"
      })) :
      {
        "id"  = index
        "doc" = i
      }
      #if try(yamldecode(i).metadata.name, "") != ""
    ])
    : i.id => i
  }
  yaml_body = each.value.doc
}
```
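One caveat with this shape (my note, not from the thread): using the list index as the `for_each` key means inserting or reordering documents in the template re-keys everything after the change and forces destroy/recreate. A variant that enables the commented-out filter and keys on `metadata.name` instead:

```hcl
# Sketch only: name-based keys instead of positional ones, so
# reordering documents in the template does not churn resources.
resource "kubectl_manifest" "k8s_kube-dashboard_by_name" {
  for_each = {
    for doc in split("---", templatefile("${path.module}/scripts/kube-dashboard.yml.tpl", {
      kube-dashboard_nodePort = var.kube-dashboard_nodePort
    })) :
    yamldecode(doc).metadata.name => doc
    if try(yamldecode(doc).metadata.name, "") != ""
  }
  yaml_body = each.value
}
```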
FWIW the "best" way I have found to replace this plugin is to define a local helm chart and use the `helm_release` resource. It basically boils down to defining a folder like the one sketched below, and a resource like:

```hcl
resource "helm_release" "local" {
  name      = "local-manifests"
  chart     = "${path.module}/chart"
  namespace = var.namespace

  values = [
    yamlencode({
      # pass in whatever vars you want to your templates
    })
  ]
}
```

I'm not gonna say it's ideal to do it this way, but it supports very well loading any type of k8s yaml you want to throw at it, including multi-doc yaml files, or directories full of yaml.
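The folder listing from that comment didn't survive the page scrape. As an assumption, a minimal local chart for this trick would look something like:

```
chart/
├── Chart.yaml        # just apiVersion, name, version
└── templates/
    ├── foo.yaml      # any k8s yaml; may reference {{ .Values.* }}
    └── bar.yaml      # multi-doc files with --- are fine here
```

Helm renders everything under `templates/`, which is why multi-document files and whole directories of yaml "just work" with this approach.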
My solution: …
My workaround:

```hcl
resource "kubectl_manifest" "policies" {
  for_each  = fileset(var.policy_directory, "*.yaml")
  yaml_body = file("${var.policy_directory}/${each.value}")
}
```
I guess this is a common issue that has been discussed a lot.

I have this: …

I got: …

Not sure if there are any best practices or solutions.