
Any good solutions for: The "for_each" value depends on resource attributes that cannot be determined until apply #141

Open
yongzhang opened this issue Nov 16, 2021 · 18 comments


@yongzhang

yongzhang commented Nov 16, 2021

I guess this is a common issue that has been discussed a lot:

I have this:

data "template_file" "app" {
  template = file("templates/k8s_app.yaml")

  vars = {
    db_host = module.db.this_rds_cluster_endpoint  # whatever attribute is only known after apply
  }
}

data "kubectl_file_documents" "app" {
  content = data.template_file.app.rendered
}

resource "kubectl_manifest" "app" {
  for_each = data.kubectl_file_documents.app.manifests

  yaml_body = each.value
}

I got:

Error: Invalid for_each argument
│
│   on k8s_app.tf line 36, in resource "kubectl_manifest" "app":
│   36:   for_each = data.kubectl_file_documents.app.manifests
│     ├────────────────
│     │ data.kubectl_file_documents.app.manifests is a map of string, known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target
│ argument to first apply only the resources that the for_each depends on.

Not sure if there are any best practices or solutions.

@avnerenv0

At least the docs should be changed?
It seems the current example isn't working.

@conchaox

Getting the same error with the use case below. This worked a couple of days ago with another module; not sure why it's not working this way.

locals {
  git_secret_name  = "git-creds"
  okta_secret_name = "okta-creds"
}

data "kubectl_path_documents" "external_secrets" {
  pattern = "${path.module}/external-secrets.yaml"

  vars = {
    namespace  = kubernetes_namespace.namespace.metadata[0].name,
    project_id = data.google_project.project.project_id

    git_secret_name  = local.git_secret_name
    okta_secret_name = local.okta_secret_name
  }
}

resource "kubectl_manifest" "external_secrets" {
  for_each  = data.kubectl_path_documents.external_secrets.manifests
  yaml_body = each.value

  override_namespace = kubernetes_namespace.namespace.metadata[0].name
}

@yongzhang (Author)

For now I removed all of the to-be-computed variables from vars.
Instead, I create ConfigMaps or Secrets with the kubernetes provider and reference them in the k8s manifest YAML.
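
Something like this (just a rough sketch; the Secret name/namespace are placeholders, and the Deployment YAML then reads the value via a secretKeyRef instead of a template variable):

resource "kubernetes_secret" "app_db" {
  metadata {
    name      = "app-db"   # placeholder name
    namespace = "default"  # placeholder namespace
  }

  data = {
    # the computed endpoint lives in the Secret, so the rendered manifests
    # (and the for_each over them) no longer depend on an unknown value
    db_host = module.db.this_rds_cluster_endpoint
  }
}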

@mrgasparov

The workaround I found involves using the fileset function to get a count of the number of files. For example:

data "kubectl_path_documents" "proxy_docs" {
  pattern = "${path.module}/values/proxy/*.yaml"
  vars = {
    namespace = kubernetes_namespace.proxy.id
  }
}

resource "kubectl_manifest" "proxy_manifests" {
  count     = length(fileset(path.module, "/values/proxy/*.yaml"))
  yaml_body = element(data.kubectl_path_documents.proxy_docs.documents, count.index)
}

Not perfect but seems to do the trick.

@reubenavery

reubenavery commented Mar 25, 2022

It would really help to convert/clone these data sources into resources; that would be a clean workaround.

@vikaskoppineedi

I have the same issue with the following code.

data "template_file" "container_insights" {
  depends_on = [
    module.eks,
    module.irsa,
    helm_release.aws_vpc_cni
  ]
  template = file("${path.module}/charts-manifests-templates/cloudwatch-insights.yaml.tpl")
  vars = {
    iam_role_arn = module.irsa.container_insights_fluentd[0].iam_role_arn
  }
}

data "kubectl_file_documents" "container_insights" {
  depends_on = [
    data.template_file.container_insights,
  ]
  content = data.template_file.container_insights.rendered
}

resource "kubectl_manifest" "container_insights" {
  depends_on = [
    data.kubectl_file_documents.container_insights,
    data.template_file.container_insights,
  ]
  for_each  = data.kubectl_file_documents.container_insights.manifests
  yaml_body = each.value
}

@mmerickel

It's happy to plan until you change something, like adding or removing files from the folder... this is insanely frustrating. :-) As @reubenavery said, I have seen some providers use resources instead of data sources to work around this issue in Terraform.

@mmerickel

Does anyone know how to unblock Terraform when you get into this state? It was working before; then I removed a few files from the manifests folder and now it's angry.

@mmerickel

mmerickel commented May 11, 2022

The workaround I found only works for kubectl_filename_list and not kubectl_file_documents. You can use the equivalent fileset function in Terraform to get rid of the data source, so the following:

data "kubectl_filename_list" "this" {
  pattern = "${path.module}/manifests/*.yaml"
}

resource "kubectl_manifest" "this" {
  for_each = { for k in data.kubectl_filename_list.this.matches : k => k }
  yaml_body = templatefile(each.value, {
    foo = "bar"
  })
}

can be completely replaced by:

resource "kubectl_manifest" "this" {
  for_each = fileset(path.module, "manifests/*.yaml")
  yaml_body = templatefile("${path.module}/${each.value}", {
    foo = "bar"
  })
}

Sadly this does not work for kubectl_file_documents, so you need to have every k8s resource in a separate file.

@sebandgo

sebandgo commented Jul 3, 2022

I have the same issue with kubectl_manifest, and I noticed that the error pops up when you have more than two kubectl_manifest instances in your code. I have three: the first two work perfectly fine, but when I add a third one, only that particular one fails and the first two keep working as normal. Same code, like for like, just the vars are different.

@eytanhanig

This is literally the recommended method for using kubectl_manifest. Is there a timeframe for fixing this bug?

@eytanhanig

Here's a workaround I came up with:

locals {
  crds_split_doc  = split("---", file("${path.module}/crds.yaml"))
  crds_valid_yaml = [for doc in local.crds_split_doc : doc if try(yamldecode(doc).metadata.name, "") != ""]
  crds_dict       = { for doc in toset(local.crds_valid_yaml) : yamldecode(doc).metadata.name => doc }
}

resource "kubectl_manifest" "crds" {
  for_each  = local.crds_dict
  yaml_body = each.value
}

vijay-veeranki added a commit to ministryofjustice/cloud-platform-terraform-kuberhealthy that referenced this issue Oct 7, 2022
As template_file cannot apply multiple manifests, use fileset instead

gavinbunney/terraform-provider-kubectl#141 (comment)
@devops-corgi

Super interested to see this fixed as well. Terraform completely fails to work, seemingly at random.

@zack-is-cool

Here are some interesting notes on this:
https://github.com/clowdhaus/terraform-for-each-unknown
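
The gist, as I understand it: the keys of a for_each map must be known at plan time, but the values can stay unknown, so you build the map from something static (file names, document indexes) and keep computed attributes only on the value side. A rough, untested sketch along those lines (the manifests/*.yaml path and module.db are placeholders):

resource "kubectl_manifest" "app" {
  # keys are static file names, known at plan time; the computed endpoint
  # only appears in the values, which are allowed to be unknown
  for_each = {
    for f in fileset(path.module, "manifests/*.yaml") :
    f => templatefile("${path.module}/${f}", {
      db_host = module.db.this_rds_cluster_endpoint
    })
  }

  yaml_body = each.value
}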

@online01993

online01993 commented Aug 17, 2023

Thanks to @eytanhanig, his solution worked for me.
I would like to extend it by dropping the local variables and adding a unique ID, which in my case solves the problem of non-unique names in yamldecode(doc).metadata.name:

resource "kubectl_manifest" "k8s_kube-dashboard" {
  for_each = {
    for i in toset([
      for index, i in (split("---", templatefile("${path.module}/scripts/kube-dashboard.yml.tpl", {
        kube-dashboard_nodePort = "${var.kube-dashboard_nodePort}"
        })
      )) :
      {
        "id"  = index
        "doc" = i
      }
      #if try(yamldecode(i).metadata.name, "") != ""
    ])
    : i.id => i
  }
  yaml_body = each.value.doc
}

@mmerickel

FWIW the "best" way I have found to replace this plugin is to define a local helm chart and use the helm_release instead.

Basically boils down to defining a folder like:

chart/
  Chart.yaml
  templates/
    custom.yaml

# Chart.yaml
apiVersion: v2
name: local-manifests
version: 0.0.0
type: application

and a resource like

resource "helm_release" "local" {
  name      = "local-manifests"
  chart     = "${path.module}/chart"
  namespace = var.namespace

  values = [
    yamlencode({
      # pass in whatever vars you want to your templates
    })
  ]
}

I'm not gonna say it's ideal to do it this way, but it handles any type of k8s YAML you want to throw at it very well, including multi-doc YAML files and directories full of YAML.

@davidqhr

My solution:

locals {
  prometheus_objects    = split("\n---\n", file("${path.module}/prometheus.yaml"))
  prometheus_valid_yaml = [for doc in local.prometheus_objects : doc]
  prometheus_dict       = { for doc in toset(local.prometheus_valid_yaml) : format("%s/%s/%s", yamldecode(doc).apiVersion, yamldecode(doc).kind, yamldecode(doc).metadata.name) => doc }
}

resource "kubectl_manifest" "prometheus" {
  for_each          = local.prometheus_dict
  yaml_body         = each.value
  server_side_apply = true
}

@PhilipSchmid

My workaround:

resource "kubectl_manifest" "policies" {
  for_each  = fileset(var.policy_directory, "*.yaml")
  yaml_body = file("${var.policy_directory}/${each.value}")
}
