[kubectl_path_documents] for_each causes map cannot be determined until apply error #153
Comments
Can you maybe try a slightly different pattern? I am doing something similar:
locals {
  files = { for fileName in fileset(path.module, "static/**/[a-z]*.yaml") : fileName => templatefile("${path.module}/${fileName}", {}) }
}

resource "kubectl_manifest" "example" {
  for_each  = local.files
  yaml_body = each.value
}

Output:
I'm having the same issue: when I apply, everything works, but later on I suddenly get the same error.
@alekc - Thanks for the prompt reply! I only have one Kubernetes resource per YAML file, and I currently only have a single file; I am just about to create additional node pools and classes. To be clear, this is only occurring for me when using kubectl_path_documents.

It's possible that the first apply did work, as per @erezhazan1, and then subsequent plans didn't. This terraform project configures a lot, including the EKS cluster and all the VPC/IAM/KMS related things, and as I have been iterating I've slowly been adding more resources to the project and fixing issues with related AWS services.

Regarding your suggestion, I am effectively doing what you have written, using fileset (see the code below).

Edit: I don't know if this is also a difference, but I am using S3 as the terraform backend with DynamoDB locking. I would struggle to see this being an issue though, as the state should be the same regardless of the backend.

// Amazon Linux 2023 node classes
resource "kubectl_manifest" "al2023_node_classes" {
for_each = fileset("${abspath(path.module)}/class", "al2023*.yaml")
yaml_body = templatefile("${abspath(path.module)}/class/${each.value}", {
karpenter_node_role = var.karpenter.node_role_name
cluster_name = var.cluster.name
authorized_keys = local.authorized_keys_sh
})
}
// Node pools
resource "kubectl_manifest" "node_pools" {
for_each = fileset("${abspath(path.module)}/pool", "*.yaml")
yaml_body = file("${abspath(path.module)}/pool/${each.value}")
} 🙏 |
Not sure about the backend. That's the most reasonable explanation coming to my mind.
Hi @alekc, were you able to replicate the plan issue after apply?

Initially, I find this strange: many other data sources, like terraform remote state or CloudFormation outputs, also have lazy evaluation similar to this situation, where the values of the referenced attributes are not determined until apply, and they are able to pass planning even though resources from a different module reference them. My only guess is that the trouble is related to the fact that the data source generates a map of unknown length with unknown keys, which now throws the error.

I'm not sure if the data source can be updated to resolve this; I guess it is difficult when it could be the case that the YAML is only generated onto the file system by some other step during apply, and therefore you'd want it to remain dynamic and unresolved at the planning stage. The main benefit of

I did find another, unrelated error when I changed the resource name inside the Kubernetes manifest but the filename remained the same - in this case it failed to apply but passed planning, using the workaround from my last comment.

Looking at the
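To illustrate the guess above (my own sketch, not something confirmed in this thread): for_each needs every key of the map to be known at plan time, which holds for a map built from local files but not for a map whose keys only materialise once the data source is read.

locals {
  // Keys come from the local filesystem, so they are known during plan.
  local_manifests = {
    for f in fileset("${path.module}/manifests", "*.yaml") :
    f => file("${path.module}/manifests/${f}")
  }
}

resource "kubectl_manifest" "known_keys" {
  for_each  = local.local_manifests
  yaml_body = each.value
}

// By contrast, a map produced by a data source that is only read at apply
// has unknown keys, and using it in for_each fails the plan with the
// "Invalid for_each argument ... cannot be determined until apply" error.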
Preamble
This is a continuing issue when using for_each in the kubectl_manifest resource together with the kubectl_path_documents data source. It appears to never have been resolved in any version of the terraform-provider-kubectl provider. I'm wondering if there is a more deterministic way to ensure that whatever it thinks cannot be determined about the manifest map can be determined (for example by validating during plan). The terraform issue I reference at the bottom of my description states:

Does this mean the "modern" Provider Framework should be adopted to avoid this issue?
Can the resource be improved to resolve during the plan phase?
Issue
I believe the issue is due to newer versions of terraform not resolving the map of manifests/documents during the plan phase. I am using Terraform v1.9.2. I am trying to deploy a karpenter EC2NodeClass template from a sub-directory to an EKS cluster running Kubernetes v1.29. We deploy the terraform project using GitLab CI, and it fails if terraform plan fails.

My code runs inside a sub-module of my terraform project, not at the top-level main.tf, but I wouldn't imagine this should impact things.
main.tf
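The original main.tf block did not survive this capture. As a stand-in, here is a minimal sketch of the documented kubectl_path_documents pattern being described; the directory, resource names, and variables are illustrative assumptions, not the author's actual file:

data "kubectl_path_documents" "node_classes" {
  pattern = "${path.module}/class/al2023*.yaml"
  vars = {
    cluster_name = var.cluster.name // values sourced from other modules
  }
}

resource "kubectl_manifest" "al2023_node_classes" {
  // The provider documents a manifests map attribute for use with for_each;
  // its keys are only known once the data source is read, which is what
  // terraform plan complains about.
  for_each  = data.kubectl_path_documents.node_classes.manifests
  yaml_body = each.value
}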
I pass in some variables sourced from other modules; however, this error also occurs when no variables are being applied. I have a karpenter NodePool manifest file that uses the same structure as the documentation, and it also suffers from the same issue.

When doing a terraform plan, I get the following error:
If I try to use the count method with the documents attribute instead, I get a similar error:
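For reference, the count-based variant mentioned here, sketched from the provider's documented documents list attribute (again illustrative rather than the author's exact code):

data "kubectl_path_documents" "node_classes_docs" {
  pattern = "${path.module}/class/al2023*.yaml"
}

resource "kubectl_manifest" "al2023_node_classes" {
  // count must also be known at plan time, so the same class of error appears.
  count     = length(data.kubectl_path_documents.node_classes_docs.documents)
  yaml_body = element(data.kubectl_path_documents.node_classes_docs.documents, count.index)
}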
Related Issues
There is a long history of this issue, and it seems to be related to the last issue (two links) in this list.
cannot be determined until apply even if the depends_on dependency is known. hashicorp/terraform#34391
Work-around
The above linked comment does work around the issue, but needless to say it remains an ongoing problem for me, whether I apply plain manifests from a sub-directory or use variable interpolation into the manifest file. I can even interpolate the values using the templatefile function, so this isn't a blocker, but the documentation as provided for this module doesn't work with my current version of terraform.
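For completeness, a condensed sketch of the fileset/templatefile work-around described above and in the comments: build the map from local files so its keys are known at plan time, and render variables with templatefile (paths and variable names are illustrative):

// Work-around: keys come from fileset(), which is resolved during plan.
resource "kubectl_manifest" "node_classes" {
  for_each = fileset("${path.module}/class", "*.yaml")
  yaml_body = templatefile("${path.module}/class/${each.value}", {
    cluster_name = var.cluster.name
  })
}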