
kubernetes_manifest: Error: Failed to determine GroupVersionResource for manifest #1583

Open
bailey84j opened this issue Jan 26, 2022 · 23 comments

Comments

@bailey84j

When trying to deploy the jetstack module as part of the AWS ELB module, it fails because the api_group output is only known after apply.

Terraform Version, Provider Version and Kubernetes Version

Terraform v1.0.4
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.73.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.7.1
+ provider registry.terraform.io/hashicorp/null v3.1.0

Affected Resource(s)

  • certificate_kube_system_aws_load_balancer_serving_cert

Terraform Configuration Files

resource "kubernetes_manifest" "certificate_kube_system_aws_load_balancer_serving_cert" {
  manifest = {
    "apiVersion" = "${module.jetstack-certmanager.api_group}/v1"
    "kind"       = "Certificate"
    "metadata" = {
      "labels" = {
        "app.kubernetes.io/name" = var.name
      }
      "name"      = "aws-load-balancer-serving-cert"
      "namespace" = var.namespace
    }
    "spec" = {
      "dnsNames" = [
        "aws-load-balancer-webhook-service.kube-system.svc",
        "aws-load-balancer-webhook-service.kube-system.svc.cluster.local",
      ]
      "issuerRef" = {
        "kind" = "Issuer"
        "name" = "aws-load-balancer-selfsigned-issuer"
      }
      "secretName" = "aws-load-balancer-webhook-tls"
    }
  }
}

Debug Output

GIST

Panic Output

Steps to Reproduce

  1. terraform apply -->

Expected Behavior

The kubernetes_manifest resource should recognize that a key attribute is only known after apply and skip validating it during plan.

Actual Behavior

The provider fails with "Error: Failed to determine GroupVersionResource for manifest".

Important Factoids

None

References

None

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@bailey84j bailey84j added the bug label Jan 26, 2022
@github-actions github-actions bot removed the bug label Jan 26, 2022
@francardoso93

Same issue here!

@alexsomesan
Member

Hi!

As far as I can tell, the interpolation of ${module.jetstack-certmanager.api_group} in the apiVersion attribute is at fault here. The problem is that if that resource / module isn't already present in state (because it is being created at the same time as this resource), its value isn't yet available at the early stages of the plan operation, where the Kubernetes provider in fact requires it. The value therefore interpolates as null, and the provider tries to locate the "Certificate" resource in the "/v1" group, which holds only the cluster's built-in resources.

First question: is the value of module.jetstack-certmanager.api_group really dynamically generated? If not, I would advise just using plain text for the apiVersion value.

If yes, you have to split the operation into two applies: the first apply creates module.jetstack-certmanager, and the second creates any resources that need to interpolate its values into apiVersion.
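For the first option, a minimal sketch (assuming the Certificate CRD is served from the standard cert-manager.io group, as in a default cert-manager install):

resource "kubernetes_manifest" "certificate_kube_system_aws_load_balancer_serving_cert" {
  manifest = {
    # A literal apiVersion lets the provider resolve the GroupVersionResource at plan time
    "apiVersion" = "cert-manager.io/v1"
    "kind"       = "Certificate"
    "metadata" = {
      "name"      = "aws-load-balancer-serving-cert"
      "namespace" = var.namespace
    }
    # ... rest of the manifest unchanged from the configuration above
  }
}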

Let me know if this advice was helpful.

@dsiguero

dsiguero commented Mar 12, 2022

Apparently it also happens when no interpolation is used in the kubernetes_manifest resource. I'm getting the same Failed to determine GroupVersionResource for manifest error.

env

Terraform v1.0.10
on darwin_amd64
+ provider registry.terraform.io/gavinbunney/kubectl v1.13.1
+ provider registry.terraform.io/hashicorp/google v3.90.1
+ provider registry.terraform.io/hashicorp/helm v2.4.1
+ provider registry.terraform.io/hashicorp/http v2.1.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.8.0
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/hashicorp/time v0.7.2

Relevant code:

resource "kubernetes_manifest" "frontend_config" {
  manifest = {
    apiVersion = "networking.gke.io/v1"
    kind = "FrontendConfig"
    metadata = {
      name = "argocd-frontend-config"
      namespace = "argocd"
      generation = 1
    }
    spec = {
      redirectToHttps = {
        enabled = true
      }
    }
  }
}

Output:

╷
│ Error: Failed to determine GroupVersionResource for manifest
│
│   with kubernetes_manifest.frontend_config,
│   on gke-ingress.tf line 35, in resource "kubernetes_manifest" "frontend_config":
│   35: resource "kubernetes_manifest" "frontend_config" {
│
│ cannot select exact GV from REST mapper
╵

@marcofranssen

marcofranssen commented Apr 7, 2022

Facing similar issues with CRDs.

Our scenario

  • Install crossplane using helm_release.
  • Install the AWS crossplane provider using kubernetes_manifest.
  • Configure the aws_provider using kubernetes_manifest ← this fails with the same error, because the CRD's apiVersion does not exist yet (sketched below).
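
A minimal sketch of that third step (assuming the crossplane-contrib provider-aws package, whose ProviderConfig lives in the aws.crossplane.io/v1beta1 group; names and the secret reference are illustrative):

resource "kubernetes_manifest" "aws_provider_config" {
  depends_on = [kubernetes_manifest.aws_provider] # the Provider installed in step 2 (illustrative address)

  # Fails at plan time with the same error: the ProviderConfig CRD does not exist
  # yet, so the GroupVersionResource lookup has nothing to match against.
  manifest = {
    "apiVersion" = "aws.crossplane.io/v1beta1"
    "kind"       = "ProviderConfig"
    "metadata" = {
      "name" = "default"
    }
    "spec" = {
      "credentials" = {
        "source" = "Secret"
        "secretRef" = {
          "namespace" = "crossplane-system"
          "name"      = "aws-creds"
          "key"       = "creds"
        }
      }
    }
  }
}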

Is this something that is resolvable in this module?

@nicraMarcin

nicraMarcin commented Apr 9, 2022

I have the same issue.

$> terraform --version
Terraform v1.0.11
on linux_amd64
+ provider registry.terraform.io/hashicorp/kubernetes v2.10.0

resource "kubernetes_manifest" "customresourcedefinition_kubegres_kubegres_reactive_tech_io" {
  manifest = {
    "apiVersion" = "apiextensions.k8s.io/v1"
    "kind" = "CustomResourceDefinition"
    "metadata" = {
      "annotations" = {
        "controller-gen.kubebuilder.io/version" = "v0.4.1"
      }
      "creationTimestamp" = null
      "name" = "kubegres.kubegres.reactive-tech.io"
    }
    "spec" = {
      "group" = "kubegres.reactive-tech.io"
      "names" = {
        "kind" = "Kubegres"
        "listKind" = "KubegresList"
        "plural" = "kubegres"
        "singular" = "kubegres"
      }
      "scope" = "Namespaced"
      "versions" = [....]
    }
  }
}

resource "kubernetes_manifest" "kubegres_db_postgres_postgres" {
  depends_on = [
    kubernetes_manifest.customresourcedefinition_kubegres_kubegres_reactive_tech_io
  ]
  manifest = {
    "apiVersion" = "kubegres.reactive-tech.io/v1"
    "kind" = "Kubegres"
    "metadata" = {
      "name" = "postgres"
      "namespace" = kubernetes_manifest.namespace_db_postgres.manifest.metadata.name
    }
 # ....
 }
}

and this generates an error:

$> terraform plan
╷
│ Error: Failed to determine GroupVersionResource for manifest
│ 
│   with kubernetes_manifest.kubegres_db_postgres_postgres,
│   on postgres-cluster.tf line 33, in resource "kubernetes_manifest" "kubegres_db_postgres_postgres":
│   33: resource "kubernetes_manifest" "kubegres_db_postgres_postgres" {
│ 
│ no matches for kind "Kubegres" in group "kubegres.reactive-tech.io"

but when I comment out the "kubegres_db_postgres_postgres" resource, apply the CustomResourceDefinition first, and then add kubegres_db_postgres_postgres back, it works.
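
The same ordering can be achieved without commenting anything out by staging the apply with -target (a sketch using the resource addresses above):

$> terraform apply -target=kubernetes_manifest.customresourcedefinition_kubegres_kubegres_reactive_tech_io
$> terraform apply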

@marcofranssen

Similarly, on a terraform destroy this resource still runs all kinds of checks first. If the resource doesn't exist for some reason, the terraform destroy will fail.

@yaroslav-nakonechnikov

I feel the same pain...

My setup is 2 repos:

  • the first prepares EKS and installs CRDs on it
  • the second installs resources based on those CRDs

and now I see the issue
"Error: Failed to determine GroupVersionResource for manifest"
when running terraform destroy or terraform refresh on the second repo.

@ibadullaev-inc4

ibadullaev-inc4 commented Dec 26, 2022

Same issue.

  1. Use helm_release to install CRDs
  2. Install kubernetes_manifest resources using those CRDs
Error: Failed to determine GroupVersionResource for manifest
with kubernetes_manifest.


no matches for kind "xxxx" in group "XXXXXXX"

@pessoa

pessoa commented Mar 3, 2023

Also experiencing the same here.

@koalalorenzo

Having the same issue. It would be easier to deploy everything at once if we could just skip CRD validation! If we could get an option in the Terraform resource to skip API validation for CRDs that are not there yet, it would work like a charm!

@quantumsheep

quantumsheep commented Apr 9, 2023

Same issue with the following configuration:

resource "helm_release" "rabbit_cluster_operator" {
  name = "rabbitmq-cluster-operator"

  repository = "https://charts.bitnami.com/bitnami"
  chart      = "rabbitmq-cluster-operator"
}

resource "kubernetes_manifest" "documents_rabbitmq_operator" {
  depends_on = [helm_release.rabbit_cluster_operator]

  manifest = {
    "apiVersion" = "rabbitmq.com/v1beta1"
    "kind"       = "RabbitmqCluster"
    "metadata" = {
      "name"      = "rabbit"
      "namespace" = "default"
    }
  }
}

It would be great to add an option to the existing wait argument to wait for a named API before running the creation.

jferris added a commit to thoughtbot/flightdeck that referenced this issue Apr 26, 2023
Applying Kubernetes manifests for CRDs in the same apply as the Helm
chart currently does not work in the Terraform Kubernetes provider.
Attempting to do so results in failures when looking up the CRD version:

https://github.com/thoughtbot/flightdeck/actions/runs/4810820017/jobs/8564458459

We can try doing this again once this issue is resolved:
hashicorp/terraform-provider-kubernetes#1583

This reverts commits:

- d5e8b32
- 139c284
@Tadcas

Tadcas commented Apr 27, 2023

The same issue with argocd app of apps (argoproj.io apiVersion):

resource "helm_release" "argocd" {
name = "argocd"
repository = "https://argoproj.github.io/argo-helm"
chart = "argo-cd"
version = "5.13.2"
create_namespace = "true"
namespace = "argocd"
lint = true
}

resource "kubernetes_secret" "argocd_secret" {
depends_on = [helm_release.argocd]
metadata {
labels = {
"argocd.argoproj.io/secret-type" = "repository"
}
name = "argocd-deployment-secret"
namespace = "argocd"
}
data = {
password = "${var.argo_cd}"
url = "https://gitlab.com/my-group/deployment.git"
username = "argocd"
}
}

resource "kubernetes_manifest" "argocd_application" {
depends_on = [kubernetes_secret.argocd_secret]
manifest = {
apiVersion = "argoproj.io/v1alpha1"
kind = "Application"
metadata = {
name = "argocd-sync"
namespace = "argocd"
}
spec = {
destination = {
namespace = "argocd"
server = "https://kubernetes.default.svc"
}
project = "default"
source = {
path = "argocd/overlays/${var.environment_name}"
repoURL = "https://gitlab.com/my-group/deployment.git"
targetRevision = "HEAD"
}
syncPolicy = {
automated = {}
}
}
}
}

│ Error: Failed to determine GroupVersionResource for manifest

│ with kubernetes_manifest.argocd_application,
│ on main.tf line 329, in resource "kubernetes_manifest" "argocd_application":
│ 329: resource "kubernetes_manifest" "argocd_application" {

│ no matches for kind "Application" in group "argoproj.io"

If resource "kubernetes_manifest" "argocd_application" is commented and I run terraform plan / apply, everything is working. Only after that I am able to terraform plan / apply resource "kubernetes_manifest" "argocd_application".
If trying to plan everything at he same time, getting provided error.

@a0s

a0s commented Jul 28, 2023

So, there is no way to disable the check for CRD existence during the planning phase?

@framctr

framctr commented Sep 6, 2023

Same issue as #1367. Please add a 👍🏻 to that issue to help prioritize the request.

@ju4nmg

ju4nmg commented Feb 14, 2024

Abandoned issue?

@gianarb

gianarb commented Mar 31, 2024

Same issue here

@noamgreen

Same issue here

I tried:

wait {
  condition {
    type   = "ContainersReady"
    status = "True"
  }
}

but it is not working.

@javierguzman

So does anyone have a workaround for this issue? Thank you in advance and regards

bl-robinson added a commit to bl-robinson/terraform-k8s-cluster-bootstrap that referenced this issue Apr 28, 2024
@aiell0

aiell0 commented May 2, 2024

Would love a workaround here as I just hit this as well :(

@Bharath509

I'm also facing the same issue.

@polanjir

polanjir commented Jun 10, 2024

Hi all, I encountered a similar problem with the ClusterIssuer CRDs of cert-manager and fixed it with the Helm provider, because it doesn't validate CRDs against the Kubernetes API the way the kubernetes_manifest Terraform resource does. Here is my example:

resource "helm_release" "cert_manager" {
  name              = "cert-manager"
  repository        = "https://charts.jetstack.io"
  chart             = "cert-manager"
  namespace         = var.cert_manager_namespace
  create_namespace  = true
  version           = var.cert_manager_release
  dependency_update = true
  values = [
    yamlencode({
      installCRDs  = true
      replicaCount = 2
    })
  ]
}

resource "kubernetes_secret" "k8s_secret" {
  depends_on = [helm_release.cert_manager]
  for_each   = { for secret in var.secretsmanager_secrets : secret.k8s_secret_name => secret }
  metadata {
    name      = each.key
    namespace = var.cert_manager_namespace
  }
  data = {
    (each.value.secret_key) = jsondecode(data.aws_secretsmanager_secret_version.current[each.value.name].secret_string)[each.value.secret_key]
  }
}

resource "helm_release" "cluster_issuers" {
  depends_on = [helm_release.cert_manager, kubernetes_secret.k8s_secret]
  name       = "cluster-issuers"
  repository = "https://bedag.github.io/helm-charts/"
  chart      = "raw"
  version    = "2.0.0"
  namespace  = var.cert_manager_namespace

  values = [
    yamlencode({
      resources = var.cert_manager_manifests_cluster_issuers
    })
  ]
}
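
For reference, a sketch of what that cluster-issuers variable could contain (a hypothetical self-signed ClusterIssuer; the bedag raw chart just renders whatever manifests are passed in resources, so nothing is validated against the API at plan time):

variable "cert_manager_manifests_cluster_issuers" {
  default = [
    {
      apiVersion = "cert-manager.io/v1"
      kind       = "ClusterIssuer"
      metadata = {
        name = "selfsigned-issuer"
      }
      spec = {
        selfSigned = {}
      }
    }
  ]
}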

@meysam81

In case anyone gets here and hasn't figured it out yet...
I was facing the same issue as everyone else in this thread.

The way I solved it was to convert my YAML-encoded string into HCL syntax, allowing the dynamic values to enforce a dependency between resources and forcing the child resource to wait for the parent.

Here's the concrete example for further understanding.

I converted this resource:

resource "kubernetes_manifest" "this" {
  manifest = yamldecode(<<-EOF
  apiVersion: external-secrets.io/v1beta1
  kind: ExternalSecret
  metadata:
    name: pgpassword
    namespace: default
  spec:
    data:
      - remoteRef:
          key: ${azurerm_key_vault_secret.this.name} # <- it fails because this is not a static value
        secretKey: PGPASSWORD
    refreshInterval: 1h
    secretStoreRef:
      kind: ClusterSecretStore
      name: azure-keyvault
    EOF
  )
}

To this one:

resource "kubernetes_manifest" "this" {
  manifest = {
    apiVersion = "external-secrets.io/v1beta1"
    kind       = "ExternalSecret"
    metadata = {
      name      = "pgpassword"
      namespace = "default"
    }
    spec = {
      data = [
        {
          remoteRef = {
            key = azurerm_key_vault_secret.this.name # but this succeeds because this is the expected TF syntax
          }
          secretKey = "PGPASSWORD"
        }
      ]
      refreshInterval = "1h"
      secretStoreRef = {
        kind = "ClusterSecretStore"
        name = "azure-keyvault"
      }
    }
  }
}

And this solved it for me. 🚀

@0x5d

0x5d commented Dec 9, 2024

For anyone still struggling with this: consider moving all resources that install CRDs (e.g. helm_release, kubernetes_manifest) into a separate Terraform module, and the resources that depend on them into another one. Apply the former first, and then the latter.
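
A sketch of that layout (directory names are illustrative; each root module keeps its own state and is applied separately):

$> (cd cluster-crds && terraform apply)   # helm_release / kubernetes_manifest resources that install the CRDs
$> (cd workloads && terraform apply)      # kubernetes_manifest resources that consume those CRDs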
