"no matches for kind "ClusterIssuer" in group "cert-manager.io"" With terraform plan #3

Closed
everspader opened this issue Oct 27, 2021 · 13 comments
Labels
question Further information is requested

Comments

@everspader

Hi, I am getting the following error when creating the helm_release for cert-manager and the ClusterIssuer in the same terraform apply, because the plan step fails.

I did a bit of Googling and it seems this happens because the CRDs are not yet installed at plan time, so the lookup fails.

Is this a known issue and is there a way to circumvent it?

│ Error: Failed to determine GroupVersionResource for manifest
│ 
│   with module.k8s_base.kubernetes_manifest.cluster_issuer,
│   on ../../modules/k8s_base/main.tf line 35, in resource "kubernetes_manifest" "cluster_issuer":
│   35: resource "kubernetes_manifest" "cluster_issuer" {
│ 
│ no matches for kind "ClusterIssuer" in group "cert-manager.io"
@bohdantverdyi bohdantverdyi added the bug Something isn't working label Nov 2, 2021
bohdantverdyi added a commit that referenced this issue Nov 2, 2021
@bohdantverdyi
Member

Hello @everspader, are you trying to add a ClusterIssuer outside of the module?

kubernetes_manifest doesn't support plan & apply in a single run when the helm_release is not deployed yet, because the CRD it refers to does not exist at plan time.

You can use kubectl_manifest instead.
You can also add your custom ClusterIssuer to the module via the cluster_issuer_yaml variable.
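
For reference, a minimal sketch of the kubectl_manifest approach (using the gavinbunney/kubectl provider); the issuer name, e-mail, ACME server, secret name, solver and the helm_release reference are all placeholders/assumptions, not values from this issue:

resource "kubectl_manifest" "cluster_issuer" {
  yaml_body = <<-YAML
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod            # placeholder issuer name
    spec:
      acme:
        email: admin@example.com        # placeholder e-mail
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: letsencrypt-prod-key    # placeholder secret name
        solvers:
          - http01:
              ingress:
                class: nginx
  YAML

  # Wait for the cert-manager release so the CRDs exist before this is applied.
  # "helm_release.cert_manager" is an assumed resource name; adjust to your config.
  depends_on = [helm_release.cert_manager]
}

Unlike kubernetes_manifest, kubectl_manifest does not need to resolve the CRD at plan time, which is why it works in a single plan & apply.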

@bohdantverdyi bohdantverdyi added question Further information is requested and removed bug Something isn't working labels Nov 2, 2021
@benbonnet

benbonnet commented Mar 23, 2022

@bohdantverdyi sorry to dig this one up; I'm having the same problem. I'm not trying to create the ClusterIssuer from outside, just using:

module "cert_manager" {
  source = "terraform-iaac/cert-manager/kubernetes"
  cluster_issuer_email = "[email protected]"
  cluster_issuer_name = "cert-manager-global"
  cluster_issuer_private_key_secret_name = "cert-manager-private-key"
} 

But I'm getting the following: cert-manager-global failed to create kubernetes rest client for update of resource: resource [cert-manager.io/v1/ClusterIssuer] isn't valid for cluster, check the APIVersion and Kind fields are valid

I'm on GKE 1.21.9.

@bohdantverdyi
Member

bohdantverdyi commented Mar 23, 2022

@benbonnet it looks like cert-manager wasn't installed correctly. Can you try to re-apply and check whether cert-manager is running in your GKE cluster?

@benbonnet

benbonnet commented Mar 23, 2022

Thanks for the super quick response!

The first apply timed out (cert-manager-global failed to create kubernetes rest client for update of resource: Get "https://xx.xxx.xxx.xxx/api?timeout=32s": dial tcp xx.xxx.xxx.xxx:443: i/o timeout), although everything was running well (the cert-manager pods were all in a Running state).

I re-applied and things ended well. Everything is OK.

@bohdantverdyi
Member

Have you configured the kubectl provider?

@bohdantverdyi
Member

It couldn't connect to the Kubernetes API. The problem is either on your side or in the provider configuration (helm, kubernetes, kubectl).

@benbonnet

benbonnet commented Mar 23, 2022

It happened on the very first apply (first it creates the cluster, node pool, etc., then the provider "kubernetes", then the module "cert_manager").

...bunch of tf code...


provider "helm" {
  kubernetes {
    host  = "https://${google_container_cluster.this.endpoint}"
    token = data.google_client_config.provider.access_token
    cluster_ca_certificate = base64decode(
      google_container_cluster.this.master_auth[0].cluster_ca_certificate,
    )
  }
}

resource "google_compute_address" "ingress_ip_address" {
  name = "${var.app_name}-ip"
}

module "nginx-controller" {
  source  = "terraform-iaac/nginx-controller/helm"

  ip_address = google_compute_address.ingress_ip_address.address
}

provider "kubernetes" {
  host  = "https://${google_container_cluster.this.endpoint}"
  token = data.google_client_config.provider.access_token
  cluster_ca_certificate = base64decode(
    google_container_cluster.this.master_auth[0].cluster_ca_certificate,
  )
}

module "cert_manager" {
  source = "terraform-iaac/cert-manager/kubernetes"
  cluster_issuer_email = "[email protected]"
  cluster_issuer_name = "cert-manager-global"
  cluster_issuer_private_key_secret_name = "cert-manager-private-key"
}

Anyhow, subsequent applies are super smooth and everything is going fine.

@bohdantverdyi
Member

Do you have helm & kubectl provider settings?

@benbonnet

benbonnet commented Mar 23, 2022

Yup, just above the kubernetes provider (updated above).

@bohdantverdyi
Member

bohdantverdyi commented Mar 23, 2022

I don't see a provider "kubectl" block.
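
For what it's worth, a sketch of what that missing block could look like, mirroring the kubernetes provider configuration posted above (load_config_file is specific to the gavinbunney/kubectl provider; the GKE references are the ones already used in the config above):

provider "kubectl" {
  host  = "https://${google_container_cluster.this.endpoint}"
  token = data.google_client_config.provider.access_token
  cluster_ca_certificate = base64decode(
    google_container_cluster.this.master_auth[0].cluster_ca_certificate,
  )
  # Don't fall back to the local kubeconfig; use the credentials above.
  load_config_file = false
}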

@bohdantverdyi
Member

bohdantverdyi commented Mar 23, 2022

Anyway, the next apply was smooth. I think the problem was with nodes not being ready, because the kubectl ClusterIssuer manifest will not deploy if the cert-manager pods are not ready.
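
A sketch of one way to avoid that race on the very first apply: make the module wait for the node pool explicitly (google_container_node_pool.this is an assumed resource name, not taken from the config above; adjust to your setup):

module "cert_manager" {
  source = "terraform-iaac/cert-manager/kubernetes"

  cluster_issuer_email                   = "[email protected]"
  cluster_issuer_name                    = "cert-manager-global"
  cluster_issuer_private_key_secret_name = "cert-manager-private-key"

  # Hold back the cert-manager install until the nodes actually exist,
  # so the ClusterIssuer isn't applied against a cluster with no ready nodes.
  depends_on = [google_container_node_pool.this]
}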

@benbonnet

OK, my bad; I confused the kubectl and kubernetes providers.
I will retry from scratch with all of them configured.
