
When kubernetes_manifest is used, kubernetes provider config is invalid #1453

Closed

cuttingedge1109 opened this issue Oct 12, 2021 · 13 comments

@cuttingedge1109
Terraform Version, Provider Version and Kubernetes Version

Terraform version: 0.15.4 (I use terraform cloud)
Kubernetes provider version: 2.5.0 (Same result for 2.4.0 and 2.3.2)
Kubernetes version: 1.20.2

Affected Resource(s)

All resources created by kubernetes_manifest

Terraform Configuration Files

terraform {
  backend "remote" {
    organization = "xxx"

    workspaces {
      name = "kubernetes-test"
    }
  }

  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.5.0"
    }
  }
}

provider "kubernetes" {
  experiments {
    manifest_resource = true
  }
  host                   = var.KUBE_HOST
  cluster_ca_certificate = base64decode(var.kube_cluster_ca_cert_data)
  client_key             = base64decode(var.kube_client_key_data)
  client_certificate     = base64decode(var.kube_client_cert_data)
}

resource "kubernetes_manifest" "test" {
  manifest = {
    "apiVersion" = "monitoring.coreos.com/v1"
    "kind"       = "PodMonitor"

    "metadata" = {
      "name"      = "test"
      "namespace" = "monitoring"
    }
    "podMetricsEndpoints" = [
      {
        "interval" = "60s"
        "path"     = "metrics/"
        "port"     = "metrics"
      }
    ]
    "selector" = {
      "matchLabels" = {
        "app.kubernetes.io/component" = "test"
        "app.kubernetes.io/name"      = "test"
      }
    }
  }
}

Debug Output

Panic Output

Steps to Reproduce

terraform plan

Expected Behavior

Plan the PodMonitor without error

Actual Behavior

 Error: Failed to construct REST client
with kubernetes_manifest.test
cannot create REST client: no client config

Invalid attribute in provider configuration
with provider["registry.terraform.io/hashicorp/kubernetes"] in provider "kubernetes":
provider "kubernetes" {
'client_certificate' is not a valid PEM encoded certificate

Invalid attribute in provider configuration
with provider["registry.terraform.io/hashicorp/kubernetes"] in provider "kubernetes":
provider "kubernetes" {
'cluster_ca_certificate' is not a valid PEM encoded certificate

Invalid attribute in provider configuration
with provider["registry.terraform.io/hashicorp/kubernetes"] in provider "kubernetes":
provider "kubernetes" {
'client_key' is not a valid PEM encoded certificate

Important Factoids

If I remove kubernetes_manifest resource, it works.

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@alexsomesan
Member

@cuttingedge1109 Can you please share a bit more detail about how the variables used in the provider configuration are set? Also, please include the variable declarations.
Is this part of a module? If so, please share the module invocation block too.

The end goal here is to determine how the values for those variables are produced.

@flovouin

flovouin commented Mar 7, 2022

I stumbled upon this issue while looking for an issue similar to hashicorp/terraform-provider-kubernetes-alpha#217 in this repository, now that the kubernetes_manifest resource has been merged into the main provider.

I'm encountering the same error when using the outputs of a google_container_cluster as the configuration for the kubernetes provider. If the GKE cluster is not already up and running, defining/refreshing the state of the kubernetes_manifest (and any other action on it) will fail. To be clear, this is not the case with any other resource of the kubernetes provider.

I can't be sure that's exactly the same usage as the OP, but it seemed similar to this issue.
I'm using version 2.8.0 of the provider with terraform 1.0.3.

EDIT: Maybe my issue is actually closer to #1391.
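For reference, the pattern described above typically looks something like the sketch below (the resource name "primary" and data source are placeholders, not from this thread); the point is that host, token, and cluster_ca_certificate are unknown until the GKE cluster actually exists, which is exactly when kubernetes_manifest starts failing:

data "google_client_config" "default" {}

provider "kubernetes" {
  # "primary" is a placeholder name for the google_container_cluster resource
  host                   = "https://${google_container_cluster.primary.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)
}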

@Lincon-Freitas

Did you guys manage to get it working?

@Dniwdeus

Same here … the workaround we are currently using is the kubectl_manifest resource from the gavinbunney/kubectl provider. It needs a bit of a rewrite, though … using the yamlencode function, it would look something like this:

resource "kubectl_manifest" "keycloak_db" {
  yaml_body = yamlencode({
    apiVersion = "myapi/v1"
    kind       = "myservice"
    metadata = {
      labels = {
        team = terraform.workspace
      }
      name      = "${terraform.workspace}-myapp"
      namespace = var.namespace
    }
    spec = {
      […]
    }
      resources = {
        limits = {
          cpu    = "500m"
          memory = "500Mi"
        }
        requests = {
          cpu    = "100m"
          memory = "100Mi"
        }
      }
      teamId = terraform.workspace
      volume = {
        size = "10Gi"
      }
    }
  })
}
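For completeness, the kubectl provider is a separate provider and has to be declared alongside the kubernetes one; a minimal sketch (the version constraint is only illustrative):

terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14"
    }
  }
}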

@loeffel-io

Same problem here

@litan1106

same issue

@drornir

drornir commented Jul 13, 2022

Same issue, but I have a running EKS cluster that it is failing against. Is this a known bug?

@wsalles

wsalles commented Jul 14, 2022

Same issue as well…
If I already have an EKS cluster created, it works! But if I'm creating it from scratch, it doesn't work!


Thanks @Dniwdeus, it worked for me!
But as you said, it's a workaround until someone finds a proper solution.
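For anyone hitting this with EKS specifically: a common way to wire the provider to a cluster created in the same configuration is exec-based authentication (sketch below, where aws_eks_cluster.this is a placeholder resource name). Note that this only avoids stale credentials; it does not remove the requirement that the cluster exist at plan time for kubernetes_manifest.

provider "kubernetes" {
  host                   = aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.this.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.this.name]
  }
}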

@jeroenj

jeroenj commented Sep 13, 2022

@alexsomesan why was this closed (as completed)? Version 2.13.1 still has the issue and it doesn't look like the changes on the main branch since then contain a fix for that either: v2.13.1...48d1f35.

@taliesins

Hey guys, we stumbled into the same kind of issue with Terraform and Kubernetes for the fourth time. The realisation we came to is that Terraform is not well suited to systems that rely on eventual consistency. So the approach we are taking is to use a tool that can be configured with Terraform, and that tool then deals with Kubernetes' eventual consistency. Even if you manage to cobble together a working deployment using sleeps and other tricks, as soon as it comes to decommissioning the resources you are in real trouble.

So what we have done is leverage the approach of https://github.com/aws-ia/terraform-aws-eks-blueprints. This is basically Terraform configuring Argo CD, and then Argo CD configuring the Kubernetes resources.

In our case we wrap the resources up in a Helm chart and deploy them through an Argo CD Application resource. We then use an app-of-apps Helm chart to orchestrate all the Argo CD Application resources and install that chart from Terraform.
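For illustration, the Terraform side of that approach can be as small as installing Argo CD and a single app-of-apps chart via the helm provider; everything else is then reconciled by Argo CD. The app-of-apps chart name and local path below are assumptions, not part of the blueprint linked above:

resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argocd"
  create_namespace = true
}

resource "helm_release" "app_of_apps" {
  # hypothetical in-house chart wrapping the Argo CD Application resources
  name      = "app-of-apps"
  chart     = "./charts/app-of-apps"
  namespace = "argocd"

  depends_on = [helm_release.argocd]
}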

@iBrandyJackson
Member

Hi all - we closed this issue because it has become a catch-all for a multitude of different related issues. If you run into further related problems, please open a new issue outlining the specifics and we will review them individually.

@jeroenj

jeroenj commented Sep 21, 2022

For this case I'd say all the related issues mentioned describe the same root cause (the kubernetes_manifest resource not handling the case where the cluster it gets created in does not yet exist, unlike the other kubernetes_* resources).

That said, this issue was a duplicate of #1391 anyway.

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Oct 22, 2022