
Kubernetes provider does not respect data when kubernetes_manifest is used #1391

Open

okgolove opened this issue Aug 31, 2021 · 38 comments

Labels: acknowledged (Issue has undergone initial review and is in our work queue.), bug, manifest, progressive apply, upstream-terraform

Comments

@okgolove

Terraform Version, Provider Version and Kubernetes Version

Terraform version: v1.0.5
Kubernetes provider version: v2.4.1
Kubernetes version: 1.20.8-gke.900

Affected Resource(s)

  • kubernetes_manifest

Terraform Configuration Files

data "google_client_config" "this" {}

data "google_container_cluster" "this" {
  name     = "my-cluster"
  location = "europe-west2"
  project  = "my-project"
}

provider "kubernetes" {
  token                  = data.google_client_config.this.access_token
  host                   = data.google_container_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.google_container_cluster.this.master_auth.0.cluster_ca_certificate)

  experiments {
    manifest_resource = true
  }
}

resource "kubernetes_manifest" "test-crd" {
  manifest = {
    apiVersion = "apiextensions.k8s.io/v1"
    kind       = "CustomResourceDefinition"

    metadata = {
      name = "testcrds.hashicorp.com"
    }

    spec = {
      group = "hashicorp.com"

      names = {
        kind   = "TestCrd"
        plural = "testcrds"
      }

      scope = "Namespaced"

      versions = [{
        name    = "v1"
        served  = true
        storage = true
        schema = {
          openAPIV3Schema = {
            type = "object"
            properties = {
              data = {
                type = "string"
              }
              refs = {
                type = "number"
              }
            }
          }
        }
      }]
    }
  }
}

Debug Output

The debug log contains lots of private information, so I'd prefer not to post it.

Steps to Reproduce

  1. terraform apply

Expected Behavior

A plan is presented and, after apply, the CRD is created successfully.

Actual Behavior

Error:

Invalid attribute in provider configuration

  with provider["registry.terraform.io/hashicorp/kubernetes"],
  on main.tf line 9, in provider "kubernetes":
   9: provider "kubernetes" {

'host' is not a valid URL

╷
│ Error: Failed to construct REST client
│
│   with kubernetes_manifest.test-crd,
│   on main.tf line 19, in resource "kubernetes_manifest" "test-crd":
│   19: resource "kubernetes_manifest" "test-crd" {
│
│ cannot create REST client: no client config

Important Factoids

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@okgolove okgolove added the bug label Aug 31, 2021
@Jasstkn

Jasstkn commented Aug 31, 2021

Hi. Same issue

@sagikazarmark

It doesn't work with depends_on either.

@ashtonian

I started running into the following error on destroy, which I think is related; it didn't work with tostring() either:

│ Error: Provider configuration: failed to assert type of element in 'args' value
│
│   with module.services_tools.provider["registry.terraform.io/hashicorp/kubernetes"],
│   on ../../modules/services_tools/versions.tf line 23, in provider "kubernetes":
│   23: provider "kubernetes" {
// this is required in order to pass information to the underlying kube provider for the EKS cluster above; see https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1280
provider "kubernetes" {
  experiments {
    manifest_resource = true
  }
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
    command     = "aws"
  }
}

@nyurik

nyurik commented Oct 11, 2021

Same error when using GCP and applying multiple manifests from the same file: Error: Failed to construct REST client

  • Terraform 1.0.8
  • kubernetes provider 2.5.0
data "google_client_config" "current" {}

data "google_container_cluster" "cluster" {
  name     = var.cluster_name
  location = var.cluster_location
}

provider "kubernetes" {
  host = data.google_container_cluster.cluster.endpoint

  client_certificate     = base64decode(data.google_container_cluster.cluster.master_auth.0.client_certificate)
  client_key             = base64decode(data.google_container_cluster.cluster.master_auth.0.client_key)
  cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth.0.cluster_ca_certificate)

  token = data.google_client_config.current.access_token

  experiments {
    manifest_resource = true
  }
}

resource "kubernetes_manifest" "default" {
  # Create a map { "kind--name" => yaml_doc } from the multi-document yaml text.
  # Each element is a separate kubernetes resource.
  # Must use \n---\n to avoid splitting on strings and comments containing "---".
  # YAML allows "---" to be the first and last line of a file, so make sure
  # raw yaml begins and ends with a newline.
  # The "---" can be followed by spaces, so need to remove those too.
  # Skip blocks that are empty or comments-only in case yaml began with a comment before "---".
  for_each = {
    for value in [
      for yaml in split(
        "\n---\n",
        "\n${replace(file("manifests.yaml"), "/(?m)^---[[:blank:]]+$/", "---")}\n"
      ) :
      yamldecode(yaml)
      if trimspace(replace(yaml, "/(?m)(^[[:blank:]]*(#.*)?$)+/", "")) != ""
    ] : "${value["kind"]}--${value["metadata"]["name"]}" => value
  }
  manifest = each.value
}

@rvillane

When using kubernetes provider v2.6.1 and terraform v1.x.x, the error shown is the following:

Invalid attribute in provider configuration

  with provider["registry.terraform.io/hashicorp/kubernetes"],
  on provider.tf line 24, in provider "kubernetes":
  24: provider "kubernetes" {

'host' is not a valid URL

@tclift

tclift commented Nov 1, 2021

The error:

'host' is not a valid URL

is likely because:

host = data.google_container_cluster.this.endpoint

should have been (as per #1468):

host = "https://${data.google_container_cluster.this.endpoint}"

but:

cannot create REST client: no client config

is happening for me despite host being a URL, and I'm not sure where to look next to diagnose.
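
For reference, a minimal sketch of the original report's provider block with the URL fix applied (assuming the same GKE data sources as in the issue description):

data "google_client_config" "this" {}

data "google_container_cluster" "this" {
  name     = "my-cluster"
  location = "europe-west2"
  project  = "my-project"
}

provider "kubernetes" {
  # Prefix the bare endpoint with a scheme so it parses as a URL.
  host                   = "https://${data.google_container_cluster.this.endpoint}"
  token                  = data.google_client_config.this.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.this.master_auth.0.cluster_ca_certificate)
}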

Edit:
Seen in logs (TF_LOG=TRACE terraform apply):

2021-11-01T17:16:22.257+1100 [DEBUG] provider.terraform-provider-kubernetes_v2.6.1_x5: 2021-11-01T17:16:22.256+1100 [ERROR] [Configure]: Failed to load config:="&{0xc001212820 0xc0007e6fc0 <nil> 0xc000176c00 {0 0} 0xc001211f30}"

so it looks like this code path is being taken. I noted the comment:

// this is a terrible fix for if the configuration is a calculated value

so perhaps clientConfig is expected to be populated elsewhere, later on...

@tclift

tclift commented Nov 3, 2021

This may have been evident from the issue title, but those looking for a workaround can remove dynamic/data values from the provider configuration.

E.g., given a suitably configured kubectl environment, replacing:

provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.default.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.default.master_auth.0.cluster_ca_certificate)
}

with:

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "gke_my-project_my-region_my-cluster"
}
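
One tradeoff worth noting: with this approach the kubeconfig has to exist and be current on whatever machine runs Terraform, so it may not suit CI pipelines that relied on deriving credentials from data sources.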

@ismailyenigul

Getting Failed to construct REST client when I try to deploy an Argo CD app on a non-existent EKS cluster.
It works fine on a running EKS cluster, though.

│ Error: Failed to construct REST client
│ 
│   with module.argocd_application_gitops.kubernetes_manifest.argo_application,
│   on .terraform/modules/argocd_application_gitops/main.tf line 1, in resource "kubernetes_manifest" "argo_application":
│    1: resource "kubernetes_manifest" "argo_application" {

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}


data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}


provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}


provider "helm" {

  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}


module "eks" {

...
}

module "argocd_application_gitops" {

  depends_on = [module.vpc, module.eks, module.eks_services]
  source     = "project-octal/argocd-application/kubernetes"
  version    = "2.0.0"

  argocd_namespace    = var.argocd_k8s_namespace
  destination_server  = "https://kubernetes.default.svc"
  project             = var.argocd_project_name
  name                = "gitops"
  namespace           = "myns"
  repo_url            = var.argocd_root_gitops_url
  path                = "Chart"
  chart               = ""
  target_revision     = "master"
  automated_self_heal = true
  automated_prune     = true
}

@vasylenko

Apparently, the helm provider (when configured in the same way) does not have this issue. So I can have the helm resources described in TF when the cluster does not exist. But I can't have the k8s manifest TF code in the project until the cluster is created.

It would be great to see the issue with Failed to construct REST client for the Kubernetes provider solved soon! 🤞

@barantomasz83

Same problem with cert-manager:

Error: Failed to construct REST client

│ with module.eks_cluster_first.module.cert_manager.kubernetes_manifest.cluster_issuer_selfsigned,
│ on modules\cert_manager\cert_manager.tf line 89, in resource "kubernetes_manifest" "cluster_issuer_selfsigned":
│ 89: resource "kubernetes_manifest" "cluster_issuer_selfsigned" {

│ cannot create REST client: no client config

@sidh

sidh commented Jan 17, 2022

Same issue here. Serious blocker for us. :(

@DrEsteban

Still seeing this on provider version 2.10.0

@edlevin6612

I ended up moving my kubernetes_manifest resources to another Terraform project invoked after the cluster is created, but it's definitely not ideal.

@SizZiKe

SizZiKe commented Jun 3, 2022

How is this still an issue? Still affected.

@FR-Solution

The problem is still present; a big request to fix it.

@luis-guimaraes-exoawk

Still an issue, please fix this

@manan

manan commented Oct 24, 2022

+1

@chengleqi

Same here.

@5imun

5imun commented Nov 8, 2022

+1, this is a significant problem.

@odee30

odee30 commented Dec 18, 2022

+1. This even occurs if I run a plan using -target to deploy the cluster first.

@nagidocs

nagidocs commented Jan 5, 2023

Still an issue with terraform plan when the cluster is not yet present!

@tungavaso

Same here.

@rpressiani

+1

@Lazzu

Lazzu commented Feb 16, 2023

I have this issue as well

@vespian

vespian commented Feb 28, 2023

Same here, 1.5 years and counting.

@amreshh

amreshh commented Apr 2, 2023

Also running into this issue. Since I have a custom resource, I want to use the kubernetes_manifest resource; however, according to the documentation:

This resource requires API access during planning time. This means the cluster has to be accessible at plan time and thus cannot be created in the same apply operation.

@chudyandrej

+1

@hguermeur

Same issue here:
Error: Failed to construct REST client
and
cannot create REST client: no client config

@caracostea

Same...

Failed to construct REST client

cannot create REST client: no client config

@marcinprzybysz86

Still an issue!
I cannot create AWS infra and all related resources in a new, empty account because the EKS cluster does not yet exist, even though I have dependencies.
That's silly!

@schoenenberg

I don't want to post another +1 here, but I do have the same issue when trying to deploy a cert-manager Issuer.

How can we get the attention of the maintainers here? This issue has been open for almost two years and affects many users.

@MonicaMagoniCom

I'm experiencing the same issue, and also many others related to the Kubernetes provider :(

@luigi-bitonti

@jrhouston can you help us with this issue?

@dmajano

dmajano commented Dec 27, 2023

+1

@jackspirou

Still an issue, +1.

@alexsomesan
Member

The kubernetes_manifest resource requires the cluster to be present when planning such resources. Because of this, applying the cluster and kubernetes_manifest resources in the same Terraform run is not supported at the moment.

This is documented in the "before you use" section of the resource documentation.

We are exploring solutions to this, but they require changes to Terraform itself and the underlying provider SDKs so we can't anticipate when one will become available.

The recommendation remains to split the configuration into two apply operations: a first one to create the cluster and its infrastructure, and a second one to create the Kubernetes resources.
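
A minimal sketch of that split, assuming a GCS state backend and the GKE data sources from the original report (the directory layout, bucket name, and output names are illustrative):

# Stage 1 (cluster/): creates the cluster and exports its connection details.
output "endpoint" {
  value = google_container_cluster.this.endpoint
}

output "cluster_ca_certificate" {
  value = google_container_cluster.this.master_auth[0].cluster_ca_certificate
}

# Stage 2 (k8s/): applied only after stage 1 has completed; reads those
# outputs and holds the kubernetes_manifest resources.
data "terraform_remote_state" "cluster" {
  backend = "gcs"
  config = {
    bucket = "my-tf-state" # illustrative bucket name
    prefix = "cluster"
  }
}

data "google_client_config" "this" {}

provider "kubernetes" {
  host                   = "https://${data.terraform_remote_state.cluster.outputs.endpoint}"
  token                  = data.google_client_config.this.access_token
  cluster_ca_certificate = base64decode(data.terraform_remote_state.cluster.outputs.cluster_ca_certificate)
}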

AndreasZeissner added a commit to wundergraph/terraform-provider-cosmo that referenced this issue Oct 14, 2024
the kubernetes provider might fail on building the rest client for k8s
under various circumstances e.g. hashicorp/terraform-provider-kubernetes#1391
@autarchprinceps

But why does this work with non-manifest resources then? They can be created in the same apply, while setting up the provider from module outputs or the like. If this were a fundamental issue of not being able to set up the provider from settings only known after applying resources, they would be just as broken.
Obligatory "still a massive issue, please fix".

@ryanpeach

The kubernetes_manifest resource requires the cluster to be present when planning such resources. Because of this, applying the cluster and kubernetes_manifest resources in the same Terraform run is not supported at the moment.

This is documented in the "before you use" section of the resource documentation.

We are exploring solutions to this, but they require changes to Terraform itself and the underlying provider SDKs so we can't anticipate when one will become available.

The recommendation remains to split the configuration into two apply operations: a first one to create the cluster and its infrastructure, and a second one to create the Kubernetes resources.

My company has a policy of never doing multi-apply root configs, and I agree with it.
