
kubernetes_manifest does not respect provider dependencies #1732

Closed
SizZiKe opened this issue Jun 3, 2022 · 9 comments

Comments


SizZiKe commented Jun 3, 2022

Terraform Version, Provider Version and Kubernetes Version

Terraform version: 1.2.1
Kubernetes provider version: 2.11.0
Kubernetes version:  1.22

Terraform Configuration Files

As an example, the provider below is instantiated with values that only become available after the cluster is created. Thus, the kubernetes_manifest resource should not attempt to contact the cluster until the cluster exists.

provider "kubernetes" {
  alias                  = "euc1"
  host                   = module.euc1[0].cluster_auth.host
  cluster_ca_certificate = module.euc1[0].cluster_auth.cluster_ca_certificate
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    args        = ["eks", "get-token", "--cluster-name", module.euc1[0].cluster_auth.cluster_name]
    command     = "aws"
  }
}
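For context, the module internals are not shown in the issue; a hypothetical shape of the module.euc1[0].cluster_auth output might look like the sketch below (the resource name aws_eks_cluster.this is illustrative). The point is that none of these values are known until the cluster has actually been created.

# Hypothetical output inside the "euc1" module; the actual module internals
# are not shown in the issue. All of these values are unknown until the
# aws_eks_cluster resource has been applied.
output "cluster_auth" {
  value = {
    host                   = aws_eks_cluster.this.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.this.certificate_authority[0].data)
    cluster_name           = aws_eks_cluster.this.name
  }
}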

Affected Resource(s)

  • kubernetes_manifest

Steps to Reproduce

  1. Create an EKS cluster (or another cloud-based Kubernetes cluster) and apply any kubernetes_manifest resource to that cluster in the same Terraform project.
  2. Watch the provider fail during planning because the k8s cluster has not been created yet.
  3. If you first apply with -target to create the k8s cluster, and then run a full terraform apply, everything works normally.

Expected Behavior

The provider should recognize that it needs to wait until the cluster is created before attempting to plan the kubernetes_manifest resource.

Actual Behavior

Error on plan (zero resources exist before the plan):

│ Error: Failed to construct REST client
│ 
│   with module.euc1[0].module.metrics.kubernetes_manifest.securitygrouppolicy_prometheus_kube_state_metrics,
│   on modules/metrics/security_group_policies.tf line 49, in resource "kubernetes_manifest" "securitygrouppolicy_prometheus_kube_state_metrics":
│   49: resource "kubernetes_manifest" "securitygrouppolicy_prometheus_kube_state_metrics" {
│ 
│ cannot create REST client: no client config

Important Factoids

This only occurs when a cloud-based k8s cluster is being created in the same run in which a kubernetes_manifest is applied to that cluster.

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

sfozz commented Jun 14, 2022

This was working with version 2.7.1, but it is broken in 2.10.0 and 2.11.0.

@gwilson185

Using the config below, I also see the error with the kubernetes_manifest resource:

Terraform: 1.1.7
Kubernetes provider: 2.7.1 & 2.11.0 (tested with both).

@kernel164

I don't know why this hasn't been taken care of until now. It's very basic dependency management, which Terraform is normally good at. It's a blocker for our infra automation. Please fix this as soon as possible. Thanks.

arybolovlev (Contributor) commented Jun 28, 2022

Hi folks,

Could you please replace client.authentication.k8s.io/v1alpha1 with client.authentication.k8s.io/v1beta1 in the exec block?

provider "kubernetes" {
  ...
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    ...
  }
}

Please let me know if that works. Thank you.
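For reference, applied to the reporter's original configuration this change would look like the sketch below; only the api_version line differs, everything else is as posted in the issue description.

provider "kubernetes" {
  alias                  = "euc1"
  host                   = module.euc1[0].cluster_auth.host
  cluster_ca_certificate = module.euc1[0].cluster_auth.cluster_ca_certificate
  exec {
    # v1beta1 instead of the deprecated v1alpha1
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--cluster-name", module.euc1[0].cluster_auth.cluster_name]
    command     = "aws"
  }
}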

@kernel164

We don't use the exec block. We use a token directly, as shown below (in Terraform Cloud):

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  token                  = data.aws_eks_cluster_auth.cluster.token
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
}
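For completeness, the data sources referenced above would typically be defined along the lines of the sketch below; the cluster name is illustrative and was not included in the comment.

# Hypothetical definitions of the data sources referenced in the provider
# block above; the cluster name is illustrative.
data "aws_eks_cluster" "cluster" {
  name = "my-cluster"
}

data "aws_eks_cluster_auth" "cluster" {
  name = "my-cluster"
}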


marksumm commented Aug 4, 2022

@arybolovlev That doesn't fix it.

@alexsomesan (Member)

Creating the cluster and kubernetes_manifest resources in the same apply operation is not supported, because the provider needs to access the cluster API during the planning phase, which means the cluster must already be available.
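One arrangement consistent with this constraint is to manage the cluster in one Terraform configuration (and state) and the Kubernetes objects in a second, separate configuration that only reads the already-existing cluster. A minimal sketch of that second configuration, assuming an EKS cluster and purely illustrative names, could look like this:

# Separate configuration, applied after the cluster already exists in another
# state. Because the cluster API is reachable, the provider can build its REST
# client during plan. All names are illustrative.
data "aws_eks_cluster" "this" {
  name = "my-cluster"
}

data "aws_eks_cluster_auth" "this" {
  name = "my-cluster"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  token                  = data.aws_eks_cluster_auth.this.token
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
}

resource "kubernetes_manifest" "example" {
  manifest = {
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "example"
      namespace = "default"
    }
    data = {
      example = "value"
    }
  }
}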


minherz commented May 15, 2023

@alexsomesan maybe you can elaborate on that "need". It would be very useful to enable kubernetes_manifest to support late initialization of the provider, the way it works with, for example, the kubernetes_service resource.


palmobar commented Nov 6, 2023

@alexsomesan I'm not sure why this ticket is closed. The provider should respect the dependencies; it fails even when the cluster exists.

github-actions bot locked as resolved and limited conversation to collaborators Nov 6, 2024