
Terraform kubernetes provider unable to evaluate google_container_cluster data source #30929

Closed
itsavvy-ankur opened this issue Apr 25, 2022 · 3 comments

@itsavvy-ankur

Given a Terraform configuration for a Composer v2 instance,
when the GKE cluster has already been created,
and you define a data source to read the cluster configuration,
and a kubernetes provider to inject a secret into the cluster,
then the kubernetes provider fails to evaluate the data source configuration unless the cluster name is hardcoded.

Terraform Version

terraform -v
Terraform v1.1.5
on linux_arm64

Terraform Configuration Files

resource "google_composer_environment" "composer" {
.....
}

data "google_container_cluster" "airflow-cluster" {

  /*
  We need the cluster name, so split the fully qualified cluster path
  and take the last element.
  */
  # name = "<<redacted_cluster_name>>"   <-- This works if the name is statically set!
  name = element(
    split("/", google_composer_environment.composer.config[0].gke_cluster), 5
  )
  project  = var.project_id
  location = var.region
}

data "google_client_config" "provider" {}

resource "kubernetes_secret" "bigquery_operator_sa" {
...secret definition here
}

provider "kubernetes" {
  host  = "https://${data.google_container_cluster.airflow-cluster.endpoint}"
  token = data.google_client_config.provider.access_token
  cluster_ca_certificate = base64decode(
    data.google_container_cluster.airflow-cluster.master_auth[0].cluster_ca_certificate
  )
}

Debug Output

 Error: Get "http://localhost/api/v1/namespaces/default/secrets/service-account": dial tcp 127.0.0.1:80: connect: connection refused
│ 
│   with kubernetes_secret.bigquery_operator_sa,
│   on main.tf line 159, in resource "kubernetes_secret" "bigquery_operator_sa":
│  159: resource "kubernetes_secret" "bigquery_operator_sa" {

Expected Behavior

The kubernetes provider should read the cluster details from the google_container_cluster data source.

Actual Behavior

Terraform errors out on the kubernetes provider because it is unable to fetch the correct cluster details and instead tries to connect to localhost.

Steps to Reproduce

Additional Context

I have verified that the data source configuration is correct by writing its result to an output variable and by statically typing the cluster name. The issue only appears when the cluster name is derived dynamically in the data source.
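For reference, a minimal sketch of the kind of output I used to check the split expression (the output name here is illustrative, not part of the original config):

output "debug_cluster_name" {
  value = element(
    split("/", google_composer_environment.composer.config[0].gke_cluster), 5
  )
}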

References

Could this be similar to hashicorp/terraform-provider-kubernetes#1028 ?

itsavvy-ankur added the bug and new (new issue not yet triaged) labels on Apr 25, 2022
jbardin (Member) commented Apr 25, 2022

Hi @itsavvy-ankur,

I suspect the kubernetes provider in this case is attempting to connect in order to refresh or plan a resource, but the data.google_container_cluster.airflow-cluster data source cannot be read during the plan. There's not enough information here to accurately diagnose the problem, but since this looks like a root module, my best guess is that there is a change to google_composer_environment preventing the data source from being read until that change is complete.

If the google_composer_environment.composer.config[0].gke_cluster is known at plan-time, then you can remove one of the reasons the data source will be deferred by referencing that value indirectly through a local value (see https://www.terraform.io/language/data-sources#data-resource-dependencies).
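For example, a rough sketch of that indirection, reusing the names from your configuration above (this is only a sketch; it does not change whether gke_cluster is known at plan time):

locals {
  airflow_cluster_name = element(
    split("/", google_composer_environment.composer.config[0].gke_cluster), 5
  )
}

data "google_container_cluster" "airflow-cluster" {
  name     = local.airflow_cluster_name
  project  = var.project_id
  location = var.region
}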

We use GitHub issues for tracking bugs and enhancements, rather than for questions. While we can sometimes help with certain simple problems here, it's better to use the community forum where there are more people ready to help.

Thanks!

jbardin closed this as completed on Apr 25, 2022
itsavvy-ankur (Author) commented

Thank you @jbardin for your comments on this. I was able to resolve it by separating the dependency between the creation of the Composer environment and the Kubernetes secret, and by passing the cluster name as an output instead of reading it in the data source from a computed value.

The docs here also help validate the behaviour https://www.terraform.io/language/data-sources#data-source-lifecycle
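For anyone landing here later, a rough sketch of what I mean (the split into two configurations and the variable/output names are illustrative, not my exact code):

# In the configuration that creates the Composer environment,
# expose the derived cluster name as an output:
output "airflow_cluster_name" {
  value = element(
    split("/", google_composer_environment.composer.config[0].gke_cluster), 5
  )
}

# In the separate configuration that manages the Kubernetes secret,
# the cluster name arrives as a plain input rather than a computed value:
variable "airflow_cluster_name" {
  type = string
}

data "google_container_cluster" "airflow-cluster" {
  name     = var.airflow_cluster_name
  project  = var.project_id
  location = var.region
}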

crw added the question label and removed the bug and new (new issue not yet triaged) labels on Apr 27, 2022
github-actions bot commented

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators on May 28, 2022