
terraform refresh attempts to dial localhost #546

Closed
ghost opened this issue Jul 11, 2019 · 20 comments

@ghost commented Jul 11, 2019

This issue was originally opened by @swtch1 as hashicorp/terraform#22024. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

$ terraform -v
Terraform v0.12.3
+ provider.google v2.10.0
+ provider.google-beta v2.10.0
+ provider.kubernetes v1.7.0

Terraform Configuration Files

../modules/gke_cluster

resource "google_container_cluster" "gke_cluster" {
  provider                 = "google-beta"
  project                  = var.project_id
  name                     = var.name
  description              = var.description
  location                 = var.location
  network                  = var.network
  subnetwork               = var.subnetwork
  cluster_ipv4_cidr        = var.cluster_ipv4_cidr
  logging_service          = "logging.googleapis.com/kubernetes"
  monitoring_service       = "monitoring.googleapis.com/kubernetes"
  remove_default_node_pool = true
  initial_node_count       = var.initial_node_count
  master_authorized_networks_config {
    cidr_blocks {
      cidr_block   = "207.11.1.0/24"
      display_name = "SSC Web-Proxies"
    }
    cidr_blocks {
      cidr_block   = "207.11.39.0/24"
      display_name = "ATC Web-Proxies"
    }
    cidr_blocks {
      cidr_block   = "207.11.113.0/24"
      display_name = "SSC NAT Range"
    }
    cidr_blocks {
      cidr_block   = "165.130.255.119/32"
      display_name = "QA Web-Proxy"
    }
  }
  maintenance_policy {
    daily_maintenance_window {
      # Time Specified in UTC. EDT=UTC-4, EST=UTC-5 
      start_time = "07:00"
    }
  }
  ip_allocation_policy {
    use_ip_aliases = true
  }
  private_cluster_config {
    enable_private_nodes   = var.enable_private_nodes
  }
}

resource "google_container_node_pool" "default-pool" {
  name     = "default-pool"
  cluster  = google_container_cluster.gke_cluster.name
  location = var.location
  node_config {
    machine_type = "n1-standard-4"
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]
  }
  initial_node_count = var.initial_node_count
  autoscaling {
    min_node_count = var.default_pool_min_node_count
    max_node_count = var.default_pool_max_node_count
  }
  management {
    auto_repair  = true
    auto_upgrade = true
  }
}

resource "kubernetes_namespace" "namespace" {
  count = length(var.namespaces)
  metadata {
    name   = var.namespaces[count.index].name
    labels = var.namespaces[count.index].labels
  }
}

#Send GKE Logs to BigQuery
resource "google_bigquery_dataset" "dataset" {
  dataset_id  = "GKE_LOGS"
  description = "Dataset used to store GKE Logs"
  location    = "US"
  labels = {
    team    = "sre",
    purpose = "logs"
  }
  access {
    role          = "WRITER"
    user_by_email = "[email protected]"
  }
  lifecycle {
    ignore_changes = [access]
  }
}

resource "google_logging_project_sink" "log_sink" {
  name                   = "gke_logs"
  destination            = "bigquery.googleapis.com/projects/${var.project_id}/datasets/${google_bigquery_dataset.dataset.dataset_id}"
  filter                 = "resource.labels.cluster_name=\"${google_container_cluster.gke_cluster.name}\""
  unique_writer_identity = false
}

resource "google_logging_project_exclusion" "log_exclusion" {
  name        = "gke_logs"
  description = "Exclude all GKE logs"
  filter      = "resource.labels.cluster_name=\"${google_container_cluster.gke_cluster.name}\""
}

variable "project_id" {
  description = "GCP project ID. See all accessible project IDs with `gcloud projects list` (required)"
}

variable "name" {
  description = "(Required) Cluster name. ref: https://www.terraform.io/docs/providers/google/r/container_cluster.html#name"
}

variable "description" {
  description = "Description of the cluster."
}

variable "location" {
  description = "Cluster location. ref: https://www.terraform.io/docs/providers/google/r/container_cluster.html#location.  Use https://cloud.google.com/compute/docs/regions-zones/ to find valid zones."
  default     = "us-east1-b"
}

variable "network" {
  description = "VPC network for the cluster nodes. https://www.terraform.io/docs/providers/google/r/container_cluster.html#network"
  default     = null
}

variable "subnetwork" {
  description = "https://www.terraform.io/docs/providers/google/r/container_cluster.html#subnetwork"
  default     = null
}

variable "cluster_ipv4_cidr" {
  description = "Referenced in the Kubernetes console as 'pod address range.'. https://www.terraform.io/docs/providers/google/r/container_cluster.html#cluster_ipv4_cidr"
  default     = null
}

variable "enable_private_nodes" {
  description = "https://www.terraform.io/docs/providers/google/r/container_cluster.html#enable_private_nodes"
  default     = false
}

variable "initial_node_count" {
  description = "https://www.terraform.io/docs/providers/google/r/container_cluster.html#initial_node_count"
  default     = 1
}

variable "default_pool_min_node_count" { # TODO: this will likely need to be refactored into an object so we can create several node pools
  description = "https://www.terraform.io/docs/providers/google/r/container_node_pool.html#min_node_count"
  default     = 1
}

variable "default_pool_max_node_count" { # TODO: this will likely need to be refactored into an object so we can create several node pools
  description = "https://www.terraform.io/docs/providers/google/r/container_node_pool.html#max_node_count"
  default     = 3
}

variable "namespaces" {
  type = list(object({
    name = string,
    labels = object({
      team    = string,
      purpose = string
    })
  }))
  description = "List of cluster namespaces and associated properties like labels."
  default     = []
}

main.tf

#Variable Declarations
variable "project_id" {
  description = "GCP project ID. See all accessible project IDs with `gcloud projects list` (required)"
  type        = "string"
}

#Resource Definitions
provider "google" {
  version = "~> 2.10.0"
  project = var.project_id
}

provider "google-beta" {
  version = "~> 2.10.0"
  project = var.project_id
}

data "google_client_config" "default" {}

terraform {
  backend "gcs" {
    bucket = "com-tf-state"
    prefix = "np-com-internal" # TODO: this really should be np-com-internal-thd, but this is a breaking change that needs to be specially handled
  }
}

provider "kubernetes" {
  version                = "1.7" # provider version, not Kubernetes version
  host                   = "https://${module.common_gke_cluster.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.common_gke_cluster.cluster_ca_certificate)
  load_config_file       = false
}

module "common_gke_cluster" {
  source                      = "../modules/gke_cluster"
  project_id                  = var.project_id
  name                        = "common-east"
  description                 = "Shared cluster in East region for generalized workloads in lower lifecycle."
  location                    = "us-east1"
  enable_private_nodes        = true
  network                     = "vpc-cassandra"
  subnetwork                  = "cassandra-east-np"
  default_pool_min_node_count = 2
  default_pool_max_node_count = 4
  namespaces = [
    {
      name   = "prometheus",
      labels = { team = "sre", purpose = "application_monitoring" }
    },
    {
      name   = "debug",
      labels = { team = "sre", purpose = "cluster_debugging" }
    },
  ]
}

Debug Output

terraform refresh trace

Expected Behavior

I expected Terraform to refresh the state.

Actual Behavior

Error: Get http://localhost/api/v1/namespaces/prometheus: dial tcp 127.0.0.1:80: connect: connection refused
Error: Get http://localhost/api/v1/namespaces/debug: dial tcp 127.0.0.1:80: connect: connection refused

Steps to Reproduce

terraform refresh

Additional Context

The two resources in the error (/namespaces/prometheus and /namespaces/debug) are namespaces in my Kubernetes cluster.

@tjhiggins

👍

@jsmichaels commented Jul 24, 2019

I'm having a similar issue when trying to terraform import a configmap. It's able to import successfully, but then tries to go to localhost when refreshing the data. Nowhere in any config is localhost specified. Edit: Note that I am able to create configmaps, deployments and other Kubernetes resources without issue.

$ terraform import kubernetes_config_map.kube_dns kube-system/kube-dns
kubernetes_config_map.kube_dns: Importing from ID "kube-system/kube-dns"...
kubernetes_config_map.kube_dns: Import complete!
  Imported kubernetes_config_map
kubernetes_config_map.kube_dns: Refreshing state... [id=kube-system/kube-dns]

Error: Get https://localhost/api/v1/namespaces/kube-system/configmaps/kube-dns: dial tcp [::1]:443: connect: connection refused

@swtch1 commented Aug 5, 2019

After dealing with this error on multiple occasions, I've come to understand it better. At its core this is a dependency issue: when the cluster does not exist, Terraform does not know how to handle the namespace resource, which it cannot refresh. Despite adding a depends_on to the kubernetes_namespace resource (sketched below), I still get this error from time to time when applying changes, especially when those changes mean my cluster must be destroyed. As I understand the documentation, dependencies only affect apply ordering and do not take into account whether the depended-on resource actually exists.
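
A minimal sketch of that depends_on attempt, using the module layout from the original report (the exact arrangement in my configuration may differ slightly):

resource "kubernetes_namespace" "namespace" {
  count = length(var.namespaces)
  # Only influences apply ordering; it does not make Terraform check that the
  # cluster still exists before refreshing the namespaces.
  depends_on = [google_container_cluster.gke_cluster]
  metadata {
    name   = var.namespaces[count.index].name
    labels = var.namespaces[count.index].labels
  }
}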

For those still dealing with this, the workaround I generally apply looks something like this:

$ terraform state list
...
module.common_gke_cluster_east.kubernetes_namespace.namespace[0]
module.common_gke_cluster_east.kubernetes_namespace.namespace[1]
...

$ terraform state rm module.common_gke_cluster_east.kubernetes_namespace.namespace[0]
$ terraform state rm module.common_gke_cluster_east.kubernetes_namespace.namespace[1]
$ terraform apply

Sorry I don't have better information on exactly when this happens vs when it just works.

@paultyng (Contributor) commented Aug 5, 2019

This seems like the upstream progressive apply issue: hashicorp/terraform#4149.

You cannot currently (reliably) chain together a provider's config with the output of a resource.
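
For illustration, the pattern in question, reduced from the reporter's main.tf (a sketch of the anti-pattern, not a recommendation):

provider "kubernetes" {
  # Every value here comes, directly or through module outputs, from resources
  # that may not exist yet or may be unknown during refresh/plan.
  host                   = "https://${module.common_gke_cluster.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.common_gke_cluster.cluster_ca_certificate)
  load_config_file       = false
}

When those values are empty or unknown, the provider is left with its default client configuration, which is why the requests end up at localhost.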

@paultyng closed this as completed Aug 5, 2019
@slancio commented Jan 7, 2020

I'm also getting this issue when trying to import a kubernetes_namespace, and it even happens after hardcoding the host, token, and cluster_ca_certificate values in the kubernetes provider. Is this really related to hashicorp/terraform#4149?

2020-01-07T15:36:51.263-0500 [DEBUG] plugin.terraform-provider-kubernetes_v1.10.0_x4: 2020/01/07 15:36:51 [DEBUG] Enabling HTTP requests/responses tracing
2020/01/07 15:36:51 [TRACE] [walkImport] Exiting eval tree: provider.kubernetes
2020/01/07 15:36:51 [TRACE] vertex "provider.kubernetes": visit complete
2020/01/07 15:36:51 [TRACE] dag/walk: visiting "kubernetes_namespace.this"
2020/01/07 15:36:51 [TRACE] dag/walk: visiting "kubernetes_namespace.this[\"schema\"] (import id \"schema\")"
2020/01/07 15:36:51 [TRACE] vertex "kubernetes_namespace.this": starting visit (*terraform.NodeAbstractResource)
2020/01/07 15:36:51 [TRACE] dag/walk: visiting "kubernetes_storage_class.zonal_ssd"
2020/01/07 15:36:51 [TRACE] vertex "kubernetes_namespace.this[\"schema\"] (import id \"schema\")": starting visit (*terraform.graphNodeImportState)
2020/01/07 15:36:51 [TRACE] vertex "kubernetes_storage_class.zonal_ssd": starting visit (*terraform.NodeAbstractResource)
2020/01/07 15:36:51 [TRACE] vertex "kubernetes_namespace.this": visit complete
2020/01/07 15:36:51 [TRACE] vertex "kubernetes_storage_class.zonal_ssd": visit complete
2020/01/07 15:36:51 [TRACE] vertex "kubernetes_namespace.this[\"schema\"] (import id \"schema\")": evaluating
2020/01/07 15:36:51 [TRACE] [walkImport] Entering eval tree: kubernetes_namespace.this["schema"] (import id "schema")
2020/01/07 15:36:51 [TRACE] dag/walk: visiting "kubernetes_config_map.dns_domains"
2020/01/07 15:36:51 [TRACE] <root>: eval: *terraform.EvalSequence
2020/01/07 15:36:51 [TRACE] vertex "kubernetes_config_map.dns_domains": starting visit (*terraform.NodeAbstractResource)
2020/01/07 15:36:51 [TRACE] <root>: eval: *terraform.EvalGetProvider
2020/01/07 15:36:51 [TRACE] <root>: eval: *terraform.EvalImportState
2020/01/07 15:36:51 [TRACE] vertex "kubernetes_config_map.dns_domains": visit complete
2020/01/07 15:36:51 [TRACE] GRPCProvider: ImportResourceState
kubernetes_namespace.this["schema"]: Importing from ID "schema"...
kubernetes_namespace.this["schema"]: Import prepared!
  Prepared kubernetes_namespace for import
kubernetes_namespace.this["schema"]: Refreshing state... [id=schema]
2020/01/07 15:36:51 [TRACE] EvalImportState: import kubernetes_namespace.this["schema"] "schema" produced instance object of type kubernetes_namespace
2020/01/07 15:36:51 [TRACE] [walkImport] Exiting eval tree: kubernetes_namespace.this["schema"] (import id "schema")
2020/01/07 15:36:51 [TRACE] vertex "kubernetes_namespace.this[\"schema\"] (import id \"schema\")": expanding dynamic subgraph
2020/01/07 15:36:51 [TRACE] vertex "kubernetes_namespace.this[\"schema\"] (import id \"schema\")": entering dynamic subgraph
2020/01/07 15:36:51 [TRACE] dag/walk: updating graph
2020/01/07 15:36:51 [TRACE] dag/walk: added new vertex: "import kubernetes_namespace.this[\"schema\"] result"
2020/01/07 15:36:51 [TRACE] dag/walk: visiting "import kubernetes_namespace.this[\"schema\"] result"
2020/01/07 15:36:51 [TRACE] vertex "import kubernetes_namespace.this[\"schema\"] result": starting visit (*terraform.graphNodeImportStateSub)
2020/01/07 15:36:51 [TRACE] vertex "import kubernetes_namespace.this[\"schema\"] result": evaluating
2020/01/07 15:36:51 [TRACE] [walkImport] Entering eval tree: import kubernetes_namespace.this["schema"] result
2020/01/07 15:36:51 [TRACE] <root>: eval: *terraform.EvalSequence
2020/01/07 15:36:51 [TRACE] <root>: eval: *terraform.EvalGetProvider
2020/01/07 15:36:51 [TRACE] <root>: eval: *terraform.EvalRefresh
2020/01/07 15:36:51 [TRACE] GRPCProvider: ReadResource
2020-01-07T15:36:51.265-0500 [DEBUG] plugin.terraform-provider-kubernetes_v1.10.0_x4: 2020/01/07 15:36:51 [INFO] Checking namespace schema
2020-01-07T15:36:51.265-0500 [DEBUG] plugin.terraform-provider-kubernetes_v1.10.0_x4: 2020/01/07 15:36:51 [DEBUG] Kubernetes API Request Details:
2020-01-07T15:36:51.265-0500 [DEBUG] plugin.terraform-provider-kubernetes_v1.10.0_x4: ---[ REQUEST ]---------------------------------------
2020-01-07T15:36:51.265-0500 [DEBUG] plugin.terraform-provider-kubernetes_v1.10.0_x4: GET /api/v1/namespaces/schema HTTP/1.1
2020-01-07T15:36:51.265-0500 [DEBUG] plugin.terraform-provider-kubernetes_v1.10.0_x4: Host: localhost
2020-01-07T15:36:51.265-0500 [DEBUG] plugin.terraform-provider-kubernetes_v1.10.0_x4: User-Agent: HashiCorp/1.0 Terraform/0.12.18
2020-01-07T15:36:51.265-0500 [DEBUG] plugin.terraform-provider-kubernetes_v1.10.0_x4: Accept: application/json, */*
2020-01-07T15:36:51.265-0500 [DEBUG] plugin.terraform-provider-kubernetes_v1.10.0_x4: Authorization: Bearer <OMITTED>
2020-01-07T15:36:51.265-0500 [DEBUG] plugin.terraform-provider-kubernetes_v1.10.0_x4: Accept-Encoding: gzip
2020-01-07T15:36:51.265-0500 [DEBUG] plugin.terraform-provider-kubernetes_v1.10.0_x4: 
2020-01-07T15:36:51.265-0500 [DEBUG] plugin.terraform-provider-kubernetes_v1.10.0_x4: 
2020-01-07T15:36:51.265-0500 [DEBUG] plugin.terraform-provider-kubernetes_v1.10.0_x4: -----------------------------------------------------
2020-01-07T15:36:51.265-0500 [DEBUG] plugin.terraform-provider-kubernetes_v1.10.0_x4: 2020/01/07 15:36:51 [DEBUG] Received error: &url.Error{Op:"Get", URL:"http://localhost/api/v1/namespaces/schema", Err:(*net.OpError)(0xc000160960)}
2020-01-07T15:36:51.265-0500 [DEBUG] plugin.terraform-provider-kubernetes_v1.10.0_x4: 2020/01/07 15:36:51 [INFO] Namespace schema exists
2020/01/07 15:36:51 [ERROR] <root>: eval: *terraform.EvalRefresh, err: Get http://localhost/api/v1/namespaces/schema: dial tcp [::1]:80: connect: connection refused
2020/01/07 15:36:51 [ERROR] <root>: eval: *terraform.EvalSequence, err: Get http://localhost/api/v1/namespaces/schema: dial tcp [::1]:80: connect: connection refused
2020/01/07 15:36:51 [TRACE] [walkImport] Exiting eval tree: import kubernetes_namespace.this["schema"] result
2020/01/07 15:36:51 [TRACE] vertex "import kubernetes_namespace.this[\"schema\"] result": visit complete
2020/01/07 15:36:51 [TRACE] vertex "kubernetes_namespace.this[\"schema\"] (import id \"schema\")": dynamic subgraph encountered errors
2020/01/07 15:36:51 [TRACE] vertex "kubernetes_namespace.this[\"schema\"] (import id \"schema\")": visit complete
2020/01/07 15:36:51 [TRACE] dag/walk: upstream of "provider.kubernetes (close)" errored, so skipping
2020/01/07 15:36:51 [TRACE] dag/walk: upstream of "root" errored, so skipping

Error: Get http://localhost/api/v1/namespaces/schema: dial tcp [::1]:80: connect: connection refused

@paulalex

@slancio Did you make any progress with this? I am getting the same issue trying to upgrade to v8.0.0 of the eks module.

@slancio commented Jan 16, 2020

@paulalex None at all. We're working around the problem by not importing resources, and by deleting and recreating them via Terraform if we have to.
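
One way to read that in practice, assuming the live object is deleted out-of-band and then recreated by Terraform (the namespace name from the trace above is used purely as an example):

$ kubectl delete namespace schema   # drop the existing object instead of importing it
$ terraform apply                   # let Terraform create and track it from scratch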

@paulalex

@slancio I think mine might be unrelated, since it works if I set load_config_file to true and set KUBECONFIG from the terminal.

Thanks for coming back to me.

@slancio commented Jan 16, 2020

I'd tried setting KUBECONFIG without any luck, but I didn't try the load_config_file flag. Will give that a go.

@hazcod (Contributor) commented Jan 26, 2020

I am having this issue when running Terraform against existing state; I have no kubeconfig on disk.
Suggestions?

@hazcod (Contributor) commented Feb 9, 2020

@paulalex commented Feb 9, 2020

@hazcod I am not really an expert on this, but I was seeing this error when there was no kubeconfig passed to the provider and nothing in ~/.kube; I don't know if this is related to your issue.

I had numerous issues, so I load the kubeconfig by setting load_config_file to true, downloading the kubeconfig file from S3 in my Jenkins build, and exporting KUBECONFIG to point at it.

@hazcod (Contributor) commented Feb 9, 2020

I'm afraid the kubeconfig is passed solely as a Terraform variable; it does not touch disk.

@hazcod (Contributor) commented Feb 10, 2020

To add: I also tried load_config_file = true and config_path = "kubeconfig", with the kubeconfig written out as a resource, but I hit the same issue.
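
A sketch of that attempt, assuming the kubeconfig is written to disk with a local_file resource (the exact resource type in my setup may differ):

resource "local_file" "kubeconfig" {
  content  = var.kubeconfig               # kubeconfig arrives as a Terraform variable
  filename = "${path.module}/kubeconfig"
}

provider "kubernetes" {
  load_config_file = true
  config_path      = local_file.kubeconfig.filename   # still chains provider config to a resource
}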

@paulalex

I didn't try this, but what worked for me was setting load_config_file = true and then exporting KUBECONFIG=my_config_path.
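
A minimal sketch of that setup (the kubeconfig path is a placeholder for wherever the file actually lives):

provider "kubernetes" {
  load_config_file = true   # read cluster credentials from the kubeconfig file instead of provider arguments
}

Then, in the shell (or CI job) before running Terraform:

$ export KUBECONFIG=/path/to/my_config_path
$ terraform plan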

@hazcod (Contributor) commented Feb 10, 2020

So the kubeconfig is already present in the state file and does not exist separately on disk; I'm not sure that approach is applicable here?

@vfiset commented Apr 3, 2020

@hazcod did you end up working around this with a kubeconfig? Looking at your GH Actions, it seems like you are now able to plan. Wondering if you did anything special to make it work?

@hazcod (Contributor) commented Apr 3, 2020

I ended up moving away from the Kubernetes Terraform provider.

@vfiset commented Apr 3, 2020

@hazcod Ok, that sucks. Thanks for coming back to me.

@ghost (Author) commented Apr 21, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost locked and limited conversation to collaborators Apr 21, 2020