cluster_ipv4_cidr value ignored for google_container_cluster #4181

Closed
ghost opened this issue Aug 7, 2019 · 4 comments · Fixed by GoogleCloudPlatform/magic-modules#2164


ghost commented Aug 7, 2019

This issue was originally opened by @guywald as hashicorp/terraform#22366. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform version

Terraform v0.12.6
+ provider.google v2.12.0
+ provider.google-beta v2.12.0
+ provider.null v2.1.2

Expected Behavior
The cluster pod address range should be 10.96.0.0/14.

Actual Behavior
The cluster is created with pod address range 10.12.0.0/14, and the configured cluster_ipv4_cidr value is ignored.

When running terraform apply after a subsequent change, Terraform plans to recreate the cluster:

  # google_container_cluster.primary must be replaced
-/+ resource "google_container_cluster" "primary" {
      ~ additional_zones            = [] -> (known after apply)
      ~ cluster_ipv4_cidr           = "10.12.0.0/14" -> "10.96.0.0/14" # forces replacement
      ~ default_max_pods_per_node   = 110 -> (known after apply)
        description                 = "My GKE Cluster"
...

Plan: 1 to add, 0 to change, 1 to destroy.
...

Cluster resource

data "google_compute_subnetwork" "mysubnet" {
  provider = "google"
  name = "mysubnet"
  region = "us-east4"
}

resource "google_container_cluster" "primary" {
  provider = "google-beta"
  name     = "my-gke-cluster"
  description = "My GKE Cluster"
  location = "us-east4-a"

  cluster_ipv4_cidr = "10.96.0.0/14"

  cluster_autoscaling {
    enabled = false
  }

  remove_default_node_pool = true
  initial_node_count = 1

  ip_allocation_policy {
    use_ip_aliases = true
  }

  master_auth {
    username = ""
    password = ""

    client_certificate_config {
      issue_client_certificate = false
    }
  }

  subnetwork = data.google_compute_subnetwork.mysubnet.name
  enable_binary_authorization = true
}

resource "google_container_node_pool" "primary_preemptible_nodes" {
  name       = "my_node_pool"
  location   = "us-east4-a"
  cluster    = google_container_cluster.primary.name
  node_count = 1

  node_config {
    preemptible  = false
    machine_type = "n1-standard-2"
    metadata = {
      disable-legacy-endpoints = "true"
    }
    service_account = "**<sa>**"
  }
}
@rileykarson (Collaborator)


guywald commented Aug 8, 2019

@rileykarson, no, it works correctly with cluster_ipv4_cidr_block.
Thanks! I'd expect it to fail given this logic, though.
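
A minimal sketch of the working form described here, assuming provider google/google-beta 2.x; the range, subnetwork data source, and resource names are carried over from the original config and trimmed for brevity:

resource "google_container_cluster" "primary" {
  provider = "google-beta"
  name     = "my-gke-cluster"
  location = "us-east4-a"

  remove_default_node_pool = true
  initial_node_count       = 1

  # With an ip_allocation_policy block present, declare the pod range here;
  # the top-level cluster_ipv4_cidr field is effectively ignored.
  ip_allocation_policy {
    use_ip_aliases          = true
    cluster_ipv4_cidr_block = "10.96.0.0/14"
  }

  subnetwork = data.google_compute_subnetwork.mysubnet.name
}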

ghost removed the waiting-response label on Aug 8, 2019
@rileykarson (Collaborator)

I suspect what's happening is that cluster_ipv4_cidr_block supersedes cluster_ipv4_cidr, and having an ip_allocation_policy block defined causes the unset value for cluster_ipv4_cidr_block to supersede the actually set value for cluster_ipv4_cidr.

We don't have a great remediation. I can add a note to the docs, but I'm hesitant to add a plan-time failure when both ip_allocation_policy and cluster_ipv4_cidr are set, because there are cases where users could legitimately have defined both (generally when ip_allocation_policy was added afterwards), and that would break otherwise valid configs.
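
For illustration, a hedged sketch of the kind of otherwise valid config described above (the names and ranges here are hypothetical): cluster_ipv4_cidr was set when the cluster was first defined, ip_allocation_policy was added to the config later, and a hard plan-time error on the combination would now break it:

resource "google_container_cluster" "existing" {
  name               = "legacy-cluster" # hypothetical
  location           = "us-east4-a"
  initial_node_count = 1

  # Pod range the cluster was originally defined with.
  cluster_ipv4_cidr = "10.12.0.0/14"

  # Added to the config later; once this block exists, the unset
  # cluster_ipv4_cidr_block inside it takes precedence and the
  # top-level field above no longer has any effect.
  ip_allocation_policy {
    use_ip_aliases = true
  }
}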

I'd like to remove the old field in 3.0.0; I've filed #4203 for that. I'll use this issue to verify that my assumption above is correct, then deprecate the field and add a note to the docs.


ghost commented Sep 13, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

ghost locked and limited conversation to collaborators on Sep 13, 2019