
Update workload_metadata_config.node_metadata on node pools without force replacement. #4041

sho-abe commented Jul 17, 2019


Description

The Workload Identity settings for google_container_cluster and google_container_node_pool are available in google-beta. However, enabling the Workload Identity setting on an existing cluster or node pool results in "force replacement", so the resource is recreated.

If you modify the Workload Identity settings with gcloud container clusters update and/or gcloud container node-pools update, the cluster and node pool are not recreated, and Pods and Services remain as they are.
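
For reference, the gcloud commands in question look roughly like this (a sketch only; CLUSTER, NODEPOOL, and PROJECT are placeholders, and the --identity-namespace flag reflects the beta gcloud track of the time):

$ gcloud beta container clusters update CLUSTER \
    --identity-namespace=PROJECT.svc.id.goog

$ gcloud beta container node-pools update NODEPOOL --cluster=CLUSTER \
    --workload-metadata-from-node=GKE_METADATA_SERVER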

I would prefer that changing the configurations below not force replacement.

  • workload_identity_config.identity_namespace in google_container_cluster
  • node_config.workload_metadata_config.node_metadata in google_container_node_pool

New or Affected Resource(s)

  • google_container_cluster
  • google_container_node_pool

Potential Terraform Configuration

resource "google_container_cluster" "workload_identity" {
  provider = "google-beta"
  name     = "workload-identity"
  location = "asia-northeast1-b"

  network    = google_compute_network.workload_identity.self_link
  subnetwork = google_compute_subnetwork.workload_identity.self_link

  min_master_version = "1.13.6"
  initial_node_count = 1

  ...(snip)...

  # add the following config
  workload_identity_config {
    identity_namespace = "workload-identity.svc.id.goog"
  }
}

resource "google_container_node_pool" "workload_identity" {
  provider = "google-beta"
  cluster  = google_container_cluster.workload_identity.name
  name     = "workload-identity-pool"

  location   = google_container_cluster.workload_identity.location
  node_count = 1

  node_config {

  ...(snip)...

    # add the following config
    workload_metadata_config {
      node_metadata = "GKE_METADATA_SERVER"
    }
  }
}

The result of terraform plan is as follows.

...snip...
      + workload_identity_config { # forces replacement
          + identity_namespace = "workload-identity.svc.id.goog" # forces replacement
        }

...snip...
          + workload_metadata_config { # forces replacement
              + node_metadata = "GKE_METADATA_SERVER" # forces replacement
            }


@ghost ghost added the enhancement label Jul 17, 2019
@rileykarson (Collaborator)

hashicorp/terraform-provider-google-beta#896 is tackling this; there's a bit of nuance, because changing this value requires modifying certain values in node pools.

rtoma (Contributor) commented Aug 28, 2019

Piggybacking on this issue, because the title covers my finding and the issue is still open. Feel free to ask me to create a separate issue.

--

With hashicorp/terraform-provider-google-beta#896 (we use v2.13.0) it is possible to update the google_container_cluster.NAME.node_config.workload_metadata_config parameter of a cluster resource. This, however, only enables workload identity in new node pools.

When you attempt to upgrade an existing node pool by changing the google_container_node_pool.NAME.node_config.workload_metadata_config.node_metadata parameter, the whole node pool is deleted and recreated:

-/+ resource "google_container_node_pool" "NAME" {
     ~ node_config {
         ~ workload_metadata_config {
             ~ node_metadata = "SECURE" -> "GKE_METADATA_SERVER" # forces replacement
           }
       }
   }

Whereas gcloud can do it as a rolling update:

$ gcloud beta container node-pools update NODEPOOL --cluster=CLUSTER \
  --project=PROJECT --workload-metadata-from-node=GKE_METADATA_SERVER

A rolling reprovisioning of the node pool's nodes is of course preferred, to ensure maximum pod availability.

In the current state, we cannot use Terraform to migrate existing node pools in production without scheduled downtime for all workloads running in the node pool.
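
Until the provider can update this field in place, one possible workaround (an illustrative sketch only, not a documented provider feature; the "-v2" pool name is hypothetical) is a blue/green migration: add a replacement pool that starts life with the new setting, drain workloads onto it, then drop the old pool from the configuration.

resource "google_container_node_pool" "workload_identity_v2" {
  provider = "google-beta"
  cluster  = google_container_cluster.workload_identity.name
  name     = "workload-identity-pool-v2"

  location   = google_container_cluster.workload_identity.location
  node_count = 1

  node_config {
    # a brand-new pool picks this up at creation, so nothing is force-replaced
    workload_metadata_config {
      node_metadata = "GKE_METADATA_SERVER"
    }
  }
}

The old pool's nodes can then be cordoned and drained out of band (e.g. with kubectl cordon / kubectl drain) before the old resource is removed.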

@RochesterinNYC

So if I understand correctly, the purpose of hashicorp/terraform-provider-google-beta#896 was to change the behavior so that enabling workload identity on GKE clusters via Terraform no longer requires GKE cluster recreation, but it didn't change the behavior where enabling workload identity on GKE node pools causes them to be recreated?

@rileykarson (Collaborator)

That's correct, @RochesterinNYC. I'll retitle this issue so it's about updating that field, now that we support updating the cluster-level one.

ghost commented Jun 20, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Jun 20, 2020
@github-actions github-actions bot added the service/container and forward/review labels Jan 14, 2025