
kubernetes_deployment diff always tries to add volume_mount #1256

Closed

tyen-brex opened this issue Apr 30, 2021 · 5 comments

tyen-brex commented Apr 30, 2021

Our Terraform config has a kubernetes_service_account that's used in a kubernetes_deployment. terraform plan perpetually shows a diff that wants to add a volume and volume_mount for the service account token.

Terraform Version, Provider Version and Kubernetes Version

Terraform version: v0.12.29
Kubernetes provider version: 2.1.0
Kubernetes version: 1.16

Affected Resource(s)

  • kubernetes_deployment

Terraform Configuration Files

resource "kubernetes_service_account" "manager" {
  automount_service_account_token = true

  metadata {
    name      = local.name
    namespace = local.namespace
    labels    = local.labels
  }
}

resource "kubernetes_deployment" "manager" {
  metadata {
    name      = local.name
    namespace = local.namespace
    labels    = local.labels
  }

  spec {
    replicas = 6

    strategy {
      type = "RollingUpdate"

      rolling_update {
        max_unavailable = 1
        max_surge       = 1
      }
    }

    selector {
      match_labels = local.labels
    }

    template {
      metadata {
        name        = local.name
        namespace   = local.namespace
        labels      = local.labels
        annotations = local.annotations
      }

      spec {
        service_account_name             = kubernetes_service_account.manager.metadata.0.name
        termination_grace_period_seconds = 305
        automount_service_account_token  = true
        enable_service_links             = false

        container {
          name              = "manager"
          image             = "111111111111.dkr.ecr.us-west-2.amazonaws.com/manager"
          image_pull_policy = "Always"

          # Graceful shutdown
          lifecycle {
            pre_stop {
              exec {
                command = ["bash", "/scripts/wait-for-scripts"]
              }
            }
          }

          volume_mount {
            name       = "config"
            mount_path = "/scripts/configuration.json"
            sub_path   = "configuration.json"
            read_only  = true
          }

          volume_mount {
            mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
            name       = kubernetes_service_account.manager.default_secret_name
            read_only  = true
          }

          # High resource allocation since a lot of work is done within the manager pod
          resources {
            requests = {
              cpu    = "1"
              memory = "1Gi"
            }

            limits = {
              cpu    = "4"
              memory = "4Gi"
            }
          }
        }

        volume {
          name = "config"
          secret {
            secret_name = kubernetes_secret.manager.metadata[0].name
          }
        }

        volume {
          name = kubernetes_service_account.manager.default_secret_name
          secret {
            secret_name = kubernetes_service_account.manager.default_secret_name
          }
        }

        node_selector = {
          "role" = "manager"
        }

        toleration {
          operator = "Exists"
        }
      }
    }
  }
}


Debug Output

Panic Output

Steps to Reproduce

  1. terraform apply
  2. terraform plan (or a second terraform apply)

Expected Behavior

  1. Terraform apply adds the volume mount
  2. Next terraform apply should show no changes

Actual Behavior

Terraform apply perpetually wants to add a volume and volume_mount for the service account token:

  ~ resource "kubernetes_deployment" "manager" {
        id               = "storage/manager"
        wait_for_rollout = true

        metadata {
            annotations      = {}
            generation       = 249
            labels           = {
                "app.kubernetes.io/instance"   = "manager"
                "app.kubernetes.io/managed-by" = "terraform"
                "app.kubernetes.io/name"       = "manager"
            }
            name             = "manager"
            namespace        = "storage"
            resource_version = "4834114905"
            uid              = "3ebc6261-68a2-4346-9e39-e4d9e80964b9"
        }

      ~ spec {
            min_ready_seconds         = 0
            paused                    = false
            progress_deadline_seconds = 600
            replicas                  = "6"
            revision_history_limit    = 10

            selector {
                match_labels = {
                    "app.kubernetes.io/instance"   = "manager"
                    "app.kubernetes.io/managed-by" = "terraform"
                    "app.kubernetes.io/name"       = "manager"
                }
            }

            strategy {
                type = "RollingUpdate"

                rolling_update {
                    max_surge       = "1"
                    max_unavailable = "1"
                }
            }

          ~ template {
                metadata {
                    annotations = {
                        "cluster-autoscaler.kubernetes.io/safe-to-evict" = "false"
                    }
                    generation  = 0
                    labels      = {
                        "app.kubernetes.io/instance"   = "manager"
                        "app.kubernetes.io/managed-by" = "terraform"
                        "app.kubernetes.io/name"       = "manager"
                    }
                    name        = "manager"
                    namespace   = "storage"
                }

              ~ spec {
                    active_deadline_seconds          = 0
                    automount_service_account_token  = true
                    dns_policy                       = "ClusterFirst"
                    enable_service_links             = false
                    host_ipc                         = false
                    host_network                     = false
                    host_pid                         = false
                    node_selector                    = {
                        "brexapps.io/role" = "foundation"
                    }
                    restart_policy                   = "Always"
                    service_account_name             = "manager"
                    share_process_namespace          = false
                    termination_grace_period_seconds = 305

                  ~ container {
                        args                       = []
                        command                    = []
                        image                      = "111111111111.dkr.ecr.us-west-2.amazonaws.com/manager"
                        image_pull_policy          = "Always"
                        name                       = "manager"
                        stdin                      = false
                        stdin_once                 = false
                        termination_message_path   = "/dev/termination-log"
                        termination_message_policy = "File"
                        tty                        = false

                        resources {
                            limits   = {
                                "cpu"    = "4"
                                "memory" = "2Gi"
                            }
                            requests = {
                                "cpu"    = "2"
                                "memory" = "1Gi"
                            }
                        }

                        volume_mount {
                            mount_path        = "/scripts/configuration.json"
                            mount_propagation = "None"
                            name              = "config"
                            read_only         = true
                            sub_path          = "configuration.json"
                        }
                      + volume_mount {
                          + mount_path        = "/var/run/secrets/kubernetes.io/serviceaccount"
                          + mount_propagation = "None"
                          + name              = "manager-token-5nlnp"
                          + read_only         = true
                        }
                    }

                    toleration {
                        operator = "Exists"
                    }

                    volume {
                        name = "config"

                        secret {
                            default_mode = "0644"
                            optional     = false
                            secret_name  = "manager"
                        }
                    }
                  + volume {
                      + name = "manager-token-5nlnp"

                      + secret {
                          + default_mode = "0644"
                          + secret_name  = "manager-token-5nlnp"
                        }
                    }
                }
            }
        }
    }

Important Factoids

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@tyen-brex tyen-brex added the bug label Apr 30, 2021
@dak1n1 dak1n1 assigned dak1n1 and unassigned dak1n1 May 4, 2021
dak1n1 (Contributor) commented May 4, 2021

This is really similar to an issue that was fixed a little while ago for the default service account: #1096. But it looks like the problem still exists for non-default service accounts. I think PR #1235 will fix it, since the value of volume_mounts will no longer be computed. We'll have to make sure to cover this case during testing.

jrhouston (Collaborator) commented

@tyen-brex I notice you have automount_service_account_token set to true but also have the volume defined explicitly in your config. Is there a reason you're doing this? You shouldn't have to define it explicitly if it's going to be auto-mounted anyway.
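
A minimal sketch of what that could look like (the reporter's pod template with the explicit token volume and volume_mount blocks simply dropped, assuming the auto-mount is all you need):

spec {
  service_account_name            = kubernetes_service_account.manager.metadata.0.name
  automount_service_account_token = true

  container {
    name  = "manager"
    image = "111111111111.dkr.ecr.us-west-2.amazonaws.com/manager"

    # No volume_mount for the token here: with automount enabled,
    # Kubernetes injects it at /var/run/secrets/kubernetes.io/serviceaccount.
  }

  # No corresponding volume block for the token either.
}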

In any case, this diff arises because we're doing a simple regex on volume mount names to strip out service account tokens so they don't cause a diff when they've been auto-mounted. See here: https://github.com/hashicorp/terraform-provider-kubernetes/blob/main/kubernetes/structures_pod.go#L46. We would need to update this code to check whether the volume is actually defined in the config or not.

If you need this to be explicit in your config, then the workaround for the moment would be to change the name so it doesn't match the regex for the default naming convention.
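
For example (a sketch only; "sa-token" is a hypothetical name chosen so it won't match the <serviceaccount>-token-<suffix> convention):

container {
  # ... other container settings ...

  volume_mount {
    mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
    name       = "sa-token" # hypothetical name the token-stripping regex won't match
    read_only  = true
  }
}

volume {
  name = "sa-token" # must match the volume_mount name above
  secret {
    secret_name = kubernetes_service_account.manager.default_secret_name
  }
}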

tyen-brex (Author) commented

@jrhouston Good point about having the volume defined explicitly. Will try removing the explicit volume, thanks.

jrhouston (Collaborator) commented

Going to close this – please reopen if this is still an issue for you.

github-actions (bot) commented

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jun 13, 2021