Terraform shows changes for launch configurations out of nowhere #24087

Closed · Krishna1408 opened this issue Feb 12, 2020 · 2 comments

Comments

Krishna1408 commented Feb 12, 2020

Terraform Version

Terraform v0.12.13

Terraform Configuration Files

terraform {
  required_version = ">= 0.12.0"
}

provider "aws" {
  region  = var.region
}

provider "random" {
  version = "~> 2.1"
}

provider "local" {
  version = "~> 1.2"
}

provider "null" {
  version = "~> 2.1"
}

provider "template" {
  version = "~> 2.1"
}

resource "random_string" "suffix" {
  length  = 8
  special = false
}
module "eks" {
  source = "../../../modules/aws/terraform-eks"

  env             = local.env
  cluster_name    = local.cluster_name
  cluster_version = "1.14"

  subnets = module.vpc.private_subnets
  vpc_id  = module.vpc.vpc_id

  worker_groups = [
    {
      name                 = "terraform-1"
      instance_type        = "t3.large"
      key_name             = "eks-dev"
      bootstrap_extra_args = "--enable-docker-bridge true"
      kubelet_extra_args   = "--node-labels=env=Test,service=docker,team=platform"
      node_group_k8s_labels = {
        Environment = "Test"
        Team        = "Platform"
        Service     = "Docker"
      }
      asg_max_size = "0" # Maximum worker capacity in the autoscaling group.
      asg_min_size = "0"
    },
    {
      name                 = "admin-worker-group-1"
      instance_type        = "t3.xlarge"
      key_name             = "eks-dev"
      bootstrap_extra_args = "--enable-docker-bridge true"
      kubelet_extra_args   = "--node-labels=env=Dev,service=ci-cd,team=platform"
      node_group_k8s_labels = {
        Environment = "Admin"
        Team        = "Platform"
      }
      asg_max_size = "10" # Maximum worker capacity in the autoscaling group.
      asg_min_size = "1"
    },
  ]

  map_roles    = var.map_roles
  map_users    = var.map_users
}

Expected Behavior

Since no changes have been made to the Terraform files or the module, running terraform plan should report no changes.

Actual Behavior

I am getting changes for all of my worker groups when I run terraform plan. E.g., below is the plan output for one of the worker groups:

  # module.eks.aws_launch_configuration.workers[0] must be replaced
+/- resource "aws_launch_configuration" "workers" {
        associate_public_ip_address      = false
        ebs_optimized                    = true
        enable_monitoring                = true
        iam_instance_profile             = "sennder-admin-cluster20200106121106098100000008"
      ~ id                               = "sennder-admin-cluster-terraform-12020010612110773630000000d" -> (known after apply)
        image_id                         = "ami-07034b303e1ffc843"
        instance_type                    = "t3.large"
        key_name                         = "eks-dev"
      ~ name                             = "sennder-admin-cluster-terraform-12020010612110773630000000d" -> (known after apply)
        name_prefix                      = "sennder-admin-cluster-terraform-1"
        security_groups                  = [
        ]
      ~ user_data_base64                 = "BIG BLOCK OF BASE64" -> (known after apply) # forces replacement
      - vpc_classic_link_security_groups = [] -> null

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + no_device             = (known after apply)
          + snapshot_id           = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      ~ root_block_device {
            delete_on_termination = true
          ~ encrypted             = false -> (known after apply)
            iops                  = 0
            volume_size           = 100
            volume_type           = "gp2"
        }
    }

  # module.eks.random_pet.workers[0] must be replaced
+/- resource "random_pet" "workers" {
      ~ id        = "amusing-parakeet" -> (known after apply)
      ~ keepers   = {
          - "lc_name" = "sennder-admin-cluster-terraform-12020010612110773630000000d"
        } -> (known after apply) # forces replacement
        length    = 2
        separator = "-"
    }


  # module.eks.local_file.kubeconfig[0] must be replaced
-/+ resource "local_file" "kubeconfig" {
      ~ content              = "" -> (known after apply) # forces replacement
        directory_permission = "0777"
        file_permission      = "0777"
        filename             = "./kubeconfig_sennder-admin-cluster"
      ~ id                   = "d2ea96f32318207ab27000f3d7ff80a70ae6b321" -> (known after apply)
    }
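
Since user_data_base64 is the attribute forcing replacement, it can help to look at what is actually stored for it. A minimal diagnostic sketch (my own, assuming jq and GNU base64 are available locally; the jq path follows the Terraform 0.12 state JSON layout):

terraform state pull \
  | jq -r '.resources[]
      | select(.type == "aws_launch_configuration")
      | .instances[].attributes.user_data_base64' \
  | base64 -d > user_data_from_state.txt

The decoded script can then be compared against the bootstrap_extra_args and kubelet_extra_args in the worker_groups above, to see whether the module is now rendering user_data differently than it did at the original apply.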

Steps to Reproduce

  1. terraform init
  2. terraform plan

Additional Context

I found that this is happening for all of the old clusters I have.
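
A possible factor worth checking (an assumption on my part, not something confirmed in this thread): the provider "aws" block in the configuration above carries no version constraint, unlike the other providers, so a provider upgrade between the original apply and this plan could change how attributes such as user_data are computed. Pinning it would rule that out; the exact constraint below is only illustrative:

provider "aws" {
  region  = var.region
  version = "~> 2.47" # hypothetical pin; the report does not state which provider version was in use
}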

@Krishna1408 changed the title from "Terraform shows changes for autoscaling group out of nowhere" to "Terraform shows changes for launch configurations out of nowhere" on Feb 12, 2020

ghost commented Feb 13, 2020

This issue has been automatically migrated to hashicorp/terraform-provider-aws#12036 because it looks like an issue with that provider. If you believe this is not an issue with the provider, please reply to hashicorp/terraform-provider-aws#12036.

@ghost closed this as completed on Feb 13, 2020

ghost commented Apr 1, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited the conversation to collaborators on Apr 1, 2020