
Terraform errantly reports need to update. State file out of sync with deployed resources when no modifications have been made. #44

Closed
hashibot opened this issue Jun 13, 2017 · 5 comments

Comments

@hashibot

This issue was originally opened by @jzampieron as hashicorp/terraform#12552. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.8.8

Affected Resource(s)

Please list the resources as a list, for example:

  • azurerm_virtual_machine

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

resource "azurerm_resource_group" "test" {
   name     = "${var.res_group}"
   location = "${var.azurerm_region}"
}

resource "azurerm_virtual_network" "test" {
  name                = "acctvn"
  address_space       = ["10.0.0.0/16"]
  location            = "${var.azurerm_region}"
  resource_group_name = "${azurerm_resource_group.test.name}"
}

resource "azurerm_subnet" "test" {
  name                 = "acctsub"
  resource_group_name  = "${azurerm_resource_group.test.name}"
  virtual_network_name = "${azurerm_virtual_network.test.name}"
  address_prefix       = "10.0.2.0/24"
}

resource "azurerm_public_ip" "test" {
    name                         = "acceptanceTestPublicIp${count.index}"
    location                     = "${var.azurerm_region}"
    resource_group_name          = "${azurerm_resource_group.test.name}"
    public_ip_address_allocation = "dynamic"
    count                        = 2
    tags {
        environment = "${var.instance_env}"
    }
}

resource "azurerm_network_interface" "test" {
  name                = "acctni${count.index}"
  location            = "${var.azurerm_region}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  count               = 2
  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = "${azurerm_subnet.test.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${element( azurerm_public_ip.test.*.id, count.index )}"
  }
}

resource "azurerm_storage_account" "test" {
  name                = "${var.instance_name}tftestsa"
  resource_group_name = "${azurerm_resource_group.test.name}"
  location            = "${var.azurerm_region}"
  account_type        = "Standard_LRS"

  tags {
    environment = "${var.instance_env}"
  }
}

resource "azurerm_storage_container" "test" {
  name                  = "vhds"
  resource_group_name   = "${azurerm_resource_group.test.name}"
  storage_account_name  = "${azurerm_storage_account.test.name}"
  container_access_type = "private"
}

# Name for the OS disks.
# This is generated w/ a UUID b/c azure can't figure out how to reattach
# _OR_ create if not exists.
# This way, terraform will never delete a disk by accident.
data "template_file" "azurerm_vm_osdisk" {
  template = "acctvm%05d-osdisk-${uuid()}"
}

# Name for the data disks.
# This is generated w/ a UUID b/c azure can't figure out how to reattach
# _OR_ create if not exists.
# This way, terraform will never delete a disk by accident.
data "template_file" "azurerm_vm_datadisk" {
  template = "acctvm%05d-datadisk-${uuid()}"
}

resource "azurerm_virtual_machine" "test" {
  name                  = "${format( "acctvm%05d", count.index )}"
  location              = "${var.azurerm_region}"
  resource_group_name   = "${azurerm_resource_group.test.name}"
  network_interface_ids = ["${element( azurerm_network_interface.test.*.id, count.index )}"]
  vm_size               = "Standard_A0"
  delete_os_disk_on_termination = true
  count                         = 2
  lifecycle {
     ignore_changes = [ "storage_os_disk", "storage_data_disk" ]
  }

  # Can use: az vm image list-publishers --location eastus2 to help here.
  storage_image_reference {
    publisher = "CoreOS"
    offer     = "CoreOS"
    sku       = "Stable"
    version   = "1235.9.0"
  }

  storage_os_disk {
    name          = "${ format( data.template_file.azurerm_vm_osdisk.rendered, count.index ) }"
    vhd_uri       = "${azurerm_storage_account.test.primary_blob_endpoint}${azurerm_storage_container.test.name}/${ format( data.template_file.azurerm_vm_osdisk.rendered, count.index ) }.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  # Be really really careful here. If, for some reason Terraform has to nuke
  # and rebuild the VM... you will get a new volume by UUID. This is
  # goofy, but ON PURPOSE b/c Azure API doesn't understand CREATE IF NOT EXISTS,
  # otherwise ATTACH to the image.
  # It's setup so you could manually go back and recover the _old_
  # VHDs and reattach them to the vms using the VM count.index
  # if you really had to.
  # AKA: Efforts are made to Preserve the VHDs, even at the cost of
  # wasting space and/or having dangling VHDs in the storage account.
  storage_data_disk {
    name          = "${ format( data.template_file.azurerm_vm_datadisk.rendered, count.index ) }"
    vhd_uri       = "${azurerm_storage_account.test.primary_blob_endpoint}${azurerm_storage_container.test.name}/${ format( data.template_file.azurerm_vm_datadisk.rendered, count.index ) }.vhd"
    disk_size_gb  = "512"
    # See: https://docs.microsoft.com/en-us/rest/api/compute/virtualmachines/virtualmachines-create-or-update#Anchor_2
    # There isn't an obvious "create if not exists, otherwise attach" option.
    create_option = "Empty"
    lun           = 0
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "testadmin"
    admin_password = "Password1234!"
    # Base64 encoded ... This is an "Ignition" configuration file for coreos.
    custom_data    = "${base64encode( file( "config.ign" ) )}"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags {
    environment = "${var.instance_env}"
  }
}

Expected Behavior

Terraform should report that the state file is in sync with the tf files.

Actual Behavior

terraform plan reports the need to update the resources.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply
  2. terraform plan -> Note that terraform still tries to report modifications.

Important Factoids

I believe this is related to the base64encode() function not storing the proper value in the state file for the custom_data element.

The state file shows the raw JSON content instead of the base64-encoded string shown in the plan output.

It's uncertain what's actually running on the cluster, although I believe it's correct b/c the API call works and the CoreOS configuration is updated.
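For illustration, a provider built on Terraform's helper/schema package could normalize custom_data before it is written to state, so that the stored value matches the base64-encoded value shown in the plan. The sketch below is only an assumption about how such handling might look, not the azurerm provider's actual code:

package azurerm

import (
	"encoding/base64"

	"github.com/hashicorp/terraform/helper/schema"
)

// customDataSchema is a hypothetical schema field for os_profile.custom_data.
// The StateFunc normalizes the configured value to base64 before it is
// persisted, so the state file stores the same encoded string the plan shows.
func customDataSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeString,
		Optional: true,
		StateFunc: func(v interface{}) string {
			s := v.(string)
			// If the value is not already valid base64 (e.g. raw Ignition JSON),
			// encode it; otherwise keep it as supplied.
			if _, err := base64.StdEncoding.DecodeString(s); err != nil {
				return base64.StdEncoding.EncodeToString([]byte(s))
			}
			return s
		},
	}
}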

@hashibot hashibot added the bug label Jun 13, 2017
@rcarun rcarun added this to the M1 milestone Oct 11, 2017
@rcarun rcarun added acs and removed acs labels Oct 24, 2017
@achandmsft achandmsft modified the milestones: M1, 1.4.0 Mar 8, 2018
@achandmsft
Contributor

@tombuildsstuff could this be related to #148? If so, could you please close this one after linking it to that issue.

@tombuildsstuff tombuildsstuff modified the milestones: 1.4.0, Temp/To Be Sorted Apr 17, 2018
@achandmsft achandmsft modified the milestones: Temp/To Be Sorted, 1.4.0 Apr 19, 2018
@tombuildsstuff tombuildsstuff modified the milestones: 1.4.0, 1.5.0 Apr 25, 2018
@tombuildsstuff tombuildsstuff modified the milestones: 1.5.0, 1.6.0 May 8, 2018
@tombuildsstuff tombuildsstuff modified the milestones: 1.6.0, Soon May 10, 2018
@tombuildsstuff
Contributor

Confirmed this is still a bug. I thought it was to do with custom_data being returned as an empty string, but it requires further investigation. Here's a full working config:

resource "azurerm_resource_group" "test" {
  name     = "tom-dev2"
  location = "central us"
}

resource "azurerm_virtual_network" "test" {
  name                = "acctvn"
  address_space       = ["10.0.0.0/16"]
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
}

resource "azurerm_subnet" "test" {
  name                 = "acctsub"
  resource_group_name  = "${azurerm_resource_group.test.name}"
  virtual_network_name = "${azurerm_virtual_network.test.name}"
  address_prefix       = "10.0.2.0/24"
}

resource "azurerm_public_ip" "test" {
  name                         = "acceptanceTestPublicIp${count.index}"
  location                     = "${azurerm_resource_group.test.location}"
  resource_group_name          = "${azurerm_resource_group.test.name}"
  public_ip_address_allocation = "dynamic"
  count                        = 2
}

resource "azurerm_network_interface" "test" {
  name                = "acctni${count.index}"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  count               = 2

  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = "${azurerm_subnet.test.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${element( azurerm_public_ip.test.*.id, count.index )}"
  }
}

resource "azurerm_storage_account" "test" {
  name                     = "tomdevtftestsa"
  resource_group_name      = "${azurerm_resource_group.test.name}"
  location                 = "${azurerm_resource_group.test.location}"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "test" {
  name                  = "vhds"
  resource_group_name   = "${azurerm_resource_group.test.name}"
  storage_account_name  = "${azurerm_storage_account.test.name}"
  container_access_type = "private"
}

# Name for the OS disks.
# This is generated w/ a UUID b/c azure can't figure out how to reattach
# _OR_ create if not exists.
# This way, terraform will never delete a disk by accident.
data "template_file" "azurerm_vm_osdisk" {
  template = "acctvm%05d-osdisk-${uuid()}"
}

# Name for the data disks.
# This is generated w/ a UUID b/c azure can't figure out how to reattach
# _OR_ create if not exists.
# This way, terraform will never delete a disk by accident.
data "template_file" "azurerm_vm_datadisk" {
  template = "acctvm%05d-datadisk-${uuid()}"
}

resource "azurerm_virtual_machine" "test" {
  name                          = "${format( "acctvm%05d", count.index )}"
  location                      = "${azurerm_resource_group.test.location}"
  resource_group_name           = "${azurerm_resource_group.test.name}"
  network_interface_ids         = ["${element( azurerm_network_interface.test.*.id, count.index )}"]
  vm_size                       = "Standard_A0"
  delete_os_disk_on_termination = true
  count                         = 2

  lifecycle {
    ignore_changes = ["storage_os_disk", "storage_data_disk"]
  }

  # Can use: az vm image list-publishers --location eastus2 to help here.
  storage_image_reference {
    publisher = "CoreOS"
    offer     = "CoreOS"
    sku       = "Stable"
    version   = "1235.9.0"
  }

  storage_os_disk {
    name          = "${ format( data.template_file.azurerm_vm_osdisk.rendered, count.index ) }"
    vhd_uri       = "${azurerm_storage_account.test.primary_blob_endpoint}${azurerm_storage_container.test.name}/${ format( data.template_file.azurerm_vm_osdisk.rendered, count.index ) }.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  # Be really really careful here. If, for some reason Terraform has to nuke
  # and rebuild the VM... you will get a new volume by UUID. This is
  # goofy, but ON PURPOSE b/c Azure API doesn't understand CREATE IF NOT EXISTS,
  # otherwise ATTACH to the image.
  # It's setup so you could manually go back and recover the _old_
  # VHDs and reattach them to the vms using the VM count.index
  # if you really had to.
  # AKA: Efforts are made to Preserve the VHDs, even at the cost of
  # wasting space and/or having dangling VHDs in the storage account.
  storage_data_disk {
    name         = "${ format( data.template_file.azurerm_vm_datadisk.rendered, count.index ) }"
    vhd_uri      = "${azurerm_storage_account.test.primary_blob_endpoint}${azurerm_storage_container.test.name}/${ format( data.template_file.azurerm_vm_datadisk.rendered, count.index ) }.vhd"
    disk_size_gb = "512"

    # See: https://docs.microsoft.com/en-us/rest/api/compute/virtualmachines/virtualmachines-create-or-update#Anchor_2
    # There isn't an obvious "create if not exists, otherwise attach" option.
    create_option = "Empty"

    lun = 0
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "testadmin"
    admin_password = "Password1234!"

    # Base64 encoded ... This is an "Ignition" configuration file for coreos.
    custom_data = "${base64encode( file( "config.ign" ) )}"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}

@katbyte
Collaborator

katbyte commented May 17, 2018

Custom data can now be updated; however, the API always returns nothing, so the value needs to be passed along or DiffSuppressed so there isn't always a diff.
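A minimal sketch of what such a diff suppression could look like, assuming the helper/schema package of the time; the function name and the exact condition are illustrative, not the fix that was eventually shipped:

package azurerm

import "github.com/hashicorp/terraform/helper/schema"

// suppressCustomDataDiff is an illustrative DiffSuppressFunc: since the Azure
// API never echoes custom_data back, the value read into state is empty even
// though the config still sets it. Treating that specific case as "no change"
// stops terraform plan from reporting a modification on every run.
func suppressCustomDataDiff(k, old, new string, d *schema.ResourceData) bool {
	return old == "" && new != ""
}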

@katbyte katbyte self-assigned this May 17, 2018
@katbyte katbyte removed their assignment Oct 22, 2018
@tombuildsstuff tombuildsstuff modified the milestones: Soon, Being Sorted Oct 25, 2018
@tombuildsstuff
Contributor

👋

Taking a look into this, it appears to be a duplicate of #1013, in that the VM resource doesn't handle Custom Data correctly. We're planning on looking at the VM resource as a part of 2.0, so I'm going to close this issue in favour of #1013.

Thanks!

@tombuildsstuff tombuildsstuff removed this from the Being Sorted milestone Oct 25, 2018
@ghost

ghost commented Mar 6, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 6, 2019