
vSphere: Adding 'detach_unknown_disks_on_delete' flag for VM resource #8947

Merged
merged 1 commit from the vsphere-vm-disks-detach branch on Sep 27, 2016

Conversation

dagnello (Contributor)

Optional, defaults to false. If true, disks not managed by the Terraform VM resource will be detached prior to VM deletion.

Issue: #8945
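
Below is a minimal, hypothetical usage sketch of the new flag, modeled on the acceptance-test configuration further down in this thread; the resource label, datacenter, resource pool, datastore, template, and network label are illustrative placeholders, not values prescribed by this PR:

resource "vsphere_virtual_machine" "example" {
    name          = "terraform-example"
    datacenter    = "FTC"
    resource_pool = "Cluster1"
    vcpu          = 2
    memory        = 1024

    # New flag: before destroying the VM, detach (rather than delete) any
    # disks that were attached to it outside of Terraform.
    detach_unknown_disks_on_delete = true

    network_interface {
        label = "VM Private"
    }

    # Boot disk cloned from a template.
    disk {
        datastore = "datastore1"
        template  = "DansTfTest/danTestTemplate"
    }

    # Managed data disk that should survive terraform destroy.
    disk {
        size            = 1
        controller_type = "scsi"
        name            = "one"
        keep_on_remove  = true
    }
}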

dkalleg (Contributor) commented Sep 20, 2016

LGTM

dagnello (Contributor, Author) commented Sep 20, 2016

File resource tests verified on vSphere 5.5 (vCenter 6.0); test procedure and console output snippet below:

  1. Copy vmdk with file resource
  2. Create another volume with virtual_disk resource
  3. Create a VM with both volumes, with keep_on_remove = "true" on each disk and detach_unknown_disks_on_delete = true on the VM resource (see the configuration sketch after this list)
  4. terraform apply
  5. in vCenter, create two additional disks and attach to VM created by terraform
  6. terraform destroy
  7. verify volumes created in step 5 are not deleted
  8. test again without detach_unknown_disks_on_delete = true for VM resource
  9. verify volumes created in step 5 are deleted (existing functionality)
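
A condensed, hypothetical sketch of the configuration implied by steps 1-3; resource labels, the VM name, and the shortened vmdk paths are placeholders abbreviated from the apply output below, and the interpolation references between resources are illustrative:

# Step 1: copy an existing vmdk with the file resource.
resource "vsphere_file" "bootdisk" {
    datacenter         = "FTC"
    datastore          = "san2"
    source_datacenter  = "FTC"
    source_datastore   = "san2"
    source_file        = "davide_test/trusty64_40g.vmdk"
    destination_file   = "davide/hcp-compute-boot.vmdk"
    create_directories = true
}

# Step 2: create another volume with the virtual_disk resource.
resource "vsphere_virtual_disk" "volume" {
    datacenter   = "FTC"
    datastore    = "san2"
    size         = 80
    type         = "thin"
    adapter_type = "ide"
    vmdk_path    = "davide/hcp-docker.vmdk"
}

# Step 3: attach both volumes, keep them on remove, and detach any disks
# Terraform does not know about before deleting the VM.
resource "vsphere_virtual_machine" "node" {
    name       = "hcp-kubernetes-node"
    datacenter = "FTC"
    cluster    = "Cluster2"
    vcpu       = 4
    memory     = 32768

    detach_unknown_disks_on_delete = true

    network_interface {
        label = "VM Private"
    }

    disk {
        bootable        = true
        controller_type = "ide"
        datastore       = "san2"
        type            = "eager_zeroed"
        keep_on_remove  = true
        vmdk            = "${vsphere_file.bootdisk.destination_file}"
    }

    disk {
        controller_type = "scsi-paravirtual"
        datastore       = "san2"
        type            = "thin"
        keep_on_remove  = true
        vmdk            = "${vsphere_virtual_disk.volume.vmdk_path}"
    }
}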
davide@harbor-jumpbox:/tmp/tform_rpmgr696288636_da$ terraform apply
vsphere_folder.folder-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Creating...
  datacenter:    "" => "FTC"
  existing_path: "" => "<computed>"
  path:          "" => "davide"
vsphere_file.bootdisk-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Creating...
  create_directories: "" => "true"
  datacenter:         "" => "FTC"
  datastore:          "" => "san2"
  destination_file:   "" => "davide/80939488-7e8a-11e6-abf8-005056b66b99/hcp-compute-boot-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99.vmdk"
  source_datacenter:  "" => "FTC"
  source_datastore:   "" => "san2"
  source_file:        "" => "davide_test/trusty64_40g.vmdk"
vsphere_folder.folder-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Creation complete
vsphere_file.bootdisk-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Still creating... (10s elapsed)
vsphere_file.bootdisk-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Still creating... (20s elapsed)
vsphere_file.bootdisk-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Still creating... (30s elapsed)
vsphere_file.bootdisk-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Still creating... (40s elapsed)
vsphere_file.bootdisk-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Still creating... (50s elapsed)
vsphere_file.bootdisk-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Still creating... (1m0s elapsed)
vsphere_file.bootdisk-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Still creating... (1m10s elapsed)
vsphere_file.bootdisk-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Still creating... (1m20s elapsed)
vsphere_file.bootdisk-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Still creating... (1m30s elapsed)
vsphere_file.bootdisk-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Creation complete
vsphere_virtual_disk.volume-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Creating...
  adapter_type: "" => "ide"
  datacenter:   "" => "FTC"
  datastore:    "" => "san2"
  size:         "" => "80"
  type:         "" => "thin"
  vmdk_path:    "" => "davide/80939488-7e8a-11e6-abf8-005056b66b99/hcp-docker-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99.vmdk"
vsphere_virtual_disk.volume-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Creation complete
vsphere_virtual_machine.hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Creating...
  cdrom.#:                                "" => "1"
  cdrom.0.datastore:                      "" => "san2"
  cdrom.0.path:                           "" => "davide_test/cidata.iso"
  cluster:                                "" => "Cluster2"
  datacenter:                             "" => "FTC"
  detach_unknown_disks_on_delete:         "" => "true"
  disk.#:                                 "" => "2"
  disk.2491840647.bootable:               "" => "true"
  disk.2491840647.controller_type:        "" => "ide"
  disk.2491840647.datastore:              "" => "san2"
  disk.2491840647.iops:                   "" => ""
  disk.2491840647.keep_on_remove:         "" => "true"
  disk.2491840647.key:                    "" => "<computed>"
  disk.2491840647.name:                   "" => ""
  disk.2491840647.size:                   "" => ""
  disk.2491840647.template:               "" => ""
  disk.2491840647.type:                   "" => "eager_zeroed"
  disk.2491840647.uuid:                   "" => "<computed>"
  disk.2491840647.vmdk:                   "" => "davide/80939488-7e8a-11e6-abf8-005056b66b99/hcp-compute-boot-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99.vmdk"
  disk.3336287808.bootable:               "" => "false"
  disk.3336287808.controller_type:        "" => "scsi-paravirtual"
  disk.3336287808.datastore:              "" => "san2"
  disk.3336287808.iops:                   "" => ""
  disk.3336287808.keep_on_remove:         "" => "true"
  disk.3336287808.key:                    "" => "<computed>"
  disk.3336287808.name:                   "" => ""
  disk.3336287808.size:                   "" => ""
  disk.3336287808.template:               "" => ""
  disk.3336287808.type:                   "" => "thin"
  disk.3336287808.uuid:                   "" => "<computed>"
  disk.3336287808.vmdk:                   "" => "davide/80939488-7e8a-11e6-abf8-005056b66b99/hcp-docker-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99.vmdk"
  domain:                                 "" => "vsphere.local"
  enable_disk_uuid:                       "" => "true"
  folder:                                 "" => "davide"
  linked_clone:                           "" => "false"
  memory:                                 "" => "32768"
  memory_reservation:                     "" => "0"
  name:                                   "" => "hcp-kubernetes-node-9b2c99be-7e8f-11e6-abf8-005056b66b99_da"
  network_interface.#:                    "" => "1"
  network_interface.0.ip_address:         "" => "<computed>"
  network_interface.0.ipv4_address:       "" => "<computed>"
  network_interface.0.ipv4_gateway:       "" => "<computed>"
  network_interface.0.ipv4_prefix_length: "" => "<computed>"
  network_interface.0.ipv6_address:       "" => "<computed>"
  network_interface.0.ipv6_gateway:       "" => "<computed>"
  network_interface.0.ipv6_prefix_length: "" => "<computed>"
  network_interface.0.label:              "" => "VM Private"
  network_interface.0.mac_address:        "" => "<computed>"
  network_interface.0.subnet_mask:        "" => "<computed>"
  skip_customization:                     "" => "false"
  time_zone:                              "" => "Etc/UTC"
  uuid:                                   "" => "<computed>"
  vcpu:                                   "" => "4"
vsphere_virtual_machine.hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Still creating... (10s elapsed)
vsphere_virtual_machine.hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Still creating... (20s elapsed)
vsphere_virtual_machine.hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Still creating... (30s elapsed)
vsphere_virtual_machine.hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Still creating... (40s elapsed)
vsphere_virtual_machine.hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Still creating... (50s elapsed)
vsphere_virtual_machine.hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Still creating... (1m0s elapsed)
vsphere_virtual_machine.hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Still creating... (1m10s elapsed)
vsphere_virtual_machine.hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Creation complete

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
davide@harbor-jumpbox:/tmp/tform_rpmgr696288636_da$ terraform destroy
Do you really want to destroy?
  Terraform will delete all your managed infrastructure.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

vsphere_file.bootdisk-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Refreshing state... (ID: [san2] FTC/davide/80939488-7e8a-11e6-abf8-005056b66b99/hcp-compute-boot-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99.vmdk)
vsphere_folder.folder-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Refreshing state... (ID: FTC/davide)
vsphere_virtual_disk.volume-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Refreshing state... (ID: davide/80939488-7e8a-11e6-abf8-005056b66b99/hcp-docker-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99.vmdk)
vsphere_virtual_machine.hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Refreshing state... (ID: davide/hcp-kubernetes-node-9b2c99be-7e8f-11e6-abf8-005056b66b99_da)
vsphere_virtual_machine.hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Destroying...
vsphere_virtual_machine.hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Destruction complete
vsphere_folder.folder-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Destroying...
vsphere_virtual_disk.volume-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Destroying...
vsphere_folder.folder-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Destruction complete
vsphere_virtual_disk.volume-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Destruction complete
vsphere_file.bootdisk-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Destroying...
vsphere_file.bootdisk-hcp_kubernetes_node_9b2c99be-7e8f-11e6-abf8-005056b66b99: Destruction complete

Destroy complete! Resources: 4 destroyed.

@dagnello dagnello changed the title [WIP] Adding 'detach_unknown_disks_on_delete' flag for VM resource Adding 'detach_unknown_disks_on_delete' flag for VM resource Sep 20, 2016
dagnello (Contributor, Author)

@stack72 @jen20 PR is tested and ready for review.

@dagnello dagnello changed the title Adding 'detach_unknown_disks_on_delete' flag for VM resource vSphere: Adding 'detach_unknown_disks_on_delete' flag for VM resource Sep 21, 2016
jen20 (Contributor) commented Sep 22, 2016

Hi @dagnello! This looks good to me at first blush; however, it would be nice to see an acceptance test verifying that this behaves correctly, since it is potentially destructive if incorrect.

dagnello (Contributor, Author)

Hello @jen20! Great, will add an acceptance test for this.

@dagnello dagnello changed the title vSphere: Adding 'detach_unknown_disks_on_delete' flag for VM resource [WIP] vSphere: Adding 'detach_unknown_disks_on_delete' flag for VM resource Sep 22, 2016
@dagnello dagnello force-pushed the vsphere-vm-disks-detach branch 8 times, most recently from d633ae0 to 0cc6fa2 on September 27, 2016 01:02
@dagnello dagnello force-pushed the vsphere-vm-disks-detach branch from 0cc6fa2 to dfe1cac on September 27, 2016 01:16
dagnello (Contributor, Author)

@jen20 new acceptance test added: TestAccVSphereVirtualMachine_DetachUnknownDisks

The following is output from the TestAccVSphereVirtualMachine_keepOnRemove and TestAccVSphereVirtualMachine_DetachUnknownDisks acceptance tests:

davide@harbor-jumpbox:~/goland/src/github.com/hashicorp/terraform$ make testacc TEST=./builtin/providers/vsphere/ TESTARGS="-run TestAccVSphereVirtualMachine_keepOnRemove"
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/09/26 18:04:55 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/vsphere/ -v -run TestAccVSphereVirtualMachine_keepOnRemove -timeout 120m
=== RUN   TestAccVSphereVirtualMachine_keepOnRemove
2016/09/26 18:05:59 [DEBUG] data= {{      0 0 {0}  [] [] []   [] [] false false false false {    } map[]} VM Private vsphere_virtual_machine.keep_disk  2  }
2016/09/26 18:05:59 [DEBUG] template=
resource "vsphere_virtual_machine" "keep_disk" {
    name = "terraform-test"

%s
    vcpu = 2
    memory = 1024
    network_interface {
        label = "%s"
        ipv4_address = "%s"
        ipv4_prefix_length = %s
        ipv4_gateway = "%s"
    }
     disk {
%s
        template = "%s"
        iops = 500
    }

    disk {
        size = 1
        iops = 500
    controller_type = "scsi"
    name = "one"
    keep_on_remove = true
    }
}
2016/09/26 18:05:59 [DEBUG] template config=
resource "vsphere_virtual_machine" "keep_disk" {
    name = "terraform-test"

    datacenter = "FTC"
    resource_pool = "Cluster1"

    vcpu = 2
    memory = 1024
    network_interface {
        label = "VM Private"
        ipv4_address = ""
        ipv4_prefix_length = 24
        ipv4_gateway = ""
    }
     disk {
        datastore = "datastore1"

        template = "DansTfTest/danTestTemplate"
        iops = 500
    }

    disk {
        size = 1
        iops = 500
    controller_type = "scsi"
    name = "one"
    keep_on_remove = true
    }
}
--- PASS: TestAccVSphereVirtualMachine_keepOnRemove (225.09s)
PASS
ok      github.com/hashicorp/terraform/builtin/providers/vsphere    225.120s
davide@harbor-jumpbox:~/goland/src/github.com/hashicorp/terraform$ make testacc TEST=./builtin/providers/vsphere/ TESTARGS="-run TestAccVSphereVirtualMachine_DetachUnknownDisks"
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/09/26 18:10:40 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/vsphere/ -v -run TestAccVSphereVirtualMachine_DetachUnknownDisks -timeout 120m
=== RUN   TestAccVSphereVirtualMachine_DetachUnknownDisks
2016/09/26 18:11:44 [DEBUG] data= {{      0 0 {0}  [] [] []   [] [] false false false false {    } map[]} VM Private vsphere_virtual_machine.detach_unkown_disks  4  }
2016/09/26 18:11:44 [DEBUG] template=
resource "vsphere_virtual_machine" "detach_unkown_disks" {
    name = "terraform-test"

%s
    vcpu = 2
    memory = 1024
    network_interface {
        label = "%s"
        ipv4_address = "%s"
        ipv4_prefix_length = %s
        ipv4_gateway = "%s"
    }
     disk {
%s
        template = "%s"
        iops = 500
    }

    detach_unknown_disks_on_delete = true
    disk {
        size = 1
        iops = 500
    controller_type = "scsi"
    name = "one"
    keep_on_remove = true
    }
    disk {
        size = 2
        iops = 500
    controller_type = "scsi"
    name = "two"
    keep_on_remove = false
    }
    disk {
        size = 3
        iops = 500
    controller_type = "scsi"
    name = "three"
    keep_on_remove = true
    }
}
2016/09/26 18:11:44 [DEBUG] template config=
resource "vsphere_virtual_machine" "detach_unkown_disks" {
    name = "terraform-test"

    datacenter = "FTC"
    resource_pool = "Cluster1"

    vcpu = 2
    memory = 1024
    network_interface {
        label = "VM Private"
        ipv4_address = ""
        ipv4_prefix_length = 24
        ipv4_gateway = ""
    }
     disk {
        datastore = "datastore1"

        template = "DansTfTest/danTestTemplate"
        iops = 500
    }

    detach_unknown_disks_on_delete = true
    disk {
        size = 1
        iops = 500
    controller_type = "scsi"
    name = "one"
    keep_on_remove = true
    }
    disk {
        size = 2
        iops = 500
    controller_type = "scsi"
    name = "two"
    keep_on_remove = false
    }
    disk {
        size = 3
        iops = 500
    controller_type = "scsi"
    name = "three"
    keep_on_remove = true
    }
}
--- PASS: TestAccVSphereVirtualMachine_DetachUnknownDisks (220.69s)
PASS
ok      github.com/hashicorp/terraform/builtin/providers/vsphere    220.724s

@dagnello dagnello changed the title [WIP] vSphere: Adding 'detach_unknown_disks_on_delete' flag for VM resource vSphere: Adding 'detach_unknown_disks_on_delete' flag for VM resource Sep 27, 2016
stack72 (Contributor) commented Sep 27, 2016

Thanks for adding the test @dagnello :) This LGTM!

dagnello (Contributor, Author)

@stack72 thank you! Do you know when the next release is scheduled for?

stack72 (Contributor) commented Sep 27, 2016

Hi @dagnello

No official date yet - I'd say a week or so, depending on what we have to get merged.

P.

dagnello (Contributor, Author)

Hello @stack72, sounds good. Thanks

Davide

@ghost ghost locked and limited conversation to collaborators Apr 22, 2020