
Support for stopping a VM before destroying #4920

Closed
gcormier opened this issue Nov 19, 2019 · 5 comments

Comments

@gcormier

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

It would be beneficial to stop VMs before destroying them. The use case I'm running into is high-performance computing, where we spin up fairly expensive nodes.

First, Azure VMs incur costs while they are running: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/states-lifecycle
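For context, it's deallocating a VM - not just powering it off from inside the OS - that stops the compute meter. With the az CLI (using the resource group and VM names from below) that's:

az vm deallocate -g HPC-FVCOM-RG -n hpc-fvcom-vm1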

Using this information, here is what I've found.

First, we have two VMs running.

admin@Azure:~$ az vm list -d -g HPC-FVCOM-RG --query '[].[name,powerState,provisioningState]'
[
  [
    "hpc-fvcom-vm1",
    "VM running",
    "Succeeded"
  ],
  [
    "hpc-fvcom-vm2",
    "VM running",
    "Succeeded"
  ]
]

Second, we issue a terraform destroy command.

null_resource.prep_ansible: Destroying... [id=42716662409210713]
null_resource.prep_ansible: Destruction complete after 0s
azurerm_virtual_machine.vm[1]: Destroying... [id=/subscriptions/abc-123456/resourceGroups/HPC-FVCOM-RG/providers/Microsoft.Compute/virtualMachines/hpc-fvcom-vm2]
azurerm_virtual_machine.vm[0]: Destroying... [id=/subscriptions/abc-123456/resourceGroups/HPC-FVCOM-RG/providers/Microsoft.Compute/virtualMachines/hpc-fvcom-vm1]
azurerm_virtual_machine.vm[1]: Still destroying... [id=/subscriptions/abc-123456....Compute/virtualMachines/hpc-fvcom-vm2, 10s elapsed]
azurerm_virtual_machine.vm[0]: Still destroying... [id=/subscriptions/abc-123456....Compute/virtualMachines/hpc-fvcom-vm1, 10s elapsed]
admin@Azure:~$ az vm list -d -g HPC-FVCOM-RG --query '[].[name,powerState,provisioningState]'
[
  [
    "hpc-fvcom-vm1",
    "VM running",
    "Deleting"
  ],
  [
    "hpc-fvcom-vm2",
    "",
    "Deleting"
  ]
]

Two things to note. First, there's no powerState value at all for the second VM. That's fun! Second, for the first VM, while the provisioningState is "Deleting", the powerState is still "VM running", which means we're still incurring costs.

As Azure can take 20-30 minutes to destroy an instance, this means we're just burning money.

  • Note: I've verified the same oddity via the AzureRM PowerShell module, so I will open an issue with Microsoft/AzureRM for this:
PS Azure:\> Get-AzureRmVM -Status

ResourceGroupName          Name Location          VmSize OsType            NIC Provisioning Zone         PowerState MaintenanceAllowed
-----------------          ---- --------          ------ ------            --- ------------ ----         ---------- ------------------
HPC-FVCOM-RG      hpc-fvcom-vm1   eastus Standard_Hc44rs  Linux hpc-fvcom-nic1     Deleting                 running
HPC-FVCOM-RG      hpc-fvcom-vm2   eastus Standard_Hc44rs  Linux hpc-fvcom-nic2     Deleting      Info Not Available
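In the meantime, a possible workaround is a destroy-time provisioner that deallocates each VM before Terraform deletes it. A minimal sketch, assuming the az CLI is installed and authenticated on the machine running Terraform (the null_resource name here is illustrative):

resource "null_resource" "deallocate_before_destroy" {
  count = length(azurerm_virtual_machine.vm)

  # Stash the values the destroy-time provisioner needs, since destroy
  # provisioners should only reference self, count.index and each.key.
  triggers = {
    resource_group = "HPC-FVCOM-RG"
    vm_name        = azurerm_virtual_machine.vm[count.index].name
  }

  provisioner "local-exec" {
    when    = destroy
    command = "az vm deallocate -g ${self.triggers.resource_group} -n ${self.triggers.vm_name}"
  }
}

Because the null_resource depends on the VM, Terraform destroys it first during a terraform destroy, so the deallocate runs before the VM deletion begins.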
@tombuildsstuff
Contributor

hey @gcormier

Thanks for opening this issue.

As outlined in #2807 we're introducing replacements for the existing azurerm_virtual_machine and azurerm_virtual_machine_scale_set resources in the next major version of the Azure Provider (2.0) - which we're working on at the moment.

So far we've added support for the new VM Scale Set resources (although they're feature-toggled off at this time) - notably, they use this approach of shutting down the VM Scale Set instances prior to deletion. Shortly we'll be starting work on the replacement Virtual Machine resource, which will use the same approach - which will solve this issue. However, since we're superseding these resources in 2.0, we're not planning to backport this behaviour to the existing azurerm_virtual_machine and azurerm_virtual_machine_scale_set resources.

The replacement resources will be available in an opt-in Beta state in an upcoming 1.x release of the Azure Provider - and will become Generally Available in 2.0. As such I'm going to assign this to the 2.0 milestone for the moment - but I'd suggest subscribing to #2807 for updates, where we'll post information on how to try the Beta in the near future.

Thanks!

@tombuildsstuff tombuildsstuff self-assigned this Nov 20, 2019
@tombuildsstuff
Contributor

hi @gcormier

We're currently working on version 2.0 of the Azure Provider which we previously announced in #2807.

As a part of this we're introducing five new resources which will supersede the existing azurerm_virtual_machine and azurerm_virtual_machine_scale_set resources:

  • azurerm_linux_virtual_machine
  • azurerm_linux_virtual_machine_scale_set
  • azurerm_virtual_machine_scale_set_extension
  • azurerm_windows_virtual_machine
  • azurerm_windows_virtual_machine_scale_set

We recently opened #5550, which adds support for the new Virtual Machine resources - and I can confirm that shutting the VM down before deletion is supported there - however, unfortunately, we have no plans to backport this to the existing azurerm_virtual_machine resource.
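For anyone curious, a minimal sketch of what the new Linux resource looks like (the network interface, SSH key and image values here are illustrative, not a definitive configuration):

resource "azurerm_linux_virtual_machine" "example" {
  name                  = "hpc-fvcom-vm1"
  resource_group_name   = "HPC-FVCOM-RG"
  location              = "eastus"
  size                  = "Standard_HC44rs"
  admin_username        = "adminuser"
  network_interface_ids = [azurerm_network_interface.example.id]

  admin_ssh_key {
    username   = "adminuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }
}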

In order to get feedback on these new resources, we'll be launching them as an opt-in Beta in an upcoming 1.x release of the Azure Provider, and ultimately releasing them as "GA" in the upcoming 2.0 release. We'll post an update in #2807 when both the opt-in Beta (1.x) and GA (2.0) are available - as such I'd recommend subscribing to that issue for updates.

This issue's been assigned to the "2.0" milestone, since that's where this will ship - however (due to the way that closing GitHub issues from PRs works, and so that future users can trace this back) this issue will be closed once the first of the new resources has been merged.

Thanks!

@tombuildsstuff
Contributor

hey @gcormier

As mentioned above, this is supported in the new azurerm_linux_virtual_machine and azurerm_windows_virtual_machine resources, which are available in version 1.43 of the Azure Provider by opting into the Beta.

Since support for this is now available via the opt-in Beta I'm going to close this issue for the moment - but these new resources will be going GA in version 2.0 of the Azure Provider in the coming weeks - we'll post an update in #2807 when that's available.
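If you want to try it, the Beta is gated behind an environment variable - the name below is from memory and may be wrong, so treat it as an assumption and check the Beta opt-in guide linked from #2807:

export ARM_PROVIDER_TWOPOINTZERO_RESOURCES=true  # assumed variable name - see the Beta guide
terraform init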

Thanks!

@ghost

ghost commented Feb 24, 2020

This has been released in version 2.0.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 2.0.0"
}
# ... other configuration ...
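After updating the constraint, run the following to pull down the new provider build (note that 2.0 contains breaking changes, so read the provider's 2.0 upgrade guide first):

terraform init -upgrade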

@ghost

ghost commented Mar 6, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 6, 2020