Add Snapshot Code for Azure Managed Disks #117
@@ -114,4 +114,56 @@ def vm_reboot_guest(vm, _options = {})
  rescue => err
    _log.error "vm=[#{vm.name}], error: #{err}"
  end

  def vm_create_evm_snapshot(vm, options = {})
    conf     = connect(options)
    vm_svc   = vm.provider_service(conf)
    snap_svc = snapshot_service(conf)
    vm_obj   = vm_svc.get(vm.name, vm.resource_group)
    return unless vm_obj.managed_disk?
    os_disk = vm_obj.properties.storage_profile.os_disk
    snap_options = { :location   => vm.location,
                     :properties => {
                       :creationData => {
                         :createOption     => "Copy",
                         :sourceResourceId => os_disk.managed_disk.id
                       }
                     } }
    snap_name = os_disk.name + "__EVM__SSA__SNAPSHOT"
    _log.debug("vm=[#{vm.name}] creating SSA snapshot #{snap_name}")
    begin
      snap_svc.get(snap_name, vm.resource_group)
    rescue ::Azure::Armrest::NotFoundException, ::Azure::Armrest::ResourceNotFoundException => err
      begin
        snap_svc.create(snap_name, vm.resource_group, snap_options)
        return snap_name
      rescue => err
        _log.error("vm=[#{vm.name}], error: #{err}")
        _log.debug { err.backtrace.join("\n") }
        raise "Error #{err} creating SSA Snapshot #{snap_name}"
      end
    end
    _log.error("SSA Snapshot #{snap_name} already exists.")
    raise "Snapshot #{snap_name} already exists. Another SSA request for this VM is in progress or a previous one failed to clean up properly."
  end

Review thread on the "snapshot already exists" raise:

@blomquisg: Who calls this method? I'm seeing the possibility of a caller having to react to two different error conditions in possibly two different ways. One error condition is when it can't create the snapshot because of an error (probably in Azure). The other is when an SSA request is already being processed for this VM. Maybe the "snapshot already exists" situation could be a specific error type? Then the caller could rescue that error type in its own block with a special handler, and all other errors could be assumed to be an actual failure to process the request by the provider. I could also see the "snapshot already exists" situation being a non-error that just logs a warning and returns. Thoughts?

@jerryk55: @blomquisg It is called by VmScan.create_snapshot. We have similar logic for the Scvmm code as well (although some of it is embedded in the manageiq-smartstate repo after the separation of code into different repos). It really doesn't matter to VmScan whether we can't perform the snapshot because of a provider error or because the snapshot already exists - the SSA request needs to fail in either case. The "snapshot already exists" situation cannot be a non-error: if there really is another request to run SSA on the same instance/image that we somehow missed (which should not happen), a problem would arise when the first request finished and deleted the snapshot. Since these requests are stateless and the snapshot disk isn't actually mounted or opened, the second request would then run into an unexplained error. Let me know what you think. @roliveri, feel free to chime in if you feel otherwise.

@roliveri: Historically, […]. Maybe there should be a general […].

@jerryk55: @roliveri you mean as a follow-on task for all the providers needing it, right?

@roliveri: Yes. If a provider needs a general create snapshot facility, then it makes sense to structure the code that way. As it stands now, […]
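For illustration only, a minimal sketch of the specific-error-type idea raised in this thread. Nothing below is part of the PR; the exception class name, the ems receiver, and the caller-side handling are assumptions.

# Sketch only (not in this PR): a dedicated error type for the
# "snapshot already exists" case. The class name is hypothetical.
class MiqVmSnapshotExistsError < RuntimeError; end

# vm_create_evm_snapshot would then raise it instead of a plain RuntimeError:
#   raise MiqVmSnapshotExistsError, "Snapshot #{snap_name} already exists."

# A caller (e.g. VmScan.create_snapshot, per the thread above) could then
# separate the two failure modes:
begin
  snap_name = ems.vm_create_evm_snapshot(vm)
rescue MiqVmSnapshotExistsError => err
  # Another SSA request appears to be in progress, or a stale snapshot was
  # left behind; warn and handle specially rather than treating it as a
  # provider fault.
  _log.warn(err.message)
  raise
rescue => err
  # Any other error is an actual provider-side failure to create the snapshot.
  _log.error("snapshot creation failed: #{err}")
  raise
end

As merged in this commit, the method raises a plain RuntimeError string for both situations, which VmScan treats as a single failure, per the author's reply above.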
  def vm_delete_evm_snapshot(vm, options = {})
    conf     = connect(options)
    vm_svc   = vm.provider_service(conf)
    snap_svc = snapshot_service(conf)
    vm_obj   = vm_svc.get(vm.name, vm.resource_group)
    os_disk  = vm_obj.properties.storage_profile.os_disk
    snap_name = os_disk.name + "__EVM__SSA__SNAPSHOT"
    _log.debug("vm=[#{vm.name}] deleting SSA snapshot #{snap_name}")
    snap_svc.delete(snap_name, vm.resource_group)
  rescue => err
    _log.error("vm=[#{vm.name}], error: #{err}")
    _log.debug { err.backtrace.join("\n") }
  end
  def snapshot_service(connection = nil)
    _log.debug("Enter")
    connection ||= connect
    ::Azure::Armrest::Storage::SnapshotService.new(connection)
  end
end
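Taken together, the two new methods bracket a scan: create the snapshot, run the scan, then delete the snapshot. A rough sketch of that lifecycle follows; the ems receiver and the scan call are placeholders, not code from this PR.

# Rough lifecycle sketch (placeholder receivers, not PR code):
snap_name = ems.vm_create_evm_snapshot(vm)  # raises if the snapshot cannot be created
begin
  run_smartstate_scan(vm)                   # placeholder for the actual SSA scan step
ensure
  ems.vm_delete_evm_snapshot(vm)            # delete only logs errors, so cleanup is best-effort
end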
@@ -1,4 +1,9 @@
module ManageIQ::Providers::Azure::CloudManager::VmOrTemplateShared
  extend ActiveSupport::Concern
  include_concern 'Scanning'

  def provider_service(connection = nil)
    connection ||= ext_management_system.connect
    ::Azure::Armrest::VirtualMachineService.new(connection)
  end
end
@@ -14,12 +14,12 @@ def perform_metadata_scan(ost)
     require 'MiqVm/miq_azure_vm'

     vm_args = { :name => name }
-    _log.debug "name: #{name} (template = #{template})"
+    _log.debug("name: #{name} (template = #{template})")
     if template
-      _log.debug "image_uri: #{uid_ems}"
+      _log.debug("image_uri: #{uid_ems}")
       vm_args[:image_uri] = uid_ems
     else
-      _log.debug "resource_group: #{resource_group}"
+      _log.debug("resource_group: #{resource_group}")
       vm_args[:resource_group] = resource_group
     end
@@ -56,4 +56,18 @@ def has_proxy?
  def requires_storage_for_scan?
    false
  end

  def require_snapshot_for_scan?
    begin
      vm_obj = provider_service.get(name, resource_group)
      if vm_obj.managed_disk?
        _log.debug("VM #{name} has a Managed Disk - Snapshot required for Scan")
        return true
      end
    rescue => err
      _log.error("Error Class=#{err.class.name}, Message=#{err.message}")
    end
    _log.debug("VM #{name} does not have a Managed Disk - no Snapshot required for Scan")
    false
  end
end

Review thread on require_snapshot_for_scan?:

@roliveri: @jerryk55 So, when you add snapshots for non-managed disks, this method will always return true?

@jerryk55: @roliveri no, when you check to see if you need to add a snapshot for non-managed disks, it will always return false, won't it?

@roliveri: @jerryk55 I thought you said that they suggested that both managed and non-managed disks always be snapshotted.

@jerryk55: Oh yes, I misunderstood your original question. Yes - when I add the other code for non-managed disk snapshots, this method will simply return true all the time.
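Per the exchange above, once snapshots are also created for non-managed disks, the author expects this predicate to become unconditional. A sketch of that follow-up state, not part of this PR:

# Sketch of the follow-up described in the thread above (not in this PR):
# once non-managed disks are also snapshotted for SSA, the check becomes
# unconditional.
def require_snapshot_for_scan?
  true
end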
Additional review comments:

@Fryguy: Prefer interpolation over concatenation.

@jerryk55: @Fryguy understood. Updated.
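For reference, the interpolated form of the snapshot name, which is presumably what "Updated" refers to; the follow-up commit itself is not shown here.

# Interpolation rather than concatenation, per the review comment:
snap_name = "#{os_disk.name}__EVM__SSA__SNAPSHOT"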