Link volumes to corresponding VM disks #125

Merged: gberginc merged 1 commit into ManageIQ:master from link_volumes_to_disks on Feb 3, 2017

Conversation

@gberginc (Contributor) commented Jan 30, 2017

This patch currently works only for the legacy refresh parser. It looks for the instance (VM) referenced in the attachment section of the volume data and creates a backing link between the volume and the instance's disk. Linking is done similarly to the OpenStack (Cinder) provider.
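In outline, the linking step looks something like the sketch below. This is a simplified illustration assembled from the diff fragments quoted later in this thread, not the merged code: `add_disk` is a hypothetical stand-in for however the parser records the resulting hash, and key names may differ slightly.

```ruby
# Simplified sketch of the cross-linking step; not the PR's exact code.
def link_volume_to_disk(volume_hash, attachments)
  (attachments || []).each do |a|
    vm = @ems.parent_manager.vms.find_by(:ems_ref => a['instance_id'])
    next if vm.nil?

    # AWS reports the device as e.g. "/dev/sdf"; disks are located by
    # its basename ("sdf"), mirroring the OpenStack (Cinder) provider.
    dev   = File.basename(a['device'])
    disks = vm.hardware.disks

    disk_hash = {
      :location       => dev,
      :size           => volume_hash[:size],
      :backing_volume => volume_hash
    }
    if (disk = disks.detect { |d| d.location == dev })
      disk_hash[:id] = disk.id            # update the existing disk entry
    else
      disk_hash[:hardware] = vm.hardware  # create a new disk on this hardware
    end
    add_disk(disk_hash)                   # hypothetical: record the disk hash
  end
end
```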

The inventory object variant doesn't work yet, as I can't figure out how to properly get the hardware and disks from the corresponding :vm inventory object.

Specs will also fail because I am using a local VCR cassette recorded with our own account. They are nevertheless provided for inspection.

@miq-bot add_label wip, enhancement
@miq-bot assign @Ladas

Depends on: #122. The spec will be updated with the data from the VCR cassette produced by that PR.

@gberginc (Author) left a comment

@Ladas I would appreciate any comments regarding the inventory object, as I just can't find the solution myself.

```diff
@@ -41,6 +41,10 @@ def parse_volume(volume)
   :base_snapshot => inventory_collections[:cloud_volume_snapshots].lazy_find(volume['snapshot_id']),
   :availability_zone => inventory_collections[:availability_zones].lazy_find(volume['availability_zone'])
 }

+link_volume_to_disk(volume_hash, volume['attachments'])
```
@gberginc (Author):

I am not sure whether the linking should happen here or somewhere else (for example, should I only store the attachment here and implement the linking elsewhere?).

@Ladas (Contributor):

Right, so looking at OpenStack and the current implementation, it seems that ephemeral disks are filled in on the CloudManager side and volumes are added to them on the StorageManager side.

We could probably set it all from the CloudManager side. Or, I am not sure whether AWS even supports ephemeral disks (i.e. local VM disks); if not, we could move it all to the storage manager.

@gberginc (Author):

Because the storage manager is not aware of the ephemeral disks, I kept this in the cloud manager (basically, ephemeral disks are only created because the flavor itself specifies how many local disks there are).
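To illustrate the point, a hedged sketch of flavor-driven ephemeral disks; `vm.flavor`, the `ephemeral_disk_*` attribute names, and `hardware_hash` are assumptions for illustration, not code from this PR:

```ruby
# Ephemeral (instance-store) disks exist only because the flavor
# declares them, so the cloud manager can create their disk entries
# without any storage-manager data. Names below are assumptions.
flavor = vm.flavor
(0...flavor.ephemeral_disk_count.to_i).each do |i|
  hardware_hash[:disks] << {
    :device_name => "Disk #{i}",
    :device_type => "disk",
    :location    => "ephemeral#{i}",
    :size        => flavor.ephemeral_disk_size
  }
end
```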

```ruby
next
end

hardware = vm.hardware
```
@gberginc (Author):

I get something for the VM, but definitely not the correct thing. Not sure how to get there. You can see in the refresh_parser what's needed: https://github.com/ManageIQ/manageiq-providers-amazon/pull/125/files#diff-9ed63d88e4d31be7b6b921a61efec9fdR100

@Ladas (Contributor):

This would do a query per disk. You can instead load the Hardware InventoryCollection; hardware uses the same id, so then you can just do:

```ruby
hardware = inventory_collections[:hardwares].find(a['instance_id'])
```
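For completeness, a sketch of how such a collection might be declared, following the init_data pattern that appears later in this PR; the real definition is not part of this diff and its exact shape is an assumption:

```ruby
# Assumed shape of a hardwares collection helper. Hardware shares its
# id with the VM, so the instance's ems_ref indexes straight into the
# collection, turning one SQL query per disk into an in-memory lookup.
def hardwares_init_data(extra_attributes = {})
  attributes = {
    :association => :hardwares,
    :manager_ref => [:vm_or_template]
  }
  init_data(::Hardware, attributes, extra_attributes)
end
```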

```diff
@@ -860,4 +862,51 @@ def assert_specific_vm_in_other_region
 def assert_relationship_tree
   expect(@ems.descendants_arranged).to match_relationship_tree({})
 end

+def assert_specific_snapshot
```
@gberginc (Author):

These specs here are just used to demonstrate what I plan to test once everything works. The legacy parser passes both specs.
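The shape of the planned assertion, sketched below; the ems_ref and expected values are placeholders, not data from the final VCR cassette:

```ruby
# Placeholder spec sketch: verify an attached volume is cross-linked to
# a VM disk through the backing association. Identifiers are invented.
def assert_specific_volume_backing
  volume = CloudVolume.find_by(:ems_ref => "vol-0123456789abcdef0")
  disk   = volume.attachments.first

  expect(disk).not_to be_nil
  expect(disk.location).to eq("sdf")
  expect(disk.size).to eq(volume.size)
end
```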


```ruby
hardware = vm.hardware
disks = hardware.disks
unless disks
```
@Ladas (Contributor):

This is a strange condition; I am not sure what it means for OpenStack, but if I have only volumes, disks are always empty.

@gberginc (Author):

I think we can keep this now because we are storing disks for attached volumes in the cloud manager as well.


```ruby
hardware = vm.hardware
disks = hardware.disks
unless disks
```
@Ladas (Contributor) commented Jan 31, 2017:

I have disks always empty when I have only volumes on a VM, so I think this whole code block should not be needed.

```ruby
  disk_hash[:id] = disk.id
else
  # New disk.
  disk_hash[:hardware_id] = hardware.id
```
@Ladas (Contributor):

Change this to:

```ruby
disk_hash[:hardware] = hardware
```

since the unique key on disk is defined as `[:hardware, :location]`.
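For context, a sketch of how that unique key might be declared on the disks collection, again following the init_data pattern used elsewhere in this PR; the actual definition is not part of this diff:

```ruby
# Assumed shape: manager_ref is the unique key that records are matched
# on when the collection saves, which is why the hash must carry the
# hardware object itself rather than a raw :hardware_id.
def disks_init_data(extra_attributes = {})
  attributes = {
    :association => :disks,
    :manager_ref => [:hardware, :location]
  }
  init_data(::Disk, attributes, extra_attributes)
end
```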

```ruby
  :backing_volume => volume_hash
}

if (disk = disks.detect { |d| d.location == dev })
```
@Ladas (Contributor):

I think this whole branch is not needed; :id is ignored when saving.

```diff
@@ -309,4 +309,12 @@ def cloud_object_store_objects_init_data(extra_attributes = {})

   init_data(::CloudObjectStoreObject, attributes, extra_attributes)
 end

+def backing_links_init_data(extra_attributes = {})
```
@Ladas (Contributor):

I think we don't need a special definition here; just use disks_init_data.
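That is, drop the new helper and register the existing disks collection at the call site. The registration call shown here is a placeholder; the real call shape is not visible in this thread:

```ruby
# Hypothetical call site: reuse the existing helper rather than the
# new backing_links_init_data defined in this hunk.
add_inventory_collection(disks_init_data)
```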

@miq-bot (Member) commented Jan 31, 2017

This pull request is not mergeable. Please rebase and repush.

@gberginc force-pushed the link_volumes_to_disks branch from d18cc6e to fee2210 (February 2, 2017)
@gberginc force-pushed the link_volumes_to_disks branch 3 times, most recently from b810332 to b757667 (February 2, 2017)
@gberginc (Author) left a comment

@Ladas my local test for stubbed AWS went through. I updated this PR to see if there are any other failing tests; hope not :-).



@gberginc changed the title from "[WIP] Link volumes to corresponding VM disks" to "Link volumes to corresponding VM disks" on Feb 2, 2017
@gberginc (Author) commented Feb 2, 2017

@miq-bot remove_label wip

@Ladas I am removing wip as it looks like we now have both parsers working. Thanks for your great support!

@miq-bot removed the wip label on Feb 2, 2017

```ruby
dev = File.basename(a['device'])

vm = @ems.parent_manager.vms.detect { |v| v.ems_ref == a['instance_id'] }
```
@Ladas (Contributor):

```ruby
@ems.parent_manager.vms.find_by(:ems_ref => a['instance_id'])
```

This will do one small query to the db; otherwise you are fetching all VMs and then doing an O(n) search in Ruby. :-)

This patch updates the EBS storage manager with the ability to cross-link attached cloud volumes with the corresponding VM disks. Since the cloud manager already creates disk entries based on the block device mappings that are available to the instance, the EBS storage manager just updates the disk with its size and creates a backing link. The legacy refresh parser as well as the new inventory object are extended to support the cross-linking.

This patch also adds stubbed data mocking an attachment for a specific cloud volume and verifies that the EBS storage manager properly updates the existing disk.

Signed-off-by: Gregor Berginc <[email protected]>
@gberginc force-pushed the link_volumes_to_disks branch from b757667 to 9dc9b7d (February 3, 2017)
@miq-bot (Member) commented Feb 3, 2017

Checked commit gberginc@9dc9b7d with ruby 2.2.6, rubocop 0.47.1, and haml-lint 0.20.0
5 files checked, 1 offense detected

spec/models/manageiq/providers/amazon/aws_stubs.rb

  • ❗ - Line 397, Col 40 - Rails/TimeZone - Do not use Time.now without zone. Use one of Time.zone.now, Time.current, Time.now.in_time_zone, Time.now.utc, Time.now.getlocal, Time.now.iso8601, Time.now.jisx0301, Time.now.rfc3339, Time.now.to_i, Time.now.to_f instead.
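For reference, the fix the cop asks for has this shape; the actual expression at line 397 of aws_stubs.rb is not shown in this thread, and `timestamp` is a placeholder:

```ruby
# Rails/TimeZone: prefer a zone-aware timestamp in stub data.
timestamp = Time.now      # flagged: implicit local time zone
timestamp = Time.now.utc  # any of the suggested zone-aware forms clears it
```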

@Ladas (Contributor) commented Feb 3, 2017

@gberginc awesome 👍

@Ladas merged commit 0c5264e into ManageIQ:master on Feb 3, 2017