[1LP][RFR] Fixing the TC test_cancel_migration_attachments #9900
Conversation
Looks good, Nadeem. Instead of time.sleep we can use the wait_for_in_progress method.
@@ -174,32 +174,27 @@ def _cleanup():
    migration_plan.wait_for_state("Started")
    request_details_list = migration_plan.get_plan_vm_list(wait_for_migration=False)
    vm_detail = request_details_list.read()[0]
    time.sleep(360)  # grace time for starting disk migration
Instead of this, you can check whether the migration plan is in progress with
migration_plan.wait_for_state("In_Progress") and then cancel.
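A minimal sketch of that suggestion, assuming the plan API shown in the diff above; only wait_for_state and get_plan_vm_list appear in the hunk, so the cancel call is written as a hypothetical cancel_migration() helper standing in for whatever cancel step the test actually uses:

```python
# Sketch: wait for a real state transition instead of a fixed sleep.
migration_plan.wait_for_state("Started")
request_details_list = migration_plan.get_plan_vm_list(wait_for_migration=False)
vm_detail = request_details_list.read()[0]

# Block until the plan actually reports an in-progress migration rather than
# sleeping 360 seconds, then cancel while the disk copy is still running.
migration_plan.wait_for_state("In_Progress")
migration_plan.cancel_migration()  # hypothetical name for the test's cancel step
```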
Signed-off-by: mnadeem92 <[email protected]>
Force-pushed from 5f22985 to 342aa5f
Thanks for the PR 👍 LGTM,
but I'd like to understand the test's purpose, especially the checks.
# Test1: Check if instance is on openstack/rhevm provider
soft_assert(not provider.mgmt.find_vms(name=vm_obj.name))
soft_assert(not vm_on_dest)
I'm suspicious about this check.
Why soft_assert?
Are we expecting the VM on the OpenStack/RHEV
provider? If yes, why the negation in the assertion?
No, we are not expecting the VM at the destination.
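To spell out what the negated checks express, an annotated reading of the two lines from the diff above (soft_assert here is cfme's soft-assertion fixture, which records a failure without aborting the test):

```python
# find_vms() returns the VMs matching the name on the destination provider;
# an empty result is falsy, so `not provider.mgmt.find_vms(...)` asserts the
# VM is absent, i.e. the migration was cancelled before completing.
soft_assert(not provider.mgmt.find_vms(name=vm_obj.name))

# Same idea for the pre-computed vm_on_dest flag: it must be falsy, meaning
# no VM was found at the destination. soft_assert lets both checks run even
# if the first one fails.
soft_assert(not vm_on_dest)
```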
The TC fails if the VM gets migrated to the destination. However, for OSP, ports and volumes get attached to the VM once it is migrated, so for OSP we need an additional check for ports and volumes in case the VM does reach the destination.
The test case was failing because the cancel never ran, due to the check we had for the disk migration reaching 20%. I noticed that the progress bar never shows a real-time value: once the full disk is migrated, it jumps straight to 100%, so by the time the cancel was triggered the VM had already migrated 100% to the destination. This PR no longer looks for 20%; instead it gives a grace time of 6 minutes for the migration to proceed and then cancels it.
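A condensed sketch of the flow this PR describes, assuming hypothetical names for the cancel helper and the OSP attachment checks; only the calls shown in the diff hunks above are taken from the actual test, and the OpenStackProvider import path is an assumption:

```python
import time

from cfme.cloud.provider.openstack import OpenStackProvider  # assumed path

# Wait for the plan to start and grab the first VM's request details.
migration_plan.wait_for_state("Started")
request_details_list = migration_plan.get_plan_vm_list(wait_for_migration=False)
vm_detail = request_details_list.read()[0]

# Grace period instead of the old "disk at 20%" check: the progress bar does
# not show real-time values and jumps to 100% once the disk finishes, so give
# the disk migration 6 minutes to get underway, then cancel.
time.sleep(360)
cancel_plan(migration_plan)  # hypothetical cancel helper

# The TC fails if the VM reached the destination.
soft_assert(not provider.mgmt.find_vms(name=vm_obj.name))

# OSP only: ports and volumes get attached to the VM once a migration
# completes, so also verify nothing was left attached (hypothetical helpers).
if provider.one_of(OpenStackProvider):
    soft_assert(not get_attached_ports(vm_obj))     # hypothetical
    soft_assert(not get_attached_volumes(vm_obj))   # hypothetical
```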
{{ pytest: cfme/tests/v2v/test_v2v_cancel_migrations.py -k "test_cancel_migration_attachments" --use-provider osp13-ims --use-provider vsphere67-ims --provider-limit 2 -v }}