rhv: removed the option to migrate the VMs outside of the cluster. #207
Conversation
OK, so..
But.. if none of the providers has a supports_migrate_for_all? method, the check is skipped entirely. So.. sounds like, surprisingly, this lets you migrate the same VM on a provider that's less capable, but not on one that is more. Are you sure that's the correct behaviour? Or is this missing some extra condition?
Apart from that, it also feels weird that you generate a flash that it can't be done.. and then do it anyway.
Added
@himdel the way I understand this, the implementation of supports_migrate_for_all? is provider-specific: for RHV it checks that all the selected VMs are in the same cluster, while providers that don't implement the method at all don't restrict the migration and simply fall through the respond_to? guard. Hope that made sense :-)
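A minimal plain-Ruby sketch of that reading - RhvLikeEms, OtherEms and cluster_id are invented stand-ins, and only the respond_to? guard mirrors the check under discussion:

# Illustration only, not ManageIQ code.
Vm = Struct.new(:name, :cluster_id)

class RhvLikeEms
  # Allows migrating a batch of VMs together only if they share a cluster,
  # which is what the thread says the RHV implementation checks.
  def supports_migrate_for_all?(vms)
    vms.map(&:cluster_id).uniq.size == 1
  end
end

class OtherEms
  # Does not implement supports_migrate_for_all? at all.
end

def migration_blocked?(ems, vms)
  # Shape of the check in the diff: a provider that does not respond to the
  # method is never asked, so it can never block the migration.
  ems.respond_to?(:supports_migrate_for_all?) && !ems.supports_migrate_for_all?(vms)
end

vms = [Vm.new("vm1", 1), Vm.new("vm2", 2)]
puts migration_blocked?(RhvLikeEms.new, vms) # => true  (clusters differ)
puts migration_blocked?(OtherEms.new, vms)   # => false (check skipped)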
@matobet right, makes complete sense, thanks! :) In that case, can you add a small comment explaining that please? But a question still remains: what happens after you've created that flash message? (I mean, I would have kinda expected to see something like a return right after it.)
@himdel not sure how the flash infrastructure works, but with the current code just setting the flash seems to be enough to stop the operation.
Alright, I'll assume there's an "if flash array not empty, do nothing" logic somewhere. :)
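For the record, a hypothetical sketch of the kind of guard being assumed here; none of these names are taken from the actual ManageIQ controllers, and add_flash is only mimicked:

# Hypothetical illustration only.
class FakeController
  def initialize
    @flash_array = []
  end

  def add_flash(msg, _severity = :error)
    @flash_array << msg
  end

  def check_migration(vms_ok)
    add_flash("These VMs can not be migrated together.", :error) unless vms_ok
  end

  def migrate(vms_ok)
    check_migration(vms_ok)
    return "aborted" if @flash_array.any? # "if flash array not empty, do nothing"
    "migrated"
  end
end

puts FakeController.new.migrate(false) # => aborted
puts FakeController.new.migrate(true)  # => migrated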
@miq-bot add_label euwe/yes
@@ -1790,6 +1790,15 @@ def task_supported?(typ)
end

vms = VmOrTemplate.where(:id => vm_ids)
if typ == "migrate"
  if vms.map(&:ext_management_system).uniq.compact.any? do |ems|
    ems.respond_to?(:supports_migrate_for_all?) && !ems.supports_migrate_for_all?(vms)
ok, now I see this logic could be improved a bit as well. We are passing all selected vms to each individual ems to check whether they can be migrated together, but IMO each ems should only have a say about its own subset of vms. Of course this accidentally works, since the RHV implementation checks for the same cluster id, which will never hold across providers, but I don't believe that is the robustness we want :-) Will post a fix.
if vms.group_by(&:ext_management_system).except(nil).any? do |ems, ems_vms|
  ems.respond_to?(:supports_migrate_for_all?) && !ems.supports_migrate_for_all?(ems_vms)
end
add_flash(_("This VMs can not be migrated together."), :error) |
"This" → "These" :)
heheh, the horrors it would see.. :)
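To make the difference between the two versions of the check concrete, a small plain-Ruby illustration - FakeVm and the provider strings are made up; Hash#except is built into Ruby 3.0+ here, and presumably comes from ActiveSupport in the real code:

# Illustration only, not ManageIQ code.
FakeVm = Struct.new(:name, :ems)

vms = [
  FakeVm.new("v1", "rhv"),
  FakeVm.new("v2", "rhv"),
  FakeVm.new("v3", "vmware"),
  FakeVm.new("v4", nil), # e.g. an archived VM with no provider
]

# Old approach: every provider is asked about *all* selected VMs.
vms.map(&:ems).uniq.compact.each do |ems|
  puts "#{ems} asked about #{vms.map(&:name).inspect}"
end
# rhv asked about ["v1", "v2", "v3", "v4"]
# vmware asked about ["v1", "v2", "v3", "v4"]

# New approach: each provider is asked only about its own subset of VMs,
# and VMs without a provider are dropped by .except(nil).
vms.group_by(&:ems).except(nil).each do |ems, ems_vms|
  puts "#{ems} asked about #{ems_vms.map(&:name).inspect}"
end
# rhv asked about ["v1", "v2"]
# vmware asked about ["v3"]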
The support for cross-cluster migrations was added to oVirt only as a workaround for el6->el7 migrations and should not be exposed. Since the current code in manageiq did not work properly anyway, this removes the support for it completely - it is low-level functionality, it is obsolete, and its use is discouraged.
Links
https://bugzilla.redhat.com/show_bug.cgi?id=1398287
LGTM, merging when green :)
Checked commit matobet@926812b with ruby 2.2.6, rubocop 0.47.1, and haml-lint 0.20.0
Euwe backport (to manageiq repo) details:
The support for cross-cluster migrations was added to oVirt only as a workaround for el6->el7 migrations and should not be exposed. Since the current code in manageiq did not work properly anyway, this removes the support for it completely - it is low-level functionality, it is obsolete, and its use is discouraged.
Links
https://bugzilla.redhat.com/show_bug.cgi?id=1398287