Introduce virtualization manager #16721
Conversation
@miq-bot add_labels providers/containers, gaprindashvili/no, wip
:foreign_key => :parent_ems_id,
:class_name  => "ManageIQ::Providers::Kubevirt::InfraManager",
:autosave    => true,
:dependent   => :destroy
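For context, the option lines above belong to a has_one association; a minimal sketch of how the full declaration might sit in the new mixin follows. The mixin name and Concern wrapper are assumptions modeled on HasMonitoringManagerMixin; the association name and options come from this PR's diff.

```ruby
# Hedged reconstruction, not the PR's literal code.
module HasVirtualizationManagerMixin
  extend ActiveSupport::Concern

  included do
    has_one :virtualization_manager,
            :foreign_key => :parent_ems_id,
            :class_name  => "ManageIQ::Providers::Kubevirt::InfraManager",
            :autosave    => true,
            :dependent   => :destroy
  end
end
```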
@moolitayer should this be in a provider repo, like the monitoring manager?
https://github.com/manageiq/manageiq-providers-openshift/blob/master/app/models/manageiq/providers/openshift/container_manager.rb#L19-19
@zeari due to the inverse reference from https://github.com/ManageIQ/manageiq-providers-kubevirt/pull/6/files#diff-ff17b05a7c525eec833f0ddb0134a0d9R30, it should reference a common class, if I understand correctly.
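For readers following along, the inverse side in manageiq-providers-kubevirt presumably has roughly this shape; the exact declaration is an assumption, and the class it names is the point under discussion:

```ruby
# Hedged sketch of the inverse association: the child manager points back at
# its parent, so the class named here has to be common enough to cover any
# parent container manager, not a single provider-specific subclass.
belongs_to :parent_manager,
           :foreign_key => :parent_ems_id,
           :class_name  => "ManageIQ::Providers::ContainerManager"
```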
No, only if kubevirt was deployed (or any potential future virtualization manager)
Yes, I don't see a reason not to support that option. Kubevirt can be deployed on top of an existing Kubernetes cluster to expose its virtualization capabilities.
OK, in that case the implementation would have to change according to #16635. You will see that if you add a kubevirt endpoint through the API, the new manager isn't created. I will come see you later today.
@moolitayer I'm aware of that - I noticed that kubevirt endpoints and authentication are left behind once the kubevirt provider is removed, which prevents the kubevirt provider from being added again. Also, I added a comment about it to ManageIQ/manageiq-providers-kubernetes#197:
hence, the
@zeari thanks - I was looking at it closely :-) I'll start adapting this PR and ManageIQ/manageiq-providers-kubernetes#197 based on it.
@masayag how do the virt manager PRs change the dependency between repos?
Force-pushed from 4e3c9bc to cee2de3
@moolitayer that dependency should be defined in the manageiq-providers-kubevirt repository only, right?
app/models/ext_management_system.rb (Outdated)
@@ -433,7 +433,7 @@ def self.ems_physical_infra_discovery_types

   def disable!
     _log.info("Disabling EMS [#{name}] id [#{id}].")
-    update!(:enabled => false)
+    update_attribute("enabled", false)
@zeari @moolitayer I had to change this since without it, when an endpoint is destroyed, the callback to remove the virtualization provider is triggered. At that point there isn't an endpoint for the provider, so the validations invoked by update!(:enabled => false), i.e. hostname_format_valid?, fail since there is no endpoint to fetch the hostname from.
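For background, this hinges on a general Rails distinction; a small illustration, where ems is a placeholder record rather than code from this PR:

```ruby
# update! runs the full validation chain, so a validator that needs an
# endpoint (here, hostname_format_valid?) fails once the endpoint is gone:
ems.update!(:enabled => false)          # raises ActiveRecord::RecordInvalid

# update_attribute skips validations (but still runs callbacks and saves),
# which is what the diff above swaps in:
ems.update_attribute("enabled", false)  # returns true; no validations run
```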
I don't think we need this. It doesn't always look like it in the code, but the containers provider is valid if it has a default endpoint. So when actually using the UI/API you should never end up with that hostname issue.
@zeari thanks for the hint - this change is not required.
@miq-bot remove_label wip
Force-pushed from cee2de3 to 893e6bf
@agrare Could you review, please?
def ensure_virtualization_manager
  if virtualization_manager.nil?
    build_virtualization_manager(:parent_manager => self,
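The snippet above is truncated mid-call; a hedged completion is sketched below. The :name and :zone_id arguments are assumptions modeled on the monitoring manager mixin, not necessarily what this PR passes.

```ruby
# Plausible completion of the mixin method, under the stated assumptions.
def ensure_virtualization_manager
  if virtualization_manager.nil?
    build_virtualization_manager(:parent_manager => self,
                                 :name           => "#{name} Virtualization Manager",
                                 :zone_id        => zone_id)
  end
end
```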
Where is this (build_virtualization_manager) defined? I don't see it in any of the three PRs.
@agrare it's a Rails method, generated by the has_one association; the name was chosen to reflect the nature of that sub-provider, in the same sense that monitoring_manager was chosen for https://github.com/ManageIQ/manageiq/blob/master/app/models/mixins/has_monitoring_manager_mixin.rb#L22.
The purpose of 'virtualization_manager' is to reflect the ability of a container provider to serve virtual machines via that secondary provider.
If this doesn't make sense to you, I'll modify it.
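To make the "Rails method" point concrete, here is a standalone illustration with minimal assumed models (not project code): has_one :foo defines foo, foo=, build_foo, and create_foo on the owning model.

```ruby
# Minimal models for illustration only.
class InfraManager < ApplicationRecord; end

class ContainerManager < ApplicationRecord
  has_one :virtualization_manager,
          :class_name  => "InfraManager",
          :foreign_key => :parent_ems_id
end

manager = ContainerManager.new
manager.build_virtualization_manager # instantiates an InfraManager, unsaved
```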
Right, but in that case there actually is a manager type called MonitoringManager (https://github.com/ManageIQ/manageiq/blob/master/app/models/manageiq/providers/monitoring_manager.rb). This should match the type of manager that is being added.
Should be
The association and mixin should all match the name of the manager that is being created
@@ -46,6 +47,11 @@ class ContainerManager < BaseManager
   has_many :all_container_images, :foreign_key => :ems_id, :dependent => :destroy, :class_name => "ContainerImage"
   has_many :all_container_nodes, :foreign_key => :ems_id, :dependent => :destroy, :class_name => "ContainerNode"

+  has_one :virtualization_manager,
Should be has_one :infra_manager
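A sketch of that suggestion: the association name tracks the actual manager type being created (an InfraManager), mirroring how monitoring_manager matches MonitoringManager. The options are copied from this PR's hunk above; the combined form is illustrative.

```ruby
# Suggested rename, keeping the options from the PR.
has_one :infra_manager,
        :foreign_key => :parent_ems_id,
        :class_name  => "ManageIQ::Providers::Kubevirt::InfraManager",
        :autosave    => true,
        :dependent   => :destroy
```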
Wouldn't it make more sense to have a Provider that has multiple managers / ext_management_systems, rather than a manager that has a manager?
The ContainerManager already follows the CloudManager pattern with the monitoring_manager, so we would need to move that as well.
The CloudManager really drove this, and most providers already use it; it might make more sense to deal with this at a higher level, across all the providers?
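For clarity, an illustrative contrast of the two shapes being discussed; the class bodies are assumptions, trimmed to the relevant associations.

```ruby
# (a) pattern used today: one manager builds and owns another manager
class ContainerManager < BaseManager
  has_one :infra_manager, :foreign_key => :parent_ems_id,
          :autosave => true, :dependent => :destroy
end

# (b) alternative raised above: a Provider owns all of its managers
class Provider < ApplicationRecord
  has_many :managers, :class_name => "ExtManagementSystem"
end
```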
😭 I thought OpenStack was the only one doing this
In order to support kubevirt as a secondary manager of the openshift/kubevirt provider, a new mixin is added to represent that capability.
Force-pushed from 893e6bf to 8ffb88c
@agrare fixed according to your comments.
Checked commit masayag@8ffb88c with ruby 2.3.3, rubocop 0.52.0, haml-lint 0.20.0, and yamllint 1.10.0.
app/models/manageiq/providers/container_manager.rb
In order to support kubevirt as a secondary manager of the openshift/kubevirt provider, a new virtualization manager mixin is added to introduce it.
The corresponding PR for the container providers is ManageIQ/manageiq-providers-kubernetes#197.