
orchestrate destroy of dependent managers #15590

Merged · 1 commit · Sep 5, 2017

Conversation

@durandom (Member) commented Jul 18, 2017

With orchestrated destroy of managers, we wait until all workers
are shut down before actually destroying the EMS.

This means all dependent managers, like the network_manager, must
be destroyed as well before we can destroy the cloud manager.

I introduce a child_managers association and queue up a destroy of all child_managers (a minimal sketch follows below).

#14848

@cben @Ladas @jrafanie
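
A minimal sketch of the idea, assuming the ExtManagementSystem model (the association and the child_managers loop mirror the diff hunks quoted later in this thread; the ApplicationRecord base and the :class_name/:instance_id queue keys are assumptions):

    class ExtManagementSystem < ApplicationRecord
      # All child managers (network, storage, monitoring, ...) point back to
      # their parent via parent_ems_id, so one self-referential association
      # covers them all.
      has_many :child_managers,
               :class_name  => 'ExtManagementSystem',
               :foreign_key => 'parent_ems_id'

      def self.schedule_destroy_queue(id, deliver_on = nil)
        MiqQueue.put(
          :class_name  => name,  # assumed key; the diff only shows method_name/deliver_on
          :instance_id => id,
          :method_name => "orchestrate_destroy",
          :deliver_on  => deliver_on,
        )
        # Queue an orchestrated destroy for each dependent manager as well.
        find(id).child_managers.each(&:destroy_queue)
      end
    end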

@miq-bot added the wip label on Jul 18, 2017
@cben (Contributor) commented Jul 18, 2017

@zeari

@durandom force-pushed the orchestrate_destroy branch from e346e05 to bfdb363 on July 19, 2017 13:24
@durandom changed the title from "[WIP] orchestrate destroy of dependent managers" to "orchestrate destroy of dependent managers" on Jul 19, 2017
@@ -26,6 +26,8 @@ def self.supported_types_and_descriptions_hash
  end

  belongs_to :provider
  has_many :child_managers, :class_name => 'ExtManagementSystem', :foreign_key => 'parent_ems_id'
durandom (Member, Author) commented on the diff:

I called this child_managers (the opposite of parent_manager), although I'd like something like dependent_managers better.

moolitayer replied:

Hope I'm not missing context here.
We have several different child managers in the system; they are currently available through ext.monitoring_manager, ext.network_manager, etc.
It's a good idea to treat them all the same way here. Maybe we can define the relation based on the parent_ems_id that they all should have?

cc @Ladas

durandom (Member, Author) replied:

Maybe we can define the relation based on the parent_ems_id that they all should have?

@moolitayer I don't understand. All child managers are linked via parent_ems_id, so it will catch them all: monitoring, network, etc.

A contributor replied:

Just having the child_managers relation will not hurt, since it's just another way to query it (we will still use ext.monitoring_manager and ext.network_manager where needed). But I'm not sure whether the cascade delete should work the same for all of them.

moolitayer replied:

Ahh, looks good @durandom. I must have missed the :foreign_key definitions there.

@durandom (Member, Author) commented:

@zeari @jrafanie please have a look

@durandom (Member, Author) commented:

@blomquisg does it make sense from a UI / business perspective to delete all dependent managers? Could we have a case where the cloud manager is about to be deleted, but e.g. a storage manager should be kept?

@zeari commented Jul 23, 2017

@blomquisg does it make sense from a UI / business perspective to delete all dependent managers? Could we have a case where the cloud manager is about to be deleted, but e.g. a storage manager should be kept?

I don't think so, but I could be wrong. I would solve this by changing the relation to :dependent => :destroy and having disable! run recursively through the child managers, disabling them as well (sketched below). The next time orchestrate_destroy comes off the queue, all workers related to all managers (children and parent) would be off, and destroying the parent manager would destroy the child managers.

cc @moolitayer
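
Sketched out, that alternative might look roughly like this (the enabled flag and the disable! body are assumptions for illustration, not the actual source):

    # Inside ExtManagementSystem: let Rails cascade the destroy, and disable
    # child managers recursively so all of their workers shut down first.
    has_many :child_managers,
             :class_name  => 'ExtManagementSystem',
             :foreign_key => 'parent_ems_id',
             :dependent   => :destroy

    def disable!
      update!(:enabled => false)       # assumed flag that stops this manager's workers
      child_managers.each(&:disable!)  # recurse so the whole manager tree goes quiet
    end

The next orchestrate_destroy run would then find no active workers and destroy the parent, with :dependent => :destroy cascading to the children.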

@durandom (Member, Author) commented:

changing the relation to :dependent => :destroy

This is already the case, via e.g. the has_network_manager_mixin (roughly sketched below).

@zeari why not call destroy_queue as I did? This method is the entry point for all the orchestrated-destroy concerns, like disabling and re-scheduling.
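
For reference, the mixin-level wiring looks roughly like this (a sketch, not the exact source; the class name is an example and the real options in has_network_manager_mixin may differ):

    module HasNetworkManagerMixin
      extend ActiveSupport::Concern

      included do
        # The child manager row is destroyed together with its parent.
        has_one :network_manager,
                :foreign_key => :parent_ems_id,
                :class_name  => "NetworkManager", # example; provider-specific in reality
                :dependent   => :destroy
      end
    end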

@zeari commented Jul 24, 2017

@zeari why not call destroy_queue as I did? This method is the entry point for all the orchestrated-destroy concerns, like disabling and re-scheduling.

This way should work fine.
I think it's a little more efficient to destroy everything in one queued action by destroying the parent, instead of queueing and destroying each manager separately (but it might be a trivial optimization).

@durandom (Member, Author) commented:

@blomquisg could you have a look at whether this makes sense from a user perspective?

@durandom (Member, Author) commented:

@agrare could you have a look at this one?

@@ -442,6 +444,9 @@ def self.schedule_destroy_queue(id, deliver_on = nil)
      :method_name => "orchestrate_destroy",
      :deliver_on  => deliver_on,
    )
    find(id).child_managers.each do |child_manager|
agrare (Member) commented on the diff:

How about doing this in the instance method so you can skip the find(id) and just have access to child_managers?

durandom (Member, Author) replied:

You would think the instance method destroy_queue is called, but alas it is not: the UI calls it on the model class. And since the instance method also forwards to the class method, it's safer to do it here.

@@ -442,6 +444,9 @@ def self.schedule_destroy_queue(id, deliver_on = nil)
      :method_name => "orchestrate_destroy",
      :deliver_on  => deliver_on,
    )
    find(id).child_managers.each do |child_manager|
      child_manager.destroy_queue
agrare (Member) commented on the diff:

If you do need to do this here, should we honor the deliver_on for the child managers as well?

durandom (Member, Author) replied:

Actually, thinking about it: the canonical entry point seems to be self.destroy_queue or the instance version, and there is no deliver_on there. The deliver_on is only used by orchestrate_destroy to throttle its own retries (see the sketch below). So I'd actually leave it like this.
But I can change it if you think it's better for eventualities or consistency...
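
To make the flow concrete, a rough sketch of the entry points as described (method bodies are illustrative; the :class_name/:instance_id keys are assumed):

    # destroy_queue is the canonical entry point and takes no deliver_on.
    def destroy_queue
      self.class.schedule_destroy_queue(id)
    end

    def self.schedule_destroy_queue(id, deliver_on = nil)
      MiqQueue.put(
        :class_name  => name,
        :instance_id => id,
        :method_name => "orchestrate_destroy",
        # deliver_on is only set when orchestrate_destroy re-schedules itself
        # to throttle retries while workers are still shutting down.
        :deliver_on  => deliver_on,
      )
    end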

@durandom force-pushed the orchestrate_destroy branch 2 times, most recently from 45c8fde to 4877c7e, on August 30, 2017 09:31
With orchestrated destroy of managers, we wait until all workers
are shut down before actually destroying the EMS.

This means all dependent managers, like the network_manager, must
be destroyed as well before we can destroy the cloud manager.

This queues up a destroy of all child_managers.
@durandom force-pushed the orchestrate_destroy branch from 4877c7e to 756d5b3 on August 30, 2017 09:33
@durandom (Member, Author) commented:

@agrare please have a look again.
I moved the scheduling of destroy for child_managers out of schedule_destroy_queue, because that method gets called repeatedly if a destroy fails.
The class method destroy_queue now just delegates to the instance method, which in turn queues the child managers' destroys and calls schedule_destroy_queue (sketched below).
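
Roughly, the final shape as described (a sketch under the same assumptions as above, not the exact diff):

    # The class method now just delegates to the instance method...
    def self.destroy_queue(ids)
      Array.wrap(ids).each { |id| find(id).destroy_queue }
    end

    # ...which queues the dependents' destroys once, then schedules its own
    # orchestrated destroy; schedule_destroy_queue can now be re-run on
    # failure without re-queueing the children every time.
    def destroy_queue
      child_managers.each(&:destroy_queue)
      self.class.schedule_destroy_queue(id)
    end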

@miq-bot (Member) commented Aug 30, 2017

Checked commit durandom@756d5b3 with ruby 2.2.6, rubocop 0.47.1, and haml-lint 0.20.0
2 files checked, 0 offenses detected
Everything looks fine. 👍

@djberg96 (Contributor) commented Sep 5, 2017

FWIW, looks good to me. 👍

@agrare (Member) left a review:

👍 looks a lot cleaner @durandom nice!

@agrare merged commit 9d413e2 into ManageIQ:master on Sep 5, 2017
@agrare added this to the Sprint 69 Ending Sep 18, 2017 milestone on Sep 5, 2017
@durandom deleted the orchestrate_destroy branch on September 5, 2017 19:25