Most of the provider workers (e.g. RefreshWorker, EventCatcher, OperationsWorker) are of the PerEmsWorker variety. With providers that have multiple managers (e.g. Amazon has Cloud, Network, and two Storage Managers), this leads to an "explosion" of workers.
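A rough illustration of the scaling problem: under the per-EMS pattern, each manager gets its own copy of each worker type. The worker and manager names below are simplified stand-ins for the sake of the example, not ManageIQ's actual class hierarchy.

```ruby
# Illustrative sketch only: names are stand-ins, not ManageIQ's real classes.
WORKER_TYPES = %w[RefreshWorker EventCatcher OperationsWorker].freeze

# A single Amazon provider with four managers.
managers = ["CloudManager", "NetworkManager", "EBS StorageManager", "S3 StorageManager"]

# One worker process per (manager, worker type) pair: 4 x 3 = 12 workers
# for one provider, even though they all talk to the same API.
managers.product(WORKER_TYPES).each do |manager, worker_type|
  puts "starting #{worker_type} for #{manager}"
end
```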
Most of the time these managers are talking to the same API, so it really doesn't make sense to split them up when they are talking to the provider.
We already do this for some RefreshWorkers because there were race conditions when doing targeted refreshes, since inventory from the network manager depended on cross-links to the cloud manager.
We can further consolidate:

- Consolidate the Event Catchers. We don't need, e.g., separate OpenStack CloudManager and NetworkManager event catchers when they talk to the same endpoint and have the same events delivered to both.
- Consolidate the Refreshes. Even when the RefreshWorkers are consolidated, the different managers still queue refreshes for each other. A full refresh for, e.g., the AWS provider could collect, parse, and save all of the managers together (note this is already done for targeted refreshes). This saves duplicate API calls and should be faster by taking the queue out of the middle; see the sketch after this list.
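A rough Ruby sketch of what that consolidated full refresh could look like. All names here (ConsolidatedRefresher, collect_inventory, parse_for, save_inventory) are hypothetical placeholders, not ManageIQ's actual refresh API; the point is a single collection pass shared across all of a provider's managers, with one save at the end so cross-links resolve without queueing extra refreshes.

```ruby
# Hypothetical sketch of a consolidated full refresh; not ManageIQ's real API.
Manager = Struct.new(:name)

class ConsolidatedRefresher
  def initialize(managers)
    @managers = managers
  end

  # One collect -> parse-per-manager -> single save pass, instead of each
  # manager queueing its own refresh and repeating the provider API calls.
  def refresh
    raw    = collect_inventory                      # single set of API calls
    parsed = @managers.map { |m| parse_for(m, raw) }
    save_inventory(parsed)                          # one save; cross-links resolve
  end

  private

  def collect_inventory
    { instances: [], networks: [], volumes: [] }    # stand-in for API payloads
  end

  def parse_for(manager, raw)
    { manager: manager.name, items: raw }           # stand-in for per-manager parsing
  end

  def save_inventory(parsed)
    parsed.each { |p| puts "saving inventory for #{p[:manager]}" }
  end
end

ConsolidatedRefresher.new([Manager.new("CloudManager"), Manager.new("NetworkManager")]).refresh
```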