Worker consolidation for multiple managers #19950

Open
1 of 3 tasks
agrare opened this issue Mar 11, 2020 · 1 comment
Comments

@agrare (Member) commented Mar 11, 2020

Most of the provider workers (e.g. RefreshWorker, EventCatcher, OperationsWorker) are of the PerEmsWorker variety. With providers that have multiple managers (e.g. Amazon has a Cloud, a Network, and two Storage managers), this leads to an "explosion" of workers.

Most of the time these managers talk to the same API, so it really doesn't make sense to split them up when they communicate with the provider.
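
To make the multiplication concrete, here is a small illustrative Ruby snippet (the manager and worker names are only examples, not the exact ManageIQ worker constants) showing how per-manager workers multiply for a single Amazon provider:

```ruby
# Illustrative sketch only: names below are examples, not the real constants.
WORKER_TYPES = %w[RefreshWorker EventCatcher OperationsWorker].freeze

# One Amazon provider is split into several child managers that all talk
# to the same AWS API.
AMAZON_MANAGERS = [
  "CloudManager", "NetworkManager", "StorageManager (EBS)", "StorageManager (S3)"
].freeze

# With the per-EMS worker pattern, each (manager, worker type) pair gets its
# own worker process:
pairs = AMAZON_MANAGERS.product(WORKER_TYPES)
pairs.each { |manager, worker| puts "#{manager}::#{worker}" }
puts "#{pairs.size} worker processes for one Amazon provider" # => 12
```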

We already do this for some RefreshWorkers because there were race conditions when doing targeted refreshes: inventory from the network manager depends on cross-links to the cloud manager.

We can further consolidate:

  • Finish consolidating the Refresh Workers. Currently only providers with targeted refresh have these consolidated, but they all should be.
  • Consolidate the Event Catchers. We don't need, e.g., separate OpenStack CloudManager and NetworkManager event catchers when they talk to the same endpoint and have the same events delivered to both.
  • Consolidate the Refreshes. Even when the RefreshWorkers are consolidated, the different managers still queue refreshes for each other. A full refresh for, e.g., the AWS provider could collect, parse, and save all of the managers together (note this is already done for targeted refresh). This saves duplicate API calls and should be faster by taking the queue out of the middle; see the sketch after this list.
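
As a rough sketch of what the third item could look like, the snippet below runs one collect/parse/save pass that covers a parent manager and its child managers together. All class and method names here (`ConsolidatedRefresh`, `collect_inventory`, `parse`, `save`) are hypothetical placeholders, not the actual ManageIQ refresh API:

```ruby
# Hypothetical sketch: one collect/parse/save pass covering a parent manager
# and its child managers, instead of each manager queueing a refresh for the
# others. All names below are illustrative placeholders.
class ConsolidatedRefresh
  def initialize(parent_manager, child_managers)
    @managers = [parent_manager] + child_managers
  end

  def run
    raw    = collect_inventory # single API pass against the provider
    parsed = parse(raw)        # parse every manager's inventory together so
                               # cross-links (network -> cloud) resolve here
    save(parsed)               # persist once, no per-manager queue hops
  end

  private

  # Stubs standing in for provider-specific collection/parsing/persistence.
  def collect_inventory
    { vms: [], networks: [], volumes: [] }
  end

  def parse(raw)
    raw
  end

  def save(parsed)
    @managers.each { |m| puts "saved inventory for #{m}" }
  end
end

# Usage: a single refresh covers all of the provider's managers.
ConsolidatedRefresh.new("AmazonCloudManager", %w[NetworkManager StorageManager]).run
```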
@miq-bot commented Feb 27, 2023

This issue has been automatically marked as stale because it has not been updated for at least 3 months.

If you can still reproduce this issue on the current release or on master, please reply with all of the information you have about it in order to keep the issue open.

Thank you for all your contributions! More information about the ManageIQ triage process can be found in the triage process documentation.

Fryguy removed the stale label Mar 2, 2023
Fryguy added this to Roadmap Jun 12, 2024
Fryguy moved this to Backlog in Roadmap Jun 12, 2024