When available, use Mistral workflows to scale stacks #1533
Conversation
This looks good in principle! Scaling down a node doesn't require updating the heat parameters? Does the delete_node workflow take care of that?
@tzumainn, updating the heat parameters is not required. delete_node takes care of it; we just need to pass in a list of nodes to remove.
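As a rough sketch of what that workflow input might look like (the key names and structure here are illustrative assumptions, not the actual Mistral API):

```ruby
# Hypothetical sketch: building the input for a scale-down workflow
# execution. The key names ("container", "nodes_to_delete") are
# assumptions for illustration, not the real workflow signature.
def scale_down_workflow_input(stack_name, nodes_to_remove)
  {
    # The workflow derives the heat parameter updates itself, so the
    # caller only supplies the stack and the nodes to remove.
    "container"       => stack_name,
    "nodes_to_delete" => nodes_to_remove
  }
end

input = scale_down_workflow_input("overcloud", ["compute-2"])
```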
(Force-pushed de53baf to 3dd79a0)
This looks good to me!
@h-kataria can this be merged?
(Force-pushed ed7ad42 to 3f6092e)
@martinpovolny all fixed. Can you take a look?
@martinpovolny the rubocop issues have been fixed. Can this be merged?
Guys, the code looks good. Sure. But there's a fundamental problem with the approach. This looks like business logic: based on some condition, you are deciding how a particular operation should be accomplished. I can't see why this would live in the frontend. I am sorry, but if there's any reason for this to be in the frontend, I need to hear it from the core guys. Ping @Ladas, @blomquisg
@rwsu @tzumainn all of this should be done in the provider stack code; the only thing the UI controller should do is queue the action for the stack instance. Also, the UI appliance isn't guaranteed to be able to route to the provider API. It's actually a supported use case where it CANNOT route there. We had this discussion a couple of times already ;-)
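The "controller only queues" pattern being asked for can be sketched like this (a toy stand-in for ManageIQ's queue; the real code uses `MiqQueue.put` with similar arguments, and all names here are illustrative):

```ruby
# Toy illustration of the "controller only queues" pattern. The real
# ManageIQ code uses MiqQueue; this stand-in just records payloads.
class ToyQueue
  def self.items
    @items ||= []
  end

  def self.put(payload)
    items << payload
  end
end

class OrchestrationStack
  def initialize(id)
    @id = id
  end

  # Called from the UI controller: no provider API traffic happens here,
  # only a queue entry that a backend worker picks up later. This matters
  # because the UI appliance may not be able to route to the provider API.
  def scale_queue(direction, count)
    ToyQueue.put(
      :class_name  => self.class.name,
      :instance_id => @id,
      :method_name => "scale",
      :args        => [direction, count]
    )
  end
end

OrchestrationStack.new(42).scale_queue(:up, 2)
```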
Ah, that makes sense. @rwsu can we move that business logic into a unified scaling method in the backend?
@martinpovolny @h-kataria I don't have commit rights to this repo. So, if this PR has to be closed, merged, or whatever, I'm not going to be able to deal with it. |
@blomquisg : I asked for your expertise, but all is resolved already. @rwsu : do you prefer carrying on in this PR or will you open a new one? Thanks and sorry for the trouble. |
Modifies update_stack to queue tasks to scale up or down stacks using Mistral if the workflows are available. Workflows are available starting with OSP10. For OSP9 and older environments, the existing behavior of updating the stack directly is used. Depends on ManageIQ/manageiq-providers-openstack#55
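The version-dependent dispatch described above can be sketched roughly as follows (method names are illustrative assumptions; the real check would probe the provider for a Mistral workflow service rather than compare version numbers):

```ruby
# Illustrative sketch of the dispatch: use Mistral workflows when the
# provider exposes them (OSP10+), otherwise fall back to updating the
# Heat stack directly (OSP9 and older). All names are assumptions.
def workflow_service_available?(osp_version)
  osp_version >= 10  # stand-in for probing the provider's service catalog
end

def scale_stack(stack_name, osp_version)
  if workflow_service_available?(osp_version)
    "mistral-scale:#{stack_name}"  # queue a Mistral workflow execution
  else
    "heat-update:#{stack_name}"    # existing direct stack update path
  end
end
```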
(Force-pushed 3f6092e to 03f50d2)
Moved calls hitting the provider into queued methods under the orchestration_stack model. The controller now simply calls the queue methods. Depends on ManageIQ/manageiq-providers-openstack#117
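The refactor follows the common convention of pairing a synchronous method with a queued counterpart on the model; a self-contained sketch (the queue is simulated and all names are illustrative, not the actual ManageIQ implementation):

```ruby
# Sketch of the sync/queue method pair convention (simulated queue).
# update_stack performs the provider call; update_stack_queue only
# enqueues it for a backend worker.
class Stack
  Job = Struct.new(:target, :method_name, :args)

  def self.jobs
    @jobs ||= []
  end

  def initialize(name)
    @name = name
  end

  # Stand-in for the real provider call; safe to run only on appliances
  # that can actually route to the provider API.
  def update_stack(template)
    "updated #{@name} with #{template}"
  end

  # What the UI controller invokes: enqueue and return immediately.
  def update_stack_queue(template)
    self.class.jobs << Job.new(self, :update_stack, [template])
  end
end

stack = Stack.new("overcloud")
stack.update_stack_queue("tmpl-v2")

# Later, a backend worker drains the queue and runs the real method:
job    = Stack.jobs.shift
result = job.target.public_send(job.method_name, *job.args)
```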
(Force-pushed 03f50d2 to 1488c9b)
Checked commits rwsu/manageiq-ui-classic@f97c26d~...1488c9b with ruby 2.3.3, rubocop 0.47.1, and haml-lint 0.20.0
Hi @martinpovolny, I've moved the calls that were hitting the OpenStack API into the orchestration stack model (ManageIQ/manageiq-providers-openstack#117). Does this look better? |
@martinpovolny could someone from the UI team review whether this needs any further changes? |