wish: throttle "restartall" #603
Comments
@markstos Interesting concept. Seems like maybe a `--limit` option on `restartall`? This would execute for-each-limit type functionality rather than restarting them all at once.
@jcrugzz Something like that sounds right. To me, something like
@markstos I think the best fit for this would be a configuration value rather than a CLI option, e.g.
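The original config example did not survive scraping, but a configuration value along these lines is presumably what was meant. The `restart-limit` and `restart-pause` keys below are purely illustrative; they are not actual forever options:

```json
{
  "restart-limit": 1,
  "restart-pause": 5000
}
```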
That would be OK with me too. I can see why a config value might be preferable. Thanks for considering this!

On Thu, Sep 11, 2014 at 4:07 AM, Charlie Robbins [email protected]
I have the same situation; my apps are running like a queue, for example:

So, I think maybe all the processes could be sorted by index when you invoke `restartall`.
I'm not sure what happened to the commit above. Here's an alternate pull request which implements just the rolling-restart request:
Could I have some updated feedback on the PR I proposed above? It proposes always using a rolling restart for the "restart" cases with multiple nodes, to avoid downtime when all the nodes have stopped but none have started yet. This mirrors the zero-downtime upgrade approach that Nginx uses.

I'll note that Nginx does not offer configuration options for how a restart happens; the rolling zero-downtime approach is the only option. I recommend emulating Nginx's successful design here. If you really want to take the entire cluster offline before bringing up workers with a new configuration, there is already an option to do this, just like there is in Nginx: just issue a
I'll note that the StrongLoop … Also note this chart comparing …
The rolling restart could be implemented as a "reload" option instead of a change to "restart", but I think it could just be considered an improvement to "restart", without requiring people to figure out which of four different options for restarting their app is best.
I have several processes managed by `forever` which work as a pool. The reverse proxy in front of them can gracefully handle some members of the pool being offline. However, it appears that when `forever restartall` is used, all my Node instances go offline at once, and the app temporarily becomes unavailable. This behavior is nice in some ways, in that it's fast and keeps all the nodes together in a consistent state.

Still, it would be nice to have an option for a throttled version of `restartall` that restarts one at a time, perhaps with a pause.

As a workaround, I could write a script to query `forever list` and then call `forever restart` on the node instances one by one.