Split brain issue #34
Comments
Hi, sorry for the long response time; I've had some personal stuff recently. In theory, yes: if you could start literally 20 nodes at once, that could result in a split brain in the configuration. At the same time, though, I don't know of a way that would happen in practice. As long as you have existing nodes in the cluster, they will pick up the new members, add them, and migrate data, but that won't all happen at the same time, simply because Puppet won't run every node with identical timing. Have you run into this issue specifically? I can also try to test it myself.
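For context, a rough sketch of the join path described here, with hypothetical resource names, hostnames, and credentials (this is not the module's actual code): an existing node adds the newcomer and a rebalance migrates data, and because Puppet agents run on their own schedules, these joins tend to arrive one at a time rather than simultaneously.

```puppet
# Hypothetical sketch of joining an existing cluster; all names,
# hosts, and credentials are placeholders, not the module's real API.
exec { 'couchbase-add-new-node':
  # Ask an existing member to add this node to the cluster.
  command  => "/opt/couchbase/bin/couchbase-cli server-add -c existing-node:8091 -u Administrator -p password --server-add=${::fqdn}:8091",
  # Skip if this node is already listed as a cluster member.
  unless   => "/opt/couchbase/bin/couchbase-cli server-list -c existing-node:8091 -u Administrator -p password | grep ${::fqdn}",
  provider => shell,
  path     => ['/bin', '/usr/bin'],
}

exec { 'couchbase-rebalance':
  # Rebalance only after a node was actually added.
  command     => '/opt/couchbase/bin/couchbase-cli rebalance -c existing-node:8091 -u Administrator -p password',
  refreshonly => true,
  subscribe   => Exec['couchbase-add-new-node'],
}
```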
I tested it myself and split brain happens in 99% of cases :(
Huh. Weird. I'll look into it some more.
I think this is the key point:
In the case of spawning a completely new cluster (not adding to one that already has nodes) with 20 new VMs, this is very likely to happen, as the VMs come up simultaneously.
@dfairhurst Fair enough. I'll work on engineering a solution for that particular problem.
Thoughts on waiting a random amount of time?
Good idea! I'll consider how best to implement this so it won't fall afoul of timeouts for exec, etc.
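A minimal sketch of the random-wait idea, assuming the join is driven by a Puppet exec (the wrapper script name is hypothetical): fqdn_rand() gives each node a stable, hostname-seeded delay so the nodes spread out, and the timeout is raised above exec's 300-second default so the sleep itself cannot trip it.

```puppet
# Per-node delay of 0-119 seconds, deterministic across Puppet runs
# because fqdn_rand() is seeded by the node's FQDN.
$splay = fqdn_rand(120, 'couchbase-join-splay')

exec { 'couchbase-cluster-join':
  # Hypothetical join script; the point is the staggered sleep before it.
  command  => "sleep ${splay} && /usr/local/bin/join-couchbase-cluster.sh",
  provider => shell,
  path     => ['/bin', '/usr/bin'],
  # Headroom above the 300s default so the splay cannot cause a timeout.
  timeout  => 600,
}
```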
Well, assuming this actually works, what about adding it to the
Also worth noting:
I was originally looking for a
If I simultaneously start 20 nodes, each applying this module with the same cluster name, is there a chance that I will get a split-cluster issue? After going through the source code, it seems like nothing would stop Couchbase from doing it.
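For illustration, a minimal sketch of the unguarded initialize-or-join pattern the question describes, with hypothetical names and credentials: each node initializes a cluster only when it cannot find an existing one, so twenty nodes running the check at the same instant all see "no cluster yet" and each seed their own one-node cluster, which is the split brain discussed above.

```puppet
# Hypothetical sketch of the race: the "unless" check and the init are
# not atomic across nodes, so simultaneous first runs all pass the check.
exec { 'couchbase-cluster-init':
  command => '/opt/couchbase/bin/couchbase-cli cluster-init -c 127.0.0.1:8091 --cluster-username=Administrator --cluster-password=password --cluster-ramsize=1024',
  # Assumed guard: server-list fails while no cluster is initialized yet.
  # Every node that runs this before any init completes will proceed.
  unless  => '/opt/couchbase/bin/couchbase-cli server-list -c 127.0.0.1:8091 -u Administrator -p password',
}
```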