3-node cluster without any zones? #97
Comments
Further to this, I've just tried this on a k3d cluster and can confirm that it doesn't take the Kubernetes host into consideration at all - you can see that node 0 wasn't even selected for any scheduling, and the controller still places replicas onto the same Kubernetes node.
I've got the repo cloned and will give fixing this a go, so I'll follow up with any questions.

@chriswiggins did you have the … set? If that doesn't work, could you try adding the zone label with the same value to all your nodes?
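
For reference, labelling every node with the same zone value is a one-liner. This is only a sketch: it assumes the operator reads the standard topology.kubernetes.io/zone key, and the k3d node names are made up, so adjust both to match your cluster:

```shell
# Sketch only: give all three nodes the same zone value so zone-based spreading
# treats the cluster as a single zone. Node names are placeholders for typical
# k3d naming; the label key is assumed to be the standard zone label.
kubectl label nodes k3d-k3s-default-agent-0 k3d-k3s-default-agent-1 k3d-k3s-default-agent-2 \
  topology.kubernetes.io/zone=zone-a --overwrite

# Confirm the label is present on every node
kubectl get nodes -L topology.kubernetes.io/zone
```
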
Hey @4n4nd - just tried setting that to no avail:
Anything else you can think of?

@chriswiggins hmm, this is weird. The pods are scheduled by k8s, not the operator, and there are no pods scheduled on your 3rd worker node.

Very true - that was weird, but still, that's the scheduler doing its thing. I tried updating the …
If I set a different zone name for each node, it all ends up balancing itself as expected (example labelling commands are sketched below):
Based on this, the …

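For anyone reproducing that per-node-zone workaround, the labelling amounts to something like the following - a sketch using assumed k3d node names and the standard zone label key, not commands taken from this thread:

```shell
# Sketch: give each node its own zone value so zone-aware placement effectively
# becomes per-node placement. Node names and the label key are assumptions.
kubectl label node k3d-k3s-default-agent-0 topology.kubernetes.io/zone=node-0 --overwrite
kubectl label node k3d-k3s-default-agent-1 topology.kubernetes.io/zone=node-1 --overwrite
kubectl label node k3d-k3s-default-agent-2 topology.kubernetes.io/zone=node-2 --overwrite
```
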
Hi there,
Just reading through the source and the relevant issues in here to try to determine the node-selection criteria when creating replicas. We run a 3-node Kubernetes cluster (with Redis currently running outside the cluster) but are looking to move it into this operator.
From what I can gather, replica placement is based on the zone topology key - what happens in a 3-node cluster, where there is no such thing as zones? Is the controller smart enough not to attach a replica to a primary on the same node? Obviously that would be undesirable behaviour, as a node going down would take both the primary and its replica with it.
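
For illustration only, the node-level spread I have in mind is what a hostname-keyed pod anti-affinity gives you. The standalone demo below uses made-up names and a plain Deployment rather than the operator's actual manifests; it just shows the idea of keying the spread on kubernetes.io/hostname instead of a zone label:

```shell
# Standalone illustration, not the operator's real resources: three replicas
# that repel each other per node via a hostname-keyed pod anti-affinity.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-spread-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: redis-spread-demo
  template:
    metadata:
      labels:
        app: redis-spread-demo
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: redis-spread-demo
            topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: redis:7
EOF
```

With the hard (required) rule, each replica lands on a different node; on a 3-node cluster a 4th replica would stay Pending, so a preferred rule may be the safer default if the operator adopted something like this.
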
Happy to make a PR if pointed in the right direction!