Yesterday one node in our cluster became completely partitioned off from the rest of the cluster.
Our applications written with Hector suffered downtime from "HUnavailableException: : May not be enough replicas present to handle consistency level."
The request consistency level was ONE, but requests that used the partitioned-off node as the coordinator often needed replicas that lived on nodes the coordinator could no longer reach. This explains the error message above.
Looking into HConnectionManager.operateWithFailover(..), it's clear that any HUnavailableException is immediately rethrown. This makes sense when the coordinator is part of the healthy cluster and the required replicas are on down nodes, since no future request to another healthy coordinator is going to get around the problem that those replicas are down. But it doesn't make sense when the coordinator is itself the node partitioned away from the cluster: a future request would go to a different, likely healthy, coordinator and find the required replicas.
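For illustration, here's a minimal sketch of the alternative behavior: instead of rethrowing HUnavailableException from the first coordinator, try the remaining hosts before giving up. The types and names below (Coordinator, Operation, execute) are hypothetical stand-ins, not Hector's actual internals; the real change would live inside HConnectionManager.operateWithFailover(..).

```java
import java.util.List;

// Hypothetical sketch only. Coordinator, Operation, and execute are
// illustrative stand-ins, not Hector's real internal types.
public final class UnavailableRetrySketch {

    interface Coordinator {
        String name();
    }

    interface Operation<T> {
        T execute(Coordinator coordinator) throws HUnavailableException;
    }

    // Stand-in for me.prettyprint.hector.api.exceptions.HUnavailableException.
    static class HUnavailableException extends RuntimeException {
        HUnavailableException(String msg) { super(msg); }
    }

    static <T> T operateWithFailover(Operation<T> op, List<Coordinator> hosts) {
        HUnavailableException last = null;
        for (Coordinator host : hosts) {
            try {
                return op.execute(host);
            } catch (HUnavailableException e) {
                // A partitioned-off coordinator cannot see the replicas,
                // but a healthy one might; remember the failure and try
                // the next host instead of failing the request outright.
                last = e;
            }
        }
        // Every candidate coordinator reported the replicas unavailable,
        // so the failure is probably real; surface the last exception.
        throw last != null ? last : new HUnavailableException("no hosts to try");
    }

    private UnavailableRetrySketch() {}
}
```

The trade-off is the one described above: when the coordinator is healthy and the replicas really are down, these extra attempts are wasted work, which is presumably why the current code rethrows immediately.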
I haven't supplied a patch / PR as I'm unsure how this should be handled…