Implement adaptive replica selection #26128
Conversation
This implements the selection algorithm described in the C3 paper for determining which copy of the data a query should be routed to. By using the service time EWMA, response time EWMA, and queue size EWMA, we calculate the score of a node by piggybacking these metrics with each search request.

Since Elasticsearch lacks the "broadcast to every copy" behavior that Cassandra has (as mentioned in the C3 paper) to update metrics after a node has been highly weighted, this implementation adjusts a node's response stats using the average of its own and the "best" node's metrics. This is so that a long GC or other activity that may cause a node's rank to increase dramatically does not permanently keep a node from having requests routed to it; instead, it will eventually lower its score back to the realm where it is a potential candidate for new queries.

This feature is off by default and can be turned on with the dynamic setting `cluster.routing.use_adaptive_replica_selection`.

Relates to elastic#24915, however instead of `b=3` I used `b=4` (after benchmarking).
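For reference, the scoring function from the C3 paper has roughly the shape sketched below. This is only an illustration with made-up names (it is not the exact code in this PR), with the queue-adjustment exponent pulled out as `b`:

```java
/**
 * Rough sketch of the C3 scoring function (lower rank is better).
 * Illustrative names only, not the exact implementation in this PR.
 */
static double c3Rank(double responseTimeEwma,   // EWMA of response time observed by the coordinating node
                     double serviceTimeEwma,    // EWMA of service time reported back by the data node
                     double queueSizeEwma,      // EWMA of the data node's search queue size
                     long outstandingRequests,  // searches currently in flight from this coordinator to the node
                     int numClients,            // concurrency compensation factor from the paper
                     int b) {                   // queue adjustment exponent (3 in the paper, 4 in this PR)
    // estimated queue size if one more request were sent to the node right now
    double qHat = 1 + (outstandingRequests * numClients) + queueSizeEwma;
    // rank = R - 1/mu + qHat^b / mu, where 1/mu is the service time EWMA;
    // raising qHat to the b-th power heavily penalizes nodes with long queues
    return responseTimeEwma - serviceTimeEwma + Math.pow(qHat, b) * serviceTimeEwma;
}
```

The candidate shard copies are then sorted by this rank, lowest first.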
This is still missing tests (I wanted to verify the validity of the feature before adding them, hence the benchmarking before writing them).

Here is a short summary of the benchmarking results. The setup uses six machines running on Google Compute Engine. Each test is 40,000 queries per lap, with 5 laps for a total of 200,000 queries, and it consists of 4 different benchmarks. For the "load" scenarios, load was introduced on one of the nodes.

1 replica, no load: without any load, the median throughput and latency have roughly a 1% difference.

1 replica, with load: a trade of 50th percentile latency for a large reduction in 90th/99th percentile latency. You can see from the distribution of requests that with ARS, requests were routed away from the loaded node, whereas in the non-ARS scenario the requests are evenly distributed.

4 replicas, no load: ARS improves on both throughput and latency, for all of the percentiles. Additionally, the requests are routed roughly evenly, even though round-robin is not used.

4 replicas, with load: an improvement in all latencies, while increasing the median throughput. As the number of replicas goes up, I expect the adaptive replica selection to help even more.

Round robin requests: one more test; instead of hitting the client node this time, I had Rally hit all of the nodes. Still an improvement on throughput and latency for all categories.

Conclusion: we want to see a high throughput improvement and a negative (lower) latency difference, and the adaptive replica selection shows an improvement for almost all tests.

Full benchmarks at https://writequit.org/org/es/design/adaptive-replica-selection-benchmarks.html
awesome lee, I left a question!
@@ -284,8 +285,9 @@ private void executeSearch(SearchTask task, SearchTimeProvider timeProvider, Sea
for (int i = 0; i < indices.length; i++) {
    concreteIndices[i] = indices[i].getName();
}
Map<String, Long> nodeSearchCounts = searchTransportService.getPendingRequests(SearchAction.NAME);
this is puzzling to me, why are you using `SearchAction.NAME`? It's the action that acts as a coordinator and we don't necessarily run this from within the cluster, so the counts are expected to be 0 for almost all nodes. I wonder if that should be the shard actions here?
You're right! I think instead it should be the prefix of "indices:data/read/search" so that all of these match it. I'll make that change and then re-run the benchmarks to see if it affects things; it may mean we can lower `b=4` back down to `b=3`. Good catch!
* selection formula. Making sure though that it's random within the active shards of the same
* (or missing) rank, and initializing shards are the last to iterate through.
*/
public ShardIterator rankedActiveInitializingShardsIt(@Nullable ResponseCollectorService collector,
down the road we really need to work on the number of objects being created in this process. It can totally be a followup but I think we can abstract it away quite nicely since it's all keyed by the node id and the set of nodes is static. We can use a bytesrefhash with parallel arrays in the future that also prevents all the boxing.
Yes I totally agree, it was even at the point where it was very elegantly implemented using streams, however the streams were too slow compared to their imperative counterparts, so it's definitely something I'd like to address in the future
++
nit: the other impl is called `activeInitializingShardsRandomIt`, so maybe we should rename this one to `activeInitializingShardsRankedIt`, or rename the other one to `randomActiveInitializingShardsIt`?
Thanks for taking a look @s1monw! I pushed a commit to change the pending requests calculation (thanks for catching that!) and re-ran the benchmarks to make sure it didn't change anything.

Single replica, non-loaded case: again, not a huge latency difference, as expected for the unloaded cluster.

Single replica, loaded case: again, a large improvement in throughput for the loaded case as well as a trade-off of 50th percentile latency for a large improvement in 90th and 99th percentile latency.

Single replica, round robin requests: again a nice improvement in both throughput and latency for the non-stressed round-robin test case. Looks like it keeps the same improvement. I also dropped …
These are really impressive numbers. Very good job! I am not sure I understand what is meant by "Four replicas with no load applied". Can you describe the scenario?
This looks very promising! I don't understand the setup well, but why is ARS helping when there is only one replica of the data?
Sure, so there are 5 data nodes, and for the index in question, there is 1 … The "no load" part is that none of the nodes were artificially stressed.

For the 4 replica no load case, the throughput increase is 11% and the latency … For the one with a median latency increase, do you mean the 1 replica with load case?

Even when there is only one replica, we can still rank both of the nodes that have a copy of the data.
Ah, I see. So one replica means two copies of the data. That's what I'd missed. Thanks!
left another round of comments. Lemme know if we should just go with this impl or make it more efficient. I think we can iterate on it once it's in?
/**
 * Return a map of nodeId to pending number of requests for the given action name prefix
 */
public Map<String, Long> getPendingRequests(final String actionNamePrefix) {
this seems to be a very expensive operation; I wonder if we should special-case this here rather than adding a generic way of doing this.
I wonder if we can keep a map inside `SearchTransportService` that is basically passed to every relevant request as an action listener. I think we can just keep things in the map until the counter goes to 0 and then we remove it?
Something like this? https://gist.github.com/dakrone/e51881e25aaa2a9bd548465d08fe9162
Looks better to me but I think I'd rather like a Map<String, AtomicLong>, ideally pre-filled with every possible action name so that the map is effectively immutable afterwards and concurrency is only handled at the AtomicLong level? It would also create fewer boxed longs.
> ideally pre-filled with every possible action name so that the map is effectively immutable afterwards and concurrency is only handled at the AtomicLong level? It would also create fewer boxed longs.

This is a map of `nodeId` to `connectionCount`; I'm not sure how pre-filling it with possible action names would help?
Sorry I got confused about what keys were about.
+1 to your suggested patch. I'm just confused why it puts 0 as a default value when it sees a node id for the first time, should it be 1? Similarly it should remove entries from the map in handleResponse when the count is 1 rather than 0?
Oh yes, good call! I'll make those changes before pushing the commit
let's remove the `actionNamePrefix` argument, which is ignored?
@@ -43,13 +45,24 @@
public class OperationRouting extends AbstractComponent {

    public static final Setting<Boolean> USE_ADAPTIVE_REPLICA_SELECTION_SETTING =
if this is false we should not collect any statistics in the `SearchTransportService` either, no?
I think we should continue to collect the stats; especially since all of them are moving averages, it's good to be able to turn ARS on and not have the numbers be wildly inaccurate. What do you think? I could go either way, though I think toggling the collection on and off is going to be more complex.
+1 to keep collecting
Yes, big +1 to iterating on this once it's in. I'm wary of making a lot of performance-related changes to this version of it, since it invalidates the benchmarks once I make a change. I'd rather get it in with these numbers, then work on the efficiency aspect.
This looks very interesting and benchmarks suggest this will be a great win! Some suggestions:

- We should fix `getPendingRequests` to no longer run in linear time with the number of pending requests, like you already started looking into.
- I think readability would improve significantly if we unwrapped keys and values from `Map.Entry` objects and gave them meaningful names.
- Could you add a link to the paper as a source comment? I think it is also important to add comments in places where you are not exactly following the paper recommendations (such as using the average with the best node), or where you think we might want to follow a different route.

Also, if my understanding is correct, ranks only depend on the node that hosts a shard. So say that you have 2 nodes, 10 shards and 1 replica. Each node has an entire copy of the index (10 shards), and since the decision process only depends on the node statistics, it means that a given request will be served by only one of the two nodes, am I correct? If yes, I think this is problematic as it will make the situation worse for users who have a low throughput but care about latency.
for (Map.Entry<String, Optional<ResponseCollectorService.ComputedNodeStats>> entry : nodeStats.entrySet()) {
    if (entry.getValue().isPresent()) {
        ResponseCollectorService.ComputedNodeStats stats = entry.getValue().get();
        double rank = stats.rank(nodeSearchCounts.getOrDefault(entry.getKey(), 1L));
I'd find it easier to read if you pulled entry.getKey and entry.getValue into their own variables with meaningful names
 */
public Map<String, Long> getPendingRequests(final String actionNamePrefix) {
    Map<String, Long> nodeCounts = new HashMap<>();
    for (Map.Entry<Long, RequestHolder> entry : clientHandlers.entrySet()) {
seems like you could iterate over values directly
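For illustration, the suggested shape would be something like the sketch below. The holder type and its accessors are assumptions (that part of the class is not quoted here), so this only shows the iteration pattern, not the real internal API:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

final class PendingRequestCounts {

    // Hypothetical holder shape, purely to illustrate iterating over values()
    // instead of entrySet() when the keys are never used.
    interface PendingRequestHolder {
        String action();   // assumed accessor: the transport action name
        String nodeId();   // assumed accessor: the node the request was sent to
    }

    static Map<String, Long> countPendingByNode(Collection<? extends PendingRequestHolder> pendingRequests,
                                                String actionNamePrefix) {
        Map<String, Long> nodeCounts = new HashMap<>();
        for (PendingRequestHolder holder : pendingRequests) {       // e.g. clientHandlers.values()
            if (holder.action().startsWith(actionNamePrefix)) {     // matches "indices:data/read/search*"
                nodeCounts.merge(holder.nodeId(), 1L, Long::sum);
            }
        }
        return nodeCounts;
    }
}
```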
Thanks for taking another look @jpountz, I pushed commits addressing all of your comments!
It looks good to me overall. One thing I'd like to discuss is the behaviour of this option in the case that you don't have concurrent requests (see last paragraph of my previous comment) since shard requests will all go to the node that hosts the shard and has the lower rank. So to take an extreme example, if all nodes have a copy of all shards, then all shard requests will go to a single node: the one that has the best score.
I'm wondering whether we could fix it, e.g. by artificially increasing the number of connections to the best node in `OperationRouting.searchShards` in order to simulate that we are just about to send a request to this node. Something like that (oversimplified):
for (IndexShardRoutingTable shard : shards) {
    ShardIterator iterator = preferenceActiveShardIterator(..., nodeCounts);
    String firstShardNode = iterator.getShardRoutings().get(0).currentNodeId();
    nodeCounts.merge(firstShardNode, 1L, Long::sum);
}
This is just a random idea in order to get the discussion started but I'm curious what you think about this issue.
/**
 * Return a map of nodeId to pending number of search requests
 */
public Map<String, Long> getPendingSearchRequests() {
just a thought: this is going to return a "live" map, so getting the same entry twice in a row could return different counts. This does not seem to have the potential to cause bugs today, but I'm wondering whether we should take a snapshot instead in order to make it easier to reason about?
Alternatively I'd also be fine with documenting it here and in all methods that take this live node connection count.
super.handleResponse(response);
// Decrement the number of connections or remove it entirely if there are no more connections
// We need to remove the entry here so we don't leak when nodes go away forever
clientConnections.computeIfPresent(nodeId, (id, conns) -> conns.longValue() == 1 ? null : conns - 1);
nit: might be worth asserting that the current value is not null (using compute instead of computeIfPresent) and gte 1?
There is no way the value can be null in this case? We only modify it in the three places, all using `compute*`, and `compute` as well as `computeIfPresent` prevent null values in the map. When we expose the map it's also made an `unmodifiableMap`.
This is why I said moving to compute instead of computeIfPresent so that we could assert that we do have a mapping for nodeId in that map at that point. To be clear I think that what you did is correct, I'd just like to add assertions to it to make sure the invariant is respected.
Okay, I added an assert for this in 8674249
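Putting the pieces of this thread together, a minimal sketch of the per-node counter being discussed could look like the following. The names are illustrative and this is not the exact `SearchTransportService` code, just the increment/decrement/snapshot pattern:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch (illustrative names) of the per-node pending-search counter.
final class PendingSearchCounter {
    // nodeId -> number of in-flight search requests from this coordinating node
    private final ConcurrentHashMap<String, Long> clientConnections = new ConcurrentHashMap<>();

    void onRequestSent(String nodeId) {
        // the first in-flight request to a node starts the count at 1
        clientConnections.compute(nodeId, (id, conns) -> conns == null ? 1L : conns + 1);
    }

    void onResponseReceived(String nodeId) {
        clientConnections.compute(nodeId, (id, conns) -> {
            // invariant discussed above: a response implies a previously recorded request
            assert conns != null && conns >= 1 : "expected a pending count for node " + nodeId;
            // remove the entry when the last request completes so the map does not
            // leak entries for nodes that leave the cluster forever
            return conns == 1 ? null : conns - 1;
        });
    }

    Map<String, Long> getPendingSearchRequests() {
        // hand out a copy rather than the live map so callers see a stable snapshot
        return Collections.unmodifiableMap(new HashMap<>(clientConnections));
    }
}
```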
final ResponseCollectorService.ComputedNodeStats stats = maybeStats.get();
final int updatedQueue = (minStats.queueSize + stats.queueSize) / 2;
final long updatedResponse = (long) (minStats.responseTime + stats.responseTime) / 2;
final long updatedService = (long) (minStats.serviceTime + stats.serviceTime) / 2;
nit: the casts should not be necessary?
They are required; without them you get `error: incompatible types: possible lossy conversion from double to long`.
oh, I had not realized we stored those times as doubles
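To make the effect of this halfway-averaging concrete: every time a node loses the ranking, its recorded stats move halfway toward the winner's, so a temporarily degraded node (for example one that just had a long GC pause) converges back geometrically instead of being excluded forever. A toy illustration, not the PR's code:

```java
public class AveragingDecayDemo {
    public static void main(String[] args) {
        double best = 10.0;      // the "best" node's service time EWMA
        double degraded = 200.0; // a node whose service time spiked (e.g. a long GC)
        for (int i = 0; i < 4; i++) {
            degraded = (best + degraded) / 2;   // the (min + stats) / 2 adjustment above
            System.out.println(degraded);       // 105.0, 57.5, 33.75, 21.875
        }
    }
}
```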
This is indeed something where the behavior is a little different. Consider four nodes; after 8 serial requests, the request distribution is: d1=2, d2=4, d3=2. Yes, it is possible to conceive of a situation where serial requests go to only one node. I would posit that if the requests all went to one node, and if that node's …
Just had a quick discussion with Lee to clarify my concerns. Commenting here as well for reference.
I think it matters. If you have more shards than cores per node, or if you have spinning disks that are not good at reading multiple locations concurrently, then querying a single node instead of distributing the query across multiple nodes is going to result in worse latencies.
This is not what I am worried about. I actually think the algorithm and your idea to average metrics with those of the best node are going to do a good job at distributing the load based on the respective recent performance of nodes. My concern is about the fact that for a single search request, this algorithm will generally query fewer nodes than round-robin, which can be an issue for latency for the aforementioned reasons.
Increase the number of pending search requests used for calculating rank when chosen. When a node gets chosen, this increases the number of search counts for the winning node so that it will not be as likely to be chosen again for non-concurrent search requests.
LGTM!
} else {
    // One or both of the nodes don't have stats, treat them as equal
    return 0;
}
I'm wondering whether the fact that the order is not transitive could confuse sorting. For instance, if you have s1 and s2 so that s1 < s2, and s3 which is null, then s1 and s2 are both equal to s3 but not equal to each other. Maybe we should make nulls always compare less than non-nulls so that the order is total?
I think it's okay to keep the contract of treating situations where both nodes do not have stats as equal; I also expect it to be a very, very tiny margin of requests, since `null` stats only occur on a brand new node with 0 prior searches.
Actually, the javadocs of `Arrays.sort` say that an `IllegalArgumentException` may be thrown if the comparator violates the `Comparator` contract, and the `Comparator` javadocs say that it must implement a total ordering, so I think it's important to make nulls compare consistently less than or greater than non-null values.
Okay, it's not strictly nulls (since the nodes do exist, their `Optional`s are just empty), but I understand what you're saying. I'll change this to make missing values compare consistently less.
Doh, they are nulls; I originally wrote it with `Optional` but it's different now, sorry!
Okay I pushed 3d1dd2b for this
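For anyone following along, the total ordering being asked for can be sketched like this (illustrative only, not the exact code pushed in 3d1dd2b). A brand-new node that has served no searches yet has no stats, so its rank is `null`; ordering nulls consistently before ranked nodes keeps the comparator a total order, as the `Comparator`/`Arrays.sort` contract requires:

```java
import java.util.Comparator;
import java.util.Map;

final class NodeRankOrdering {
    // Illustrative sketch: nodes without collected stats produce a null rank and
    // always sort first, so the ordering stays total and Arrays.sort is happy.
    static Comparator<String> byNodeRank(Map<String, Double> nodeRanks) {
        return Comparator.comparing(
                nodeRanks::get,                           // may return null for brand-new nodes
                Comparator.nullsFirst(Double::compareTo));
    }
}
```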
@@ -58,6 +58,7 @@
import java.net.UnknownHostException;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
leftover?
// adjust the non-winner nodes' stats so they will get a chance to receive queries
adjustStats(collector, nodeStats, minNode, minStats);
if (minNode != null) {
If there are ties, then minNode might not be the node that we first try to send the shard request to. Should we update the stats of the node id of the first shard of `sortedShards` instead?
I did make this change, to update the stats of the first shard of `sortedShards`; it also turns out to be more efficient because I don't have to loop through everything twice!
// it only affects the captured node search counts, which is
// captured once for each query in TransportSearchAction
nodeSearchCounts.compute(minNode, (id, conns) -> conns == null ? 1 : conns + 1);
}
nit: I think it's confusing that a method called `rank` has side-effects (adjustStats, increasing node counts). Maybe split the application of side-effects into a separate method or rename it to make it clear it will update stats?
Sure, I'll rename this
@jpountz thanks! I pushed commits for your feedback and just re-ran the benchmarks to ensure it didn't make any appreciable difference. The numbers for different tests are the same as the prior benchmarking or slightly better! (I updated the big benchmarks page)
LGTM. I'm looking forward to getting feedback from users about this feature!
@elasticmachine retest this please
* Implement adaptive replica selection
* Randomly use adaptive replica selection for internal test cluster
* Use an action name *prefix* for retrieving pending requests
* Add unit test for replica selection
* don't use adaptive replica selection in SearchPreferenceIT
* Track client connections in a SearchTransportService instead of TransportService
* Bind `entry` pieces in local variables
* Add javadoc link to C3 paper and javadocs for stat adjustments
* Bind entry's key and value to local variables
* Remove unneeded actionNamePrefix parameter
* Use conns.longValue() instead of cached Long
* Add comments about removing entries from the map
* Pull out bindings for `entry` in IndexShardRoutingTable
* Use .compareTo instead of manually comparing
* add assert for connections not being null and gte to 1
* Copy map for pending search connections instead of "live" map
* Increase the number of pending search requests used for calculating rank when chosen (when a node gets chosen, this increases the number of search counts for the winning node so that it will not be as likely to be chosen again for non-concurrent search requests)
* Remove unused HashMap import
* Rename rank -> rankShardsAndUpdateStats
* Rename rankedActiveInitializingShardsIt -> activeInitializingShardsRankedIt
* Instead of precalculating winning node, use "winning" shard from ranked list
* Sort null ranked nodes before nodes that have a rank
* master:
  Allow abort of bulk items before processing (elastic#26434)
  [Tests] Improve testing of FieldSortBuilder (elastic#26437)
  Upgrade to lucene-7.0.0-snapshot-d94a5f0. (elastic#26441)
  Implement adaptive replica selection (elastic#26128)
  Build: Quiet bwc build output (elastic#26430)
  Migrate Search requests to use Writeable reading strategies (elastic#26428)
  Changed version from 7.0.0-alpha1 to 6.1.0 in the nested sorting serialization check.
  Remove dead path conf BWC code in build