Implement adaptive replica selection #26128

Merged Aug 31, 2017 (25 commits)

Commits:
bbeda3c
Implement adaptive replica selection
dakrone Jun 29, 2017
adc6f43
Randomly use adaptive replica selection for internal test cluster
dakrone Aug 10, 2017
62fe599
Use an action name *prefix* for retrieving pending requests
dakrone Aug 11, 2017
34d38e4
Add unit test for replica selection
dakrone Aug 14, 2017
e16754d
don't use adaptive replica selection in SearchPreferenceIT
dakrone Aug 14, 2017
c7122f8
Track client connections in a SearchTransportService instead of Trans…
dakrone Aug 21, 2017
2ba499e
Bind `entry` pieces in local variables
dakrone Aug 22, 2017
4f60243
Add javadoc link to C3 paper and javadocs for stat adjustments
dakrone Aug 22, 2017
b3dc3d9
Bind entry's key and value to local variables
dakrone Aug 22, 2017
205d78f
Merge remote-tracking branch 'origin/master' into adaptive-replica-se…
dakrone Aug 22, 2017
a41c75a
Remove unneeded actionNamePrefix parameter
dakrone Aug 22, 2017
546f5fb
Use conns.longValue() instead of cached Long
dakrone Aug 22, 2017
91f7a12
Add comments about removing entries from the map
dakrone Aug 22, 2017
3ddc0ac
Pull out bindings for `entry` in IndexShardRoutingTable
dakrone Aug 22, 2017
a451763
Use .compareTo instead of manually comparing
dakrone Aug 22, 2017
8674249
add assert for connections not being null and gte to 1
dakrone Aug 23, 2017
fc59d53
Copy map for pending search connections instead of "live" map
dakrone Aug 23, 2017
1beca3b
Merge remote-tracking branch 'origin/master' into adaptive-replica-se…
dakrone Aug 28, 2017
e945a5d
Increase the number of pending search requests used for calculating r…
dakrone Aug 29, 2017
6df44f4
Remove unused HashMap import
dakrone Aug 30, 2017
62747bf
Rename rank -> rankShardsAndUpdateStats
dakrone Aug 30, 2017
e289581
Rename rankedActiveInitializingShardsIt -> activeInitializingShardsRa…
dakrone Aug 30, 2017
dcae338
Instead of precalculating winning node, use "winning" shard from rank…
dakrone Aug 30, 2017
3d1dd2b
Sort null ranked nodes before nodes that have a rank
dakrone Aug 30, 2017
9c55c64
Merge remote-tracking branch 'origin/master' into adaptive-replica-se…
dakrone Aug 31, 2017
@@ -61,7 +61,7 @@ public void onResponse(SearchPhaseResult response) {
final int queueSize = queryResult.nodeQueueSize();
final long responseDuration = System.nanoTime() - startNanos;
// EWMA/queue size may be -1 if the query node doesn't support capturing it
if (serviceTimeEWMA > 0 && queueSize > 0) {
if (serviceTimeEWMA > 0 && queueSize >= 0) {
collector.addNodeStatistics(nodeId, queueSize, responseDuration, serviceTimeEWMA);
}
}
@@ -57,6 +57,7 @@

import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.Map;
import java.util.function.BiFunction;
import java.util.function.Supplier;

@@ -193,6 +194,13 @@ public RemoteClusterService getRemoteClusterService() {
return transportService.getRemoteClusterService();
}

/**
* Return a map of nodeId to pending number of requests for the given action name
*/
public Map<String, Long> getPendingRequests(final String actionName) {
return transportService.getPendingRequests(actionName);
}

static class ScrollFreeContextRequest extends TransportRequest {
private long id;

@@ -23,6 +23,7 @@
import org.elasticsearch.action.OriginalIndices;
import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsGroup;
import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsResponse;
import org.elasticsearch.action.search.SearchAction;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.HandledTransportAction;
import org.elasticsearch.cluster.ClusterState;
@@ -284,8 +285,9 @@ private void executeSearch(SearchTask task, SearchTimeProvider timeProvider, Sea
for (int i = 0; i < indices.length; i++) {
concreteIndices[i] = indices[i].getName();
}
Map<String, Long> nodeSearchCounts = searchTransportService.getPendingRequests(SearchAction.NAME);
Contributor:

This is puzzling to me: why are you using SearchAction.NAME? It's the action that acts as a coordinator, and we don't necessarily run this from within the cluster, so the counts are expected to be 0 for almost all nodes. I wonder if that should be the shard actions here?

Member Author:

You're right! I think instead it should be the prefix "indices:data/read/search" so that all of these match it.

I'll make that change and then re-run the benchmarks to see if it affects things; it may mean we can lower b=4 back down to b=3. Good catch!
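The prefix idea discussed above can be sketched as follows. Every name here (`PendingRequestTracker` and its methods) is illustrative, not the PR's actual TransportService API; the point is that shard-level actions such as "indices:data/read/search[phase/query]" all share the "indices:data/read/search" prefix, so counting by prefix captures all of them:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of per-node pending-request counting under an
// action-name prefix. Not the PR's code; a minimal illustration only.
class PendingRequestTracker {
    // action name -> node id -> number of in-flight requests
    private final Map<String, Map<String, AtomicLong>> pending = new ConcurrentHashMap<>();

    void onRequestSent(String action, String nodeId) {
        pending.computeIfAbsent(action, a -> new ConcurrentHashMap<>())
               .computeIfAbsent(nodeId, n -> new AtomicLong())
               .incrementAndGet();
    }

    void onResponseReceived(String action, String nodeId) {
        Map<String, AtomicLong> byNode = pending.get(action);
        if (byNode != null) {
            AtomicLong count = byNode.get(nodeId);
            if (count != null) {
                count.decrementAndGet();
            }
        }
    }

    // Sum pending counts per node across every action matching the prefix,
    // returning a snapshot copy rather than a "live" view (compare fc59d53).
    Map<String, Long> getPendingRequests(String actionNamePrefix) {
        Map<String, Long> counts = new HashMap<>();
        for (Map.Entry<String, Map<String, AtomicLong>> action : pending.entrySet()) {
            if (action.getKey().startsWith(actionNamePrefix)) {
                for (Map.Entry<String, AtomicLong> node : action.getValue().entrySet()) {
                    counts.merge(node.getKey(), node.getValue().get(), Long::sum);
                }
            }
        }
        return counts;
    }
}
```

A coordinator-only action like "cluster:monitor/health" would not match the prefix, which is exactly why the counts stop being 0 on data nodes.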

GroupShardsIterator<ShardIterator> localShardsIterator = clusterService.operationRouting().searchShards(clusterState,
concreteIndices, routingMap, searchRequest.preference());
concreteIndices, routingMap, searchRequest.preference(), searchService.getResponseCollectorService(), nodeSearchCounts);
GroupShardsIterator<SearchShardIterator> shardIterators = mergeShardsIterators(localShardsIterator, localIndices,
remoteShardIterators);

@@ -29,18 +29,24 @@
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.node.ResponseCollectorService;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Optional;
import java.util.OptionalDouble;
import java.util.Set;
import java.util.stream.Collectors;

import static java.util.Collections.emptyMap;

@@ -261,6 +267,142 @@ public ShardIterator activeInitializingShardsIt(int seed) {
return new PlainShardIterator(shardId, ordered);
}

/**
* Returns an iterator over active and initializing shards, ordered by the adaptive replica
* selection formula. It ensures, though, that the order is random within active shards of the
* same (or missing) rank, and that initializing shards are the last to iterate through.
*/
public ShardIterator rankedActiveInitializingShardsIt(@Nullable ResponseCollectorService collector,
Contributor:

Down the road we really need to work on the number of objects being created in this process. It can totally be a follow-up, but I think we can abstract it away quite nicely since it's all keyed by the node id and the set of nodes is static. We can use a BytesRefHash with parallel arrays in the future, which also prevents all the boxing.

Member Author:

Yes, I totally agree. At one point it was very elegantly implemented using streams, but the streams were too slow compared to their imperative counterparts, so it's definitely something I'd like to address in the future.

Contributor:

++

Contributor:

nit: the other impl is called activeInitializingShardsRandomIt, so maybe we should rename this one to activeInitializingShardsRankedIt, or rename the other one to randomActiveInitializingShardsIt?

@Nullable Map<String, Long> nodeSearchCounts) {
final int seed = shuffler.nextSeed();
if (allInitializingShards.isEmpty()) {
return new PlainShardIterator(shardId, rank(shuffler.shuffle(activeShards, seed), collector, nodeSearchCounts));
}

ArrayList<ShardRouting> ordered = new ArrayList<>(activeShards.size() + allInitializingShards.size());
List<ShardRouting> rankedActiveShards = rank(shuffler.shuffle(activeShards, seed), collector, nodeSearchCounts);
ordered.addAll(rankedActiveShards);
List<ShardRouting> rankedInitializingShards = rank(allInitializingShards, collector, nodeSearchCounts);
ordered.addAll(rankedInitializingShards);
return new PlainShardIterator(shardId, ordered);
}

private static Set<String> getAllNodeIds(final List<ShardRouting> shards) {
final Set<String> nodeIds = new HashSet<>();
for (ShardRouting shard : shards) {
nodeIds.add(shard.currentNodeId());
}
return nodeIds;
}

private static Map<String, Optional<ResponseCollectorService.ComputedNodeStats>>
getNodeStats(final Set<String> nodeIds, final ResponseCollectorService collector) {

final Map<String, Optional<ResponseCollectorService.ComputedNodeStats>> nodeStats = new HashMap<>(nodeIds.size());
for (String nodeId : nodeIds) {
nodeStats.put(nodeId, collector.getNodeStatistics(nodeId));
}
return nodeStats;
}

private static Map<String, Double> rankNodes(final Map<String, Optional<ResponseCollectorService.ComputedNodeStats>> nodeStats,
final Map<String, Long> nodeSearchCounts) {
final Map<String, Double> nodeRanks = new HashMap<>(nodeStats.size());
for (Map.Entry<String, Optional<ResponseCollectorService.ComputedNodeStats>> entry : nodeStats.entrySet()) {
entry.getValue().ifPresent(stats -> {
Contributor:

Can you unwrap the entry here too?

nodeRanks.put(entry.getKey(), stats.rank(nodeSearchCounts.getOrDefault(entry.getKey(), 1L)));
});
}
return nodeRanks;
}

private static void adjustStats(final ResponseCollectorService collector,
final Map<String, Optional<ResponseCollectorService.ComputedNodeStats>> nodeStats,
final String minNodeId,
final ResponseCollectorService.ComputedNodeStats minStats) {
if (minNodeId != null) {
for (Map.Entry<String, Optional<ResponseCollectorService.ComputedNodeStats>> entry : nodeStats.entrySet()) {
if (entry.getKey().equals(minNodeId) == false && entry.getValue().isPresent()) {
final ResponseCollectorService.ComputedNodeStats stats = entry.getValue().get();
final int updatedQueue = (minStats.queueSize + stats.queueSize) / 2;
final long updatedResponse = (long) (minStats.responseTime + stats.responseTime) / 2;
final long updatedService = (long) (minStats.serviceTime + stats.serviceTime) / 2;
Contributor:

nit: the casts should not be necessary?

Member Author:

They are required; without them you get "error: incompatible types: possible lossy conversion from double to long".

Contributor:

Oh, I had not realized we stored those times as doubles.

collector.addNodeStatistics(stats.nodeId, updatedQueue, updatedResponse, updatedService);
}
}
}
}

private static List<ShardRouting> rank(List<ShardRouting> shards, final ResponseCollectorService collector,
final Map<String, Long> nodeSearchCounts) {
if (collector == null || nodeSearchCounts == null || shards.size() <= 1) {
return shards;
}

// Retrieve which nodes we can potentially send the query to
final Set<String> nodeIds = getAllNodeIds(shards);
final int nodeCount = nodeIds.size();

final Map<String, Optional<ResponseCollectorService.ComputedNodeStats>> nodeStats = getNodeStats(nodeIds, collector);

// Retrieve all the nodes the shards exist on
final Map<String, Double> nodeRanks = rankNodes(nodeStats, nodeSearchCounts);

String minNode = null;
ResponseCollectorService.ComputedNodeStats minStats = null;
// calculate the "winning" node and its stats (for adjusting other nodes later)
for (Map.Entry<String, Optional<ResponseCollectorService.ComputedNodeStats>> entry : nodeStats.entrySet()) {
if (entry.getValue().isPresent()) {
ResponseCollectorService.ComputedNodeStats stats = entry.getValue().get();
double rank = stats.rank(nodeSearchCounts.getOrDefault(entry.getKey(), 1L));
Contributor:

I'd find it easier to read if you pulled entry.getKey and entry.getValue into their own variables with meaningful names.

if (minStats == null || rank < minStats.rank(nodeSearchCounts.getOrDefault(minStats.nodeId, 1L))) {
minStats = stats;
minNode = entry.getKey();
}
}
}

// sort all shards based on the shard rank
ArrayList<ShardRouting> sortedShards = new ArrayList<>(shards);
Collections.sort(sortedShards, new NodeRankComparator(nodeRanks));

// adjust the non-winner nodes' stats so they will get a chance to receive queries
adjustStats(collector, nodeStats, minNode, minStats);

return sortedShards;
}
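The stats.rank(...) calls above implement the scoring formula from the C3 paper that commit 4f60243 links in the javadocs. For context, the paper scores each server s as follows (lower is better; reproduced here from the paper, so verify against the linked javadoc rather than treating this as the PR's exact code):

```latex
% \bar{R}_s: EWMA response time this client has seen for server s
% \bar{\mu}_s: EWMA service rate, so 1/\bar{\mu}_s is the service time
% \bar{q}_s: EWMA of the server's queue size
% os_s: this client's outstanding requests to s; w: concurrency compensation
% b: queue-size exponent (b = 3 in the paper; the thread above discusses b = 4)
\Psi_s \;=\; \bar{R}_s \;-\; \frac{1}{\bar{\mu}_s} \;+\; \frac{\hat{q}_s^{\,b}}{\bar{\mu}_s},
\qquad
\hat{q}_s \;=\; 1 \;+\; os_s \cdot w \;+\; \bar{q}_s
```

The exponent b on the estimated queue size is what penalizes heavily queued nodes far more than merely slow ones, which is why tuning b (the b=3 vs b=4 discussion above) matters.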

private static class NodeRankComparator implements Comparator<ShardRouting> {
private final Map<String, Double> nodeRanks;

NodeRankComparator(Map<String, Double> nodeRanks) {
this.nodeRanks = nodeRanks;
}

@Override
public int compare(ShardRouting s1, ShardRouting s2) {
if (s1.currentNodeId().equals(s2.currentNodeId())) {
// these shards are on the same node
return 0;
}
Double shard1rank = nodeRanks.get(s1.currentNodeId());
Double shard2rank = nodeRanks.get(s2.currentNodeId());
if (shard1rank != null && shard2rank != null) {
if (shard1rank < shard2rank) {
return -1;
} else if (shard2rank < shard1rank) {
return 1;
} else {
// Yahtzee!
return 0;
}
Contributor:

Or just return shard1rank.compareTo(shard2rank)?

} else {
// One or both of the nodes don't have stats
return 0;
}
Contributor:

I'm wondering whether the fact that the order is not transitive could confuse sorting. For instance, if you have s1 and s2 such that s1 < s2, and an s3 which is null, then s1 and s2 are both equal to s3 but not equal to each other. Maybe we should make nulls always compare less than non-nulls so that the order is total?

Member Author:

I think it's okay to keep the contract of treating situations where both nodes do not have stats as equal. I also expect this to affect a very, very tiny margin of requests, since null stats only occur on a brand-new node with 0 prior searches.

Contributor (@jpountz), Aug 30, 2017:

Actually, the javadocs of Arrays.sort say that an IllegalArgumentException may be thrown if the comparator violates the Comparator contract, and the Comparator javadocs say that it must implement a total ordering, so I think it's important to make nulls compare consistently less than or greater than non-null values.

Member Author (@dakrone), Aug 30, 2017:

Okay, it's not strictly nulls (since the nodes do exist; their Optionals are just empty), but I understand what you're saying. I'll change this to make missing values compare consistently less.

Member Author:

Doh, they are nulls. I originally wrote it with Optional but it's different now, sorry!

Member Author:

Okay, I pushed 3d1dd2b for this.

}
}
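The total-ordering discussion above can be illustrated with a small standalone sketch (the class and method names are illustrative, not the PR's code). It sorts missing (null) ranks consistently before ranked nodes, which is the behavior dakrone's follow-up commit 3d1dd2b adopts:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal sketch: a comparator over node ranks where a missing (null) rank
// consistently sorts before any real rank. Treating "no rank" as equal to
// everything would violate transitivity (s1 < s2 but s1 == null == s2),
// breaking the total-order contract that Collections.sort relies on.
class RankComparatorDemo {
    static int compareRanks(Double r1, Double r2) {
        if (r1 == null && r2 == null) {
            return 0;            // neither node has stats yet
        } else if (r1 == null) {
            return -1;           // unranked nodes consistently sort first
        } else if (r2 == null) {
            return 1;
        }
        return r1.compareTo(r2); // both present: natural double ordering
    }

    public static void main(String[] args) {
        List<Double> ranks = new ArrayList<>(Arrays.asList(2.5, null, 1.0));
        ranks.sort(RankComparatorDemo::compareRanks);
        System.out.println(ranks); // prints [null, 1.0, 2.5]
    }
}
```

With this ordering the comparator is a total order, so sorting can never throw the IllegalArgumentException that the Comparator contract otherwise permits.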

/**
* Returns true if no primaries are active or initializing for this shard
*/
@@ -28,10 +28,12 @@
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.IndexNotFoundException;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.index.shard.ShardNotFoundException;
import org.elasticsearch.node.ResponseCollectorService;

import java.util.ArrayList;
import java.util.Arrays;
@@ -43,13 +45,24 @@

public class OperationRouting extends AbstractComponent {

public static final Setting<Boolean> USE_ADAPTIVE_REPLICA_SELECTION_SETTING =
Contributor:

If this is false, we should not collect any statistics in the SearchTransportService either, no?

Member Author:

I think we should continue to collect the stats; since they are all moving averages, it's good to be able to turn ARS on and not have the numbers be wildly inaccurate. What do you think? I could go either way, though I think toggling the collection on and off is going to be more complex.

Contributor:

+1 to keep collecting

Setting.boolSetting("cluster.routing.use_adaptive_replica_selection", false,
Setting.Property.Dynamic, Setting.Property.NodeScope);

private String[] awarenessAttributes;
private boolean useAdaptiveReplicaSelection;

public OperationRouting(Settings settings, ClusterSettings clusterSettings) {
super(settings);
this.awarenessAttributes = AwarenessAllocationDecider.CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING.get(settings);
this.useAdaptiveReplicaSelection = USE_ADAPTIVE_REPLICA_SELECTION_SETTING.get(settings);
clusterSettings.addSettingsUpdateConsumer(AwarenessAllocationDecider.CLUSTER_ROUTING_ALLOCATION_AWARENESS_ATTRIBUTE_SETTING,
this::setAwarenessAttributes);
clusterSettings.addSettingsUpdateConsumer(USE_ADAPTIVE_REPLICA_SELECTION_SETTING, this::setUseAdaptiveReplicaSelection);
}

private void setUseAdaptiveReplicaSelection(boolean useAdaptiveReplicaSelection) {
this.useAdaptiveReplicaSelection = useAdaptiveReplicaSelection;
}

private void setAwarenessAttributes(String[] awarenessAttributes) {
@@ -61,19 +74,33 @@ public ShardIterator indexShards(ClusterState clusterState, String index, String
}

public ShardIterator getShards(ClusterState clusterState, String index, String id, @Nullable String routing, @Nullable String preference) {
return preferenceActiveShardIterator(shards(clusterState, index, id, routing), clusterState.nodes().getLocalNodeId(), clusterState.nodes(), preference);
return preferenceActiveShardIterator(shards(clusterState, index, id, routing), clusterState.nodes().getLocalNodeId(), clusterState.nodes(), preference, null, null);
}

public ShardIterator getShards(ClusterState clusterState, String index, int shardId, @Nullable String preference) {
final IndexShardRoutingTable indexShard = clusterState.getRoutingTable().shardRoutingTable(index, shardId);
return preferenceActiveShardIterator(indexShard, clusterState.nodes().getLocalNodeId(), clusterState.nodes(), preference);
return preferenceActiveShardIterator(indexShard, clusterState.nodes().getLocalNodeId(), clusterState.nodes(), preference, null, null);
}

public GroupShardsIterator<ShardIterator> searchShards(ClusterState clusterState,
String[] concreteIndices,
@Nullable Map<String, Set<String>> routing,
@Nullable String preference) {
return searchShards(clusterState, concreteIndices, routing, preference, null, null);
}

public GroupShardsIterator<ShardIterator> searchShards(ClusterState clusterState, String[] concreteIndices, @Nullable Map<String, Set<String>> routing, @Nullable String preference) {

public GroupShardsIterator<ShardIterator> searchShards(ClusterState clusterState,
String[] concreteIndices,
@Nullable Map<String, Set<String>> routing,
@Nullable String preference,
@Nullable ResponseCollectorService collectorService,
@Nullable Map<String, Long> nodeCounts) {
final Set<IndexShardRoutingTable> shards = computeTargetedShards(clusterState, concreteIndices, routing);
final Set<ShardIterator> set = new HashSet<>(shards.size());
for (IndexShardRoutingTable shard : shards) {
ShardIterator iterator = preferenceActiveShardIterator(shard, clusterState.nodes().getLocalNodeId(), clusterState.nodes(), preference);
ShardIterator iterator = preferenceActiveShardIterator(shard,
clusterState.nodes().getLocalNodeId(), clusterState.nodes(), preference, collectorService, nodeCounts);
if (iterator != null) {
set.add(iterator);
}
@@ -107,10 +134,17 @@ private Set<IndexShardRoutingTable> computeTargetedShards(ClusterState clusterSt
return set;
}

private ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable indexShard, String localNodeId, DiscoveryNodes nodes, @Nullable String preference) {
private ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable indexShard, String localNodeId,
DiscoveryNodes nodes, @Nullable String preference,
@Nullable ResponseCollectorService collectorService,
@Nullable Map<String, Long> nodeCounts) {
if (preference == null || preference.isEmpty()) {
if (awarenessAttributes.length == 0) {
return indexShard.activeInitializingShardsRandomIt();
if (useAdaptiveReplicaSelection) {
return indexShard.rankedActiveInitializingShardsIt(collectorService, nodeCounts);
} else {
return indexShard.activeInitializingShardsRandomIt();
}
} else {
return indexShard.preferAttributesActiveInitializingShardsIt(awarenessAttributes, nodes);
}
@@ -141,7 +175,11 @@ private ShardIterator preferenceActiveShardIterator(IndexShardRoutingTable index
// no more preference
if (index == -1 || index == preference.length() - 1) {
if (awarenessAttributes.length == 0) {
return indexShard.activeInitializingShardsRandomIt();
if (useAdaptiveReplicaSelection) {
return indexShard.rankedActiveInitializingShardsIt(collectorService, nodeCounts);
} else {
return indexShard.activeInitializingShardsRandomIt();
}
} else {
return indexShard.preferAttributesActiveInitializingShardsIt(awarenessAttributes, nodes);
}
@@ -32,6 +32,7 @@
import org.elasticsearch.cluster.NodeConnectionsService;
import org.elasticsearch.cluster.action.index.MappingUpdatedAction;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.routing.OperationRouting;
import org.elasticsearch.cluster.routing.allocation.DiskThresholdSettings;
import org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator;
import org.elasticsearch.cluster.routing.allocation.decider.AwarenessAllocationDecider;
@@ -402,6 +403,7 @@ public void apply(Settings value, Settings current, Settings previous) {
SearchModule.INDICES_MAX_CLAUSE_COUNT_SETTING,
ThreadPool.ESTIMATED_TIME_INTERVAL_SETTING,
FastVectorHighlighter.SETTING_TV_HIGHLIGHT_MULTI_VALUE,
Node.BREAKER_TYPE_KEY
Node.BREAKER_TYPE_KEY,
OperationRouting.USE_ADAPTIVE_REPLICA_SELECTION_SETTING
)));
}
@@ -92,10 +92,6 @@ public static EsThreadPoolExecutor newFixed(String name, int size, int queueCapa
public static EsThreadPoolExecutor newAutoQueueFixed(String name, int size, int initialQueueCapacity, int minQueueSize,
int maxQueueSize, int frameSize, TimeValue targetedResponseTime,
ThreadFactory threadFactory, ThreadContext contextHolder) {
if (initialQueueCapacity == minQueueSize && initialQueueCapacity == maxQueueSize) {
return newFixed(name, size, initialQueueCapacity, threadFactory, contextHolder);
}

if (initialQueueCapacity <= 0) {
throw new IllegalArgumentException("initial queue capacity for [" + name + "] executor must be positive, got: " +
initialQueueCapacity);
@@ -79,9 +79,7 @@ public final class QueueResizingEsThreadPoolExecutor extends EsThreadPoolExecuto
this.minQueueSize = minQueueSize;
this.maxQueueSize = maxQueueSize;
this.targetedResponseTimeNanos = targetedResponseTime.getNanos();
// We choose to start the EWMA with the targeted response time, reasoning that it is a
// better start point for a realistic task execution time than starting at 0
this.executionEWMA = new ExponentiallyWeightedMovingAverage(EWMA_ALPHA, targetedResponseTimeNanos);
this.executionEWMA = new ExponentiallyWeightedMovingAverage(EWMA_ALPHA, 0);
logger.debug("thread pool [{}] will adjust queue by [{}] when determining automatic queue size",
name, QUEUE_ADJUSTMENT_AMOUNT);
}
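The hunk above changes the execution EWMA's initial value from the targeted response time to 0. A minimal sketch of a standard exponentially weighted moving average (an assumption about how ExponentiallyWeightedMovingAverage behaves, not a copy of it) shows the effect of the seed: starting from 0, the average climbs toward the true task time from below rather than starting at a possibly far-off target:

```java
// Hedged sketch of a standard EWMA: avg <- alpha * sample + (1 - alpha) * avg.
// Illustrates why the seed matters; class and names are illustrative only.
class Ewma {
    private final double alpha;
    private double avg;

    Ewma(double alpha, double initialAvg) {
        this.alpha = alpha;   // weight given to each new sample
        this.avg = initialAvg; // the seed value this diff changes to 0
    }

    void addValue(double sample) {
        avg = alpha * sample + (1 - alpha) * avg; // standard EWMA update
    }

    double getAverage() {
        return avg;
    }

    public static void main(String[] args) {
        Ewma seededAtZero = new Ewma(0.3, 0);
        for (int i = 0; i < 10; i++) {
            seededAtZero.addValue(100); // constant 100ns task time
        }
        // after ten identical samples the average is ~97: close to,
        // but still below, the true value of 100
        System.out.println(seededAtZero.getAverage());
    }
}
```

Seeding at 0 biases early queue-size decisions toward "tasks are fast", whereas the old seed biased them toward the configured target time; either way the bias decays geometrically as samples arrive.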