IllegalArgumentException: Values less than -1 bytes are not supported on DiskThresholdDecider #48380
Labels
>bug
:Distributed Coordination/Allocation
All issues relating to the decision making around placing a shard (both master logic & on the nodes)
v7.4.0
Comments
ywelsch added the :Distributed Coordination/Allocation label on Oct 23, 2019
Pinging @elastic/es-distributed (:Distributed/Allocation)
DaveCTurner added a commit to DaveCTurner/elasticsearch that referenced this issue on Oct 23, 2019:
Today it is possible that the total size of all relocating shards exceeds the total amount of free disk space. For instance, this may be caused by another user of the same disk increasing their disk usage, or may be due to how Elasticsearch double-counts relocations that are nearly complete, particularly if there are many concurrent relocations in progress. The `DiskThresholdDecider` treats negative free space similarly to zero free space, but it then fails when rendering the messages that explain its decision. This commit fixes its handling of negative free space. Fixes elastic#48380
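The failure mode described in the commit message can be illustrated with a minimal sketch. This is hypothetical code, not the actual Elasticsearch implementation: the helper names (`format`, `explainBuggy`, `explainFixed`) are invented, but the `Values less than -1 bytes are not supported` check mirrors the validation that produced the exception in this issue, and the fix is the same idea of clamping negative free space to zero before rendering the decision message.

```java
// Hypothetical sketch (not the actual Elasticsearch code) of the bug and fix.
public class DiskThresholdSketch {

    // Mimics the byte-size validation that rejects values below -1 bytes,
    // i.e. the source of "Values less than -1 bytes are not supported".
    static String format(long bytes) {
        if (bytes < -1) {
            throw new IllegalArgumentException(
                "Values less than -1 bytes are not supported: " + bytes + "b");
        }
        return bytes + "b";
    }

    // Before the fix: in-flight relocations can push the projected free
    // space below zero, and rendering the explanation then throws.
    static String explainBuggy(long freeBytes, long relocatingBytes) {
        long remaining = freeBytes - relocatingBytes; // may go negative
        return "after relocations, " + format(remaining) + " would remain free";
    }

    // After the fix: negative free space is treated as zero when rendering,
    // matching how the decider already treated it when deciding.
    static String explainFixed(long freeBytes, long relocatingBytes) {
        long remaining = Math.max(0L, freeBytes - relocatingBytes);
        return "after relocations, " + format(remaining) + " would remain free";
    }

    public static void main(String[] args) {
        try {
            explainBuggy(100, 250); // remaining = -150, so format() throws
        } catch (IllegalArgumentException e) {
            System.out.println("buggy: " + e.getMessage());
        }
        System.out.println("fixed: " + explainFixed(100, 250));
    }
}
```

Running the sketch, the buggy path throws on a -150-byte value while the fixed path renders `0b`, which is why restarting the affected node (refreshing its disk usage stats) made the symptom disappear even before the code fix.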
DaveCTurner added a commit that referenced this issue on Oct 23, 2019:

Today it is possible that the total size of all relocating shards exceeds the total amount of free disk space. For instance, this may be caused by another user of the same disk increasing their disk usage, or may be due to how Elasticsearch double-counts relocations that are nearly complete, particularly if there are many concurrent relocations in progress. The `DiskThresholdDecider` treats negative free space similarly to zero free space, but it then fails when rendering the messages that explain its decision. This commit fixes its handling of negative free space. Fixes #48380

DaveCTurner added three further commits that referenced this issue on Oct 23, 2019, each with the same commit message as above.
This was referenced Feb 3, 2020
Elasticsearch version (`bin/elasticsearch --version`): 7.4.0

Plugins installed: [repository-s3]

JVM version (`java -version`):

OS version (`uname -a` if on a Unix-like system): running on ESS

Description of the problem including expected versus actual behavior:
Suddenly, during a plan migration, plans started to fail due to the exception; `instance-0000000104` is the master node.

Steps to reproduce:
Sorry, it's the first and only time I have seen this, so I have no steps.

Provide logs (if relevant):
The exception is shown above. Additionally, after @ywelsch's suggestion, I enabled the DEBUG level on `org.elasticsearch.cluster.routing.allocation.decider`, which revealed:

I identified that node using `_cluster/state` and restarted it, and the problem seems to be gone.