[CI] CorruptedFileIT.testReplicaCorruption failure #41899
Labels
:Distributed Indexing/Recovery (anything around constructing a new shard, either from a local or a remote source)
>test-failure (triaged test failures from CI)
Comments
matriv added the >test-failure and :Distributed Indexing/Recovery labels on May 7, 2019
Pinging @elastic/es-distributed
Logs are gone and there have been no recent failures of this test. Closing; we will reopen when it fails again.
This has failed again, here are the logs: https://gradle-enterprise.elastic.co/s/i4zmfbd3xj4vg
dnhatn added a commit that referenced this issue on Sep 25, 2019 (four further commits with the same message followed on Sep 25 and 26, 2019):

We can have a large number of shard copies in this test. For example, the two recent failures had 24 and 27 copies respectively, and all replicas have to copy segment files because their stores are corrupted. Our CI needs more than 30 seconds to start all these copies. Note that in the two recent failures, the cluster went green just after the cluster health request timed out. Closes #41899
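The commit's diff is not shown here, so the following is only a minimal sketch of the kind of change the message describes: an ESIntegTestCase-based test waiting for green cluster health with an explicit, longer timeout instead of the default 30-second wait. The class name, index name, and 60-second value are illustrative assumptions, not the actual code from CorruptedFileIT.

```java
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.test.ESIntegTestCase;

// Sketch only: shows waiting longer for many shard copies to start. The real
// CorruptedFileIT.testReplicaCorruption contains the store-corruption and
// recovery logic that is elided here.
public class ReplicaCorruptionTimeoutSketch extends ESIntegTestCase {

    public void testReplicaCorruption() throws Exception {
        // ... create the index, corrupt the replica stores, and trigger recovery ...

        // With 24 to 27 shard copies all re-copying segment files, CI can need more
        // than the default 30-second cluster-health wait, so pass an explicit timeout.
        // ensureGreen(TimeValue, String...) is provided by ESIntegTestCase.
        ensureGreen(TimeValue.timeValueSeconds(60), "test");
    }
}
```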
Original report:
Failed for 7.0: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+7.0+intake/859/console
Reproduction line: does not reproduce locally
Example relevant log:
Frequency: not often; the last occurrence was April 3rd, 2019.