Rolling-upgrade RecoveryIT tests are broken #35597
It is perhaps notable that all of the failures I can see have been in the two-thirds upgraded cluster or the fully-upgraded cluster. We've also seen cases where the cluster does form but then times out in a subsequent health check:
(from https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.x+intake/186/console) Unfortunately the first few failures in the sequence of broken builds on 6.x seem unrelated to this one, and more changes have been pushed on top of the red CI, so it's hard to narrow down the point at which things started to break. Looking at the changes, I wonder if perhaps #34902 or #35332 could have this sort of effect? Anyhow, muted in 188de54.
@DaveCTurner Sorry to hear that, this is concerning. I'm looking at this.
Also muted the test in master in f9aff7b
Thanks @DaveCTurner, your preliminary investigation helped a lot. I reverted #35332 from master in d3d7c01 and 6.x in c70b8ac and unmuted the test.
* master: (59 commits)
  - SQL: Move internals from Joda to java.time (elastic#35649)
  - Add HLRC docs for Get Lifecycle Policy (elastic#35612)
  - Align RolloverStep's name with other step names (elastic#35655)
  - Watcher: Use joda method to get local TZ (elastic#35608)
  - Fix line length for org.elasticsearch.action.* files (elastic#35607)
  - Remove use of AbstractComponent in server (elastic#35444)
  - Deprecate types in count and msearch. (elastic#35421)
  - Refactor an ambigious TermVectorsRequest constructor. (elastic#35614)
  - [Scripting] Use Number as a return value for BucketAggregationScript (elastic#35653)
  - Removes AbstractComponent from several classes (elastic#35566)
  - [DOCS] Add beta warning to ILM pages. (elastic#35571)
  - Deprecate types in validate query requests. (elastic#35575)
  - Unmute BuildExamplePluginsIT
  - Revert "AwaitsFix the RecoveryIT suite - see elastic#35597"
  - Revert "[RCI] Check blocks while having index shard permit in TransportReplicationAction (elastic#35332)"
  - Remove remaining line length violations for o.e.action.admin.cluster (elastic#35156)
  - ML: Adjusing BWC version post backport to 6.6 (elastic#35605)
  - [TEST] Replace fields in response with actual values
  - Remove usages of CharSequence in Sets (elastic#35501)
  - AwaitsFix the RecoveryIT suite - see elastic#35597
  - ...
For a better explanation of the issue, see #35695 (comment)
After #35332 was merged, we noticed some test failures like #35597 in which one or more replica shards failed to be promoted to primaries because the primary-replica re-synchronization never succeeded. After some digging it appeared that the execution of the resync action was blocked by the presence of a global cluster block in the cluster state (in this case, the "no master" block), causing the resync action to fail when executed on the primary. Before #35332, such failures never happened because the TransportResyncReplicationAction skips the reroute phase, the only place where blocks were checked. With #35332, blocks are checked during reroute and also during the execution of the transport replication action on the primary. After some internal discussion, we decided that the TransportResyncReplicationAction should never be blocked: this action is part of the replica-to-primary promotion and makes sure that replicas are in sync, so it should not be blocked when the cluster state has no master or when the index is read-only. This commit changes the TransportResyncReplicationAction to make it obvious that it does not honor blocks. It also adds a simple test that fails if the resync action is blocked during the primary action execution. Closes #35597
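The mechanism described above can be sketched in a minimal, self-contained form. This is not the actual Elasticsearch code: all classes here are hypothetical stand-ins that only model the idea that a replication action declares a block level it honors, and that the resync action opts out by declaring none, so a "no master" block cannot stop replica-to-primary promotion.

```java
import java.util.Set;

public class ResyncBlockSketch {

    // Hypothetical stand-in for cluster block levels.
    enum ClusterBlockLevel { READ, WRITE, METADATA_READ, METADATA_WRITE }

    /** Base action: checks the active global blocks before executing on the primary. */
    static abstract class ReplicationAction {
        /** The block level this action honors, or null to bypass block checks entirely. */
        protected ClusterBlockLevel globalBlockLevel() {
            return ClusterBlockLevel.WRITE;
        }

        final String execute(Set<ClusterBlockLevel> activeBlocks) {
            ClusterBlockLevel level = globalBlockLevel();
            if (level != null && activeBlocks.contains(level)) {
                return "blocked";
            }
            return "executed";
        }
    }

    /** An ordinary write action: honors WRITE-level blocks. */
    static class IndexAction extends ReplicationAction {
    }

    /** The resync stand-in: declares no block level, so it is never blocked. */
    static class ResyncAction extends ReplicationAction {
        @Override
        protected ClusterBlockLevel globalBlockLevel() {
            return null; // runs even under a "no master" block
        }
    }

    public static void main(String[] args) {
        // Simulate a "no master" state, which blocks writes.
        Set<ClusterBlockLevel> noMaster =
            Set.of(ClusterBlockLevel.WRITE, ClusterBlockLevel.METADATA_WRITE);
        System.out.println("index:  " + new IndexAction().execute(noMaster));  // blocked
        System.out.println("resync: " + new ResyncAction().execute(noMaster)); // executed
    }
}
```

Under this sketch, a plain write is rejected while blocks are active, but the resync action proceeds, which mirrors why checking blocks on the primary (as #35332 did) broke the promotion path until resync was exempted.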
The 6.x intake builds have been broken for most of today, with clusters failing to pass their wait conditions properly:
This is from https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.x+intake/197/console although there are lots of other similar failures.
There's nothing obvious to me in the node logs themselves - the 3-node cluster looks to have formed fine. It does seem to reproduce locally, and @original-brownbear and I have been looking at this for a few hours without too much success. I will mute this test suite.