Initialize primary term for shrunk indices #25307
Conversation
Today, when an index is shrunk, the primary terms for its shards start from one. This is a problem, as the index will already contain sequence numbers assigned across primary terms. To preserve document-level sequence number semantics, the primary terms of the target shards must start from the maximum of all the shards in the source index. This commit causes this to be the case.
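The core idea can be sketched in isolation (this is not the actual Elasticsearch code; the array-based helper and class name are hypothetical stand-ins): the shrunk index's initial primary term is the maximum primary term over all shards of the source index, so that sequence numbers already assigned under higher terms keep their semantics.

```java
import java.util.stream.LongStream;

public class ShrinkPrimaryTerm {

    // Hypothetical helper: sourceTerms[i] holds the primary term of
    // source shard i. The target's shards all start from the maximum,
    // never from one, so existing (term, seqNo) pairs stay ordered.
    static long initialPrimaryTerm(long[] sourceTerms) {
        return LongStream.of(sourceTerms).max().orElse(1L);
    }

    public static void main(String[] args) {
        long[] sourceTerms = {1L, 3L, 2L};
        long term = initialPrimaryTerm(sourceTerms);
        System.out.println(term); // prints 3
        assert term == 3L;
    }
}
```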
This looks great. My only ask is to add a unit test to MetaDataCreateIndexServiceTests.
```java
tmpImdBuilder.settings(actualIndexSettings);

if (shrinkFromIndex != null) {
    final IndexMetaData sourceMetaData = currentState.metaData().getIndexSafe(shrinkFromIndex);
```
Can you please add a comment explaining why we do this?
I pushed a comment.
```java
ensureGreen();

// restart random data nodes to force the primary term for some shards to increase
for (int i = 0; i < randomIntBetween(0, 16); i++) {
```
Why do we need up to 16 restarts? Maybe it's faster to fail shards by getting IndexShard instances from internalCluster? Also, this for loop calls randomIntBetween(0, 16) on every iteration, so the bound is re-drawn many times; I'm not sure that's what you intended.
Sure, I pushed a change that does this.
@bleskes Actually this was my first approach before opting for the integration test in
Back to you @bleskes.
fair enough
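The loop-bound pitfall flagged in the review can be illustrated in isolation (the class and the randomIntBetween stand-in below are hypothetical, not the test framework's actual helper): evaluating a random bound inside the loop condition re-draws it on every iteration, whereas the fix is to draw it once before the loop.

```java
import java.util.Random;

public class LoopBound {
    static final Random RANDOM = new Random();

    // Stand-in for the test framework's randomIntBetween helper:
    // returns a uniformly random int in [min, max], inclusive.
    static int randomIntBetween(int min, int max) {
        return min + RANDOM.nextInt(max - min + 1);
    }

    public static void main(String[] args) {
        // Buggy pattern: the bound is re-evaluated on every iteration,
        // so the number of iterations is not a single uniform draw:
        // for (int i = 0; i < randomIntBetween(0, 16); i++) { ... }

        // Fixed pattern: draw the bound exactly once.
        final int restarts = randomIntBetween(0, 16);
        int performed = 0;
        for (int i = 0; i < restarts; i++) {
            performed++;
        }
        assert performed == restarts;
        System.out.println("performed " + performed + " of " + restarts);
    }
}
```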
Relates #10708