
Initialize primary term for shrunk indices #25307

Merged
jasontedor merged 4 commits into elastic:master from the shrink-primary-term branch on Jun 20, 2017

Conversation

jasontedor
Member

Today when an index is shrunk, the primary terms for its shards start from one. Yet, this is a problem as the index will already contain assigned sequence numbers across primary terms. To ensure document-level sequence number semantics, the primary terms of the target shards must start from the maximum of all the shards in the source index. This commit causes this to be the case.

Relates #10708
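
To make the rule concrete, here is a tiny worked example with made-up numbers (not taken from the PR): a three-shard source index whose shards have primary terms 1, 3, and 2 must produce a shrunk index whose shards all start at primary term 3, the maximum.

    // Illustrative numbers only, not from the PR.
    final long[] sourcePrimaryTerms = {1L, 3L, 2L};
    long startingTerm = 0;
    for (final long term : sourcePrimaryTerms) {
        startingTerm = Math.max(startingTerm, term);
    }
    // Every shard of the shrunk index starts at primary term 3, so sequence
    // numbers already assigned under terms 1..3 keep their document-level semantics.
    assert startingTerm == 3L;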

@bleskes bleskes (Contributor) left a comment


This looks great. My only ask is to add a unit test to MetaDataCreateIndexServiceTests.

tmpImdBuilder.settings(actualIndexSettings);

if (shrinkFromIndex != null) {
final IndexMetaData sourceMetaData = currentState.metaData().getIndexSafe(shrinkFromIndex);
Contributor

can you please add a comment explaining why we do this?

Member Author

I pushed a comment.
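
For orientation, here is a hedged sketch of how the starting primary term might be derived and applied next to the snippet above. The names mirror the snippet, but this is not the exact code or comment that was pushed, and numTargetShards is a hypothetical stand-in for the shrunk index's shard count.

    if (shrinkFromIndex != null) {
        final IndexMetaData sourceMetaData = currentState.metaData().getIndexSafe(shrinkFromIndex);
        // the shrunk index inherits sequence numbers that were assigned under the
        // source shards' primary terms, so its shards must not restart at term 1;
        // they start at the maximum primary term found across the source shards
        long startingPrimaryTerm = 0;
        for (int shardId = 0; shardId < sourceMetaData.getNumberOfShards(); shardId++) {
            startingPrimaryTerm = Math.max(startingPrimaryTerm, sourceMetaData.primaryTerm(shardId));
        }
        for (int shardId = 0; shardId < numTargetShards; shardId++) {
            tmpImdBuilder.primaryTerm(shardId, startingPrimaryTerm);
        }
    }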

ensureGreen();

// restart random data nodes to force the primary term for some shards to increase
for (int i = 0; i < randomIntBetween(0, 16); i++) {
Contributor

Why do we need up to 16 restarts? Maybe it's faster to fail shards by getting IndexShard instances from internalCluster? Also, this for loop calls randomIntBetween(0, 16) on every iteration of the condition; I'm not sure that's what you intended.

Member Author

Sure, I pushed a change that does this.
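
On the loop-bound point: because randomIntBetween(0, 16) sits in the loop condition, it is re-evaluated on every iteration, so the number of restarts is not a single random draw. A minimal sketch of the hoisted form is below; the change actually pushed follows the reviewer's suggestion and fails shards via internalCluster() instead, so this is illustrative only.

    // Draw the iteration count once instead of re-rolling it in the loop condition.
    final int iterations = randomIntBetween(0, 16);
    for (int i = 0; i < iterations; i++) {
        // restart a random data node here, or (as the pushed change does) fail a
        // primary obtained from internalCluster() directly
    }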

@jasontedor
Member Author

@bleskes Actually, this was my first approach before opting for the integration test in ShrinkIndexIT, but it's not easy to test anything meaningful there beyond what is already tested by my addition to ShrinkIndexIT without constructing the entire world/refactoring, and I don't think it's worth it. I think what I have in ShrinkIndexIT is good enough?

@jasontedor
Member Author

Back to you @bleskes.

@bleskes
Contributor

bleskes commented Jun 20, 2017

it's not easy to test anything meaningful there beyond what is already tested by my addition to ShrinkIndexIT without constructing the entire world/refactoring

fair enough

@jasontedor jasontedor merged commit 1f14d04 into elastic:master Jun 20, 2017
@jasontedor jasontedor deleted the shrink-primary-term branch June 20, 2017 19:12
@clintongormley clintongormley added the :Distributed Indexing/Engine label (Anything around managing Lucene and the Translog in an open shard.) and removed the :Sequence IDs label on Feb 14, 2018
Labels
:Distributed Indexing/Engine (Anything around managing Lucene and the Translog in an open shard.)
>feature
v6.0.0-beta1

4 participants