
Document changes to forcemerge as a breaking change for 6.5 #31689

Closed

jpountz opened this issue Jun 29, 2018 · 6 comments
Labels: blocker, :Distributed Indexing/Engine (anything around managing Lucene and the Translog in an open shard), >docs (general docs changes), stalled, v6.5.0

Comments

jpountz (Contributor) commented Jun 29, 2018

As of Lucene 7.5 (probably Elasticsearch 6.5), the max_num_segments option will only be a best effort, and will not be honored if it would require creating segments that are larger than the maximum segment size. https://issues.apache.org/jira/browse/LUCENE-7976

Stalled on the upgrade to Lucene 7.5.
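
For reference, the affected call looks roughly like this; a minimal sketch using the elasticsearch-py client, with the index name and segment count as placeholders:

```python
# Minimal sketch, assuming a local cluster and the elasticsearch-py client.
# "my-index" and the segment count are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Per this issue, starting with Lucene 7.5 the requested segment count is a
# best-effort target rather than a guarantee: segments that would exceed the
# maximum segment size are not merged further.
es.indices.forcemerge(index="my-index", max_num_segments=1)
```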

jpountz added the >docs, blocker, stalled, :Distributed Indexing/Engine, and v6.5.0 labels on Jun 29, 2018
elasticmachine (Collaborator) commented
Pinging @elastic/es-distributed

polyfractal (Contributor) commented
I think this is unstalled since #32390 landed?

I think the behavior is slightly different than originally intended, since #32291 also landed (i.e. we respect max_num_segments), but I think that also means we may end up with fewer segments than max_num_segments? The docs still state that you'll get exactly max_num_segments segments.

Related: #32323

jasontedor (Member) commented
Relates #36616

danielkasen commented
This leads to force merging down to 1 segment every time. So even if I ask for a force merge down to 20 segments, it runs the large operation and tries to shrink the index to 1 segment.
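
For illustration, the reported scenario corresponds to a request along these lines (sketch with the Python client; the index name is a placeholder):

```python
# Sketch of the reported scenario; "logs-000001" is a placeholder index name.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# The request asks for at most 20 segments, but the observed behavior was a
# merge all the way down to a single segment.
es.indices.forcemerge(index="logs-000001", max_num_segments=20)
```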

danielkasen commented
Any update on whether this is the expected behavior going forward? This is causing serious issues in our cluster because we are unable to perform force merge operations.

jpountz (Contributor, Author) commented Feb 1, 2019

In #32291 we made the decision that forcemerge should keep honoring the max_num_segments parameter.
