[
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495578#comment-16495578
]
Erick Erickson commented on LUCENE-7976:
----------------------------------------
Iteration N+1. This one removes the horrible loop that concerned [~mikemccand],
and good riddance to it. Also puts in all the rest of the changes so far.
2 out of 2,004 iterations of TestTieredMergePolicy.testPartialMerge failed
because a forceMerge specified with maxSegments != 1 didn't produce exactly
the number of segments requested. I changed the test a bit to accommodate
the fact that if we respect maxSegmentSize + 25% as an upper limit, there are
certainly situations where the resulting segment count cannot be exactly
what was specified. Is this acceptable? It's essentially a packing problem.
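To make the relaxed expectation concrete, here's a minimal, self-contained
sketch (illustrative names only, not the actual test code) of why equality
with maxSegments can't always hold once the maxSegmentSize + 25% ceiling is
respected:

    // Sketch only: names are made up, this is not TestTieredMergePolicy code.
    public class ExpectedSegmentCount {
      // With a per-segment ceiling of maxSegmentBytes * 1.25, no merge plan
      // can produce fewer than ceil(totalLiveBytes / ceiling) segments.
      static long fewestPossibleSegments(long totalLiveBytes, long maxSegmentBytes) {
        long ceiling = (long) (maxSegmentBytes * 1.25);
        return (totalLiveBytes + ceiling - 1) / ceiling;   // ceiling division
      }

      public static void main(String[] args) {
        long totalLiveBytes = 13L * 1024 * 1024 * 1024;    // say, 13G of live docs
        long maxSegmentBytes = 5L * 1024 * 1024 * 1024;    // 5G default max
        int maxSegments = 2;                               // forceMerge(2) requested
        long floor = Math.max(maxSegments,
            fewestPossibleSegments(totalLiveBytes, maxSegmentBytes));
        // 13G doesn't fit in 2 segments of at most 6.25G each, so a test
        // here would have to accept 3 segments instead of exactly 2.
        System.out.println("fewest achievable segments = " + floor);  // prints 3
      }
    }

The changed assertion is relaxed along these lines rather than requiring
strict equality with maxSegments.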
And of course, when the segment count _is_ 1 there should be no such
ambiguity; that realization is why two patches were uploaded so close to each other.
Meanwhile I'll run another couple of thousand iterations and the whole
precommit/test cycle again.
Pending more comments, I think we're close.
> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of
> very large segments
> -------------------------------------------------------------------------------------------------
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Erick Erickson
> Assignee: Erick Erickson
> Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch,
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch,
> LUCENE-7976.patch, LUCENE-7976.patch
>
>
> We're seeing situations "in the wild" where indexes that are very large on
> disk are handled quite easily by a single Lucene index. This is particularly
> true as features like docValues move data into MMapDirectory space. The
> current TMP algorithm allows on the order of 50% deleted documents, as per a
> dev-list conversation with Mike McCandless (and his blog here:
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate (think many
> TB), solutions like "you need to distribute your collection over more shards"
> become very costly. Additionally, the tempting "optimize" button exacerbates
> the issue, since once you form, say, a 100G segment (by
> optimizing/forceMerging) it is not eligible for merging again until 97.5G of
> the docs in it are deleted (with the current default 5G max segment size).
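> (Worked out, assuming TMP's existing rule that a segment is only considered
> for natural merging once its live size is below half the max segment size:
> half of 5G is 2.5G of live docs, so a 100G segment must accumulate
> 100G - 2.5G = 97.5G of deletions before it becomes eligible again.)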
> The proposal here would be to add a new parameter to TMP, something like
> <maxAllowedPctDeletedInBigSegments> (no, that's not a serious name; suggestions
> welcome), which would default to 100 (i.e., the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO
> > MATTER HOW LARGE. There are two cases (sketched in code below):
> >> the segment has < 5G "live" docs. In that case it would be merged with
> >> smaller segments to bring the resulting segment up to 5G. If no smaller
> >> segments exist, it would just be rewritten.
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize).
> >> It would be rewritten into a single segment removing all deleted docs no
> >> matter how big it is to start. The 100G example above would be rewritten
> >> to an 80G segment, for instance.
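> To pin down those two cases, here is a rough, illustrative sketch in plain
> Java (the names, including maxAllowedPctDeleted, are made up; this is not
> actual TieredMergePolicy code):
>
>     // Illustrative only -- made-up names, not TieredMergePolicy internals.
>     class ProposedRule {
>       static String planFor(double pctDeleted, long liveBytes,
>                             double maxAllowedPctDeleted, long maxSegmentBytes) {
>         if (pctDeleted <= maxAllowedPctDeleted) {
>           return "leave alone";                  // default of 100 == today's behavior
>         }
>         if (liveBytes < maxSegmentBytes) {
>           // Case 1: under 5G of live docs -- merge with smaller segments up
>           // toward maxSegmentBytes, or just rewrite it if no smaller ones exist.
>           return "merge up toward maxSegmentSize (or rewrite alone)";
>         }
>         // Case 2: over 5G of live docs (a forceMerge/optimize artifact) --
>         // rewrite as a singleton merge, dropping deleted docs, however large.
>         return "singleton rewrite";
>       }
>     }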
> Of course this would lead to potentially much more I/O, which is why the
> default would be the same behavior we see now. As it stands, though,
> there's no way to recover from an optimize/forceMerge except to re-index from
> scratch. We routinely see 200G-300G Lucene indexes "in the wild" at this
> point, with 10s of shards replicated 3 or more times. And that doesn't even
> include having these on HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A
> new merge policy is certainly an alternative.