[
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513252#comment-16513252
]
Lucene/Solr QA commented on LUCENE-7976:
----------------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 27s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 49s{color} | {color:red} lucene_core generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 2m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate ref guide {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 43m 49s{color} | {color:green} core in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 50s{color} | {color:red} core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 31s{color} | {color:red} solrj in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 29s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.rest.TestManagedResourceStorage |
| | solr.cloud.autoscaling.SearchRateTriggerIntegrationTest |
| | solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest |
| | solr.client.solrj.impl.CloudSolrClientTest |
| | solr.common.util.TestJsonRecordReader |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-7976 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12927680/LUCENE-7976.patch |
| Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns validaterefguide |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh |
| git revision | master / 2db2fb3 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 8 2015 |
| Default Java | 1.8.0_172 |
| javac | https://builds.apache.org/job/PreCommit-LUCENE-Build/33/artifact/out/diff-compile-javac-lucene_core.txt |
| unit | https://builds.apache.org/job/PreCommit-LUCENE-Build/33/artifact/out/patch-unit-solr_core.txt |
| unit | https://builds.apache.org/job/PreCommit-LUCENE-Build/33/artifact/out/patch-unit-solr_solrj.txt |
| Test Results | https://builds.apache.org/job/PreCommit-LUCENE-Build/33/testReport/ |
| modules | C: lucene/core solr/core solr/solrj solr/solr-ref-guide solr/webapp U: . |
| Console output | https://builds.apache.org/job/PreCommit-LUCENE-Build/33/console |
| Powered by | Apache Yetus 0.7.0 http://yetus.apache.org |
This message was automatically generated.
> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of
> very large segments
> -------------------------------------------------------------------------------------------------
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Erick Erickson
> Assignee: Erick Erickson
> Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch,
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch,
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch,
> LUCENE-7976.patch, SOLR-7976.patch
>
>
> We're seeing situations "in the wild" where very large (on-disk) indexes are
> handled quite easily in a single Lucene index. This is particularly true as
> features like docValues move data into MMapDirectory space. The current TMP
> algorithm allows on the order of 50% deleted documents, as per a dev-list
> conversation with Mike McCandless (and his blog here:
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate (think many
> TB), solutions like "you need to distribute your collection over more shards"
> become very costly. Additionally, the tempting "optimize" button exacerbates
> the issue: once you form, say, a 100G segment (by optimizing/forceMerging),
> it is not eligible for merging until 97.5G of the docs in it are deleted
> (with the current default 5G max segment size).
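> (For concreteness, my reading of where the 97.5G figure comes from: as I
> understand TMP's current candidate selection, a segment is only considered
> for a normal merge while its "live" size is at most half of the max merged
> segment size, i.e. 5G / 2 = 2.5G. A 100G forceMerged segment therefore stays
> untouched until 100G - 2.5G = 97.5G of it is deletions.)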
> The proposal here would be to add a new parameter to TMP, something like
> <maxAllowedPctDeletedInBigSegments> (no, that's not a serious name;
> suggestions welcome), which would default to 100 (i.e., the same behavior we
> have now).
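> For illustration only -- the setter name below is a placeholder, not anything
> in the attached patch -- wiring such a knob in would presumably look roughly
> like this:
> {code:java}
> import org.apache.lucene.analysis.Analyzer;
> import org.apache.lucene.index.IndexWriterConfig;
> import org.apache.lucene.index.TieredMergePolicy;
>
> class MergePolicyConfig {
>   static IndexWriterConfig withDeletePctCap(Analyzer analyzer) {
>     TieredMergePolicy tmp = new TieredMergePolicy();
>     tmp.setMaxMergedSegmentMB(5 * 1024); // the existing 5G default, made explicit
>     // Placeholder name for the proposed knob -- not a real TMP method (yet).
>     // At 100 nothing changes; at e.g. 20, a big segment with > 20% deletes
>     // would become eligible for a merge/rewrite:
>     // tmp.setMaxAllowedPctDeletedInBigSegments(20.0);
>     return new IndexWriterConfig(analyzer).setMergePolicy(tmp);
>   }
> }
> {code}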
> So if I set this parameter to, say, 20%, and the max segment size stays at
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO
> > MATTER HOW LARGE. There are two cases:
> >> the segment has < 5G "live" docs. In that case it would be merged with
> >> smaller segments to bring the resulting segment up to 5G. If no smaller
> >> segments exist, it would just be rewritten.
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize).
> >> It would be rewritten into a single segment, removing all deleted docs no
> >> matter how big it is to start. The 100G example above would be rewritten
> >> to an 80G segment, for instance.
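> A rough sketch of that selection rule (illustrative only; the names and the
> byte-level bookkeeping here are a paraphrase, not the attached patch):
> {code:java}
> // Paraphrase of the two cases above, not TieredMergePolicy internals.
> final class BigSegmentRule {
>   enum Action { LEAVE_ALONE, MERGE_WITH_SMALLER, SINGLETON_REWRITE }
>
>   static Action decide(long liveBytes, long deletedBytes,
>                        long maxSegBytes, double maxAllowedPctDeleted) {
>     double pctDeleted = 100.0 * deletedBytes / (liveBytes + deletedBytes);
>     if (pctDeleted <= maxAllowedPctDeleted) {
>       return Action.LEAVE_ALONE;          // knob at 100 == today's behavior
>     }
>     if (liveBytes < maxSegBytes) {
>       // Case 1: under 5G of live docs -- merge with smaller segments to fill
>       // the result back up to 5G, or rewrite it alone if none exist.
>       return Action.MERGE_WITH_SMALLER;
>     }
>     // Case 2: over 5G of live docs (a forceMerge/optimize product) -- rewrite
>     // it by itself ("singleton merge"); 100G with 20% deletes comes back ~80G.
>     return Action.SINGLETON_REWRITE;
>   }
> }
> {code}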
> Of course this would lead to potentially much more I/O, which is why the
> default would be the same behavior we see now. As it stands now, though,
> there's no way to recover from an optimize/forceMerge except to re-index from
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the
> wild" with 10s of shards replicated 3 or more times. And that doesn't even
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A
> new merge policy is certainly an alternative.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]