Hi,

After finishing indexing, we tried to consolidate all segments with
forceMerge, but we keep hitting an OutOfMemoryError even after raising
the heap to 4 GB.
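
For reference, the merge step is essentially just the following (the
index path and analyzer here are placeholders for our real setup):

    import java.io.File;
    import java.io.IOException;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class ForceMergeAll {
        public static void main(String[] args) throws IOException {
            // "/data/index" stands in for our real index directory.
            Directory dir = FSDirectory.open(new File("/data/index"));
            IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_42,
                    new StandardAnalyzer(Version.LUCENE_42));
            IndexWriter writer = new IndexWriter(dir, iwc);
            try {
                writer.forceMerge(1); // collapse the index to one segment
            } finally {
                writer.close();
            }
        }
    }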

Exception in thread "main" java.lang.IllegalStateException: this writer hit an OutOfMemoryError; cannot complete forceMerge
    at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1664)
    at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1610)
...
Exception in thread "Lucene Merge Thread #0" org.apache.lucene.index.MergePolicy$MergeException: java.lang.OutOfMemoryError: Java heap space
    at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:541)
    at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:514)
Caused by: java.lang.OutOfMemoryError: Java heap space
    at org.apache.lucene.util.packed.Packed64.<init>(Packed64.java:92)
    at org.apache.lucene.util.packed.PackedInts.getReaderNoHeader(PackedInts.java:845)
    at org.apache.lucene.util.packed.MonotonicBlockPackedReader.<init>(MonotonicBlockPackedReader.java:69)
    at org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.loadBinary(Lucene42DocValuesProducer.java:218)
    at org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.getBinary(Lucene42DocValuesProducer.java:197)
    at org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.getBinary(PerFieldDocValuesFormat.java:254)
    at org.apache.lucene.index.SegmentCoreReaders.getBinaryDocValues(SegmentCoreReaders.java:222)
    at org.apache.lucene.index.SegmentReader.getBinaryDocValues(SegmentReader.java:241)
    at org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:183)
    at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:126)
    at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3693)

It looks like loading the BinaryDocValues field is what blows up the
heap. Is there any way to constrain the memory used while merging a
BinaryDocValues field?
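
The only workaround we have come up with so far is sketched below: run
one merge at a time and step forceMerge down gradually, so each merge
has fewer segments (and fewer in-heap doc-values instances) open at
once. We have not confirmed that this actually bounds memory; the
class name and index path are placeholders.

    import java.io.File;
    import java.io.IOException;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.ConcurrentMergeScheduler;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class GentleForceMerge {
        public static void main(String[] args) throws IOException {
            Directory dir = FSDirectory.open(new File("/data/index"));

            // Allow only one merge thread and one queued merge, so
            // fewer segment readers are open concurrently.
            ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
            cms.setMaxMergesAndThreads(1, 1);

            IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_42,
                    new StandardAnalyzer(Version.LUCENE_42));
            iwc.setMergeScheduler(cms);

            IndexWriter writer = new IndexWriter(dir, iwc);
            try {
                // Step down gradually instead of forceMerge(1) in one
                // shot, so each merge touches fewer segments at once.
                writer.forceMerge(10);
                writer.forceMerge(5);
                writer.forceMerge(1);
            } finally {
                writer.close();
            }
        }
    }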

Thanks.
