Strange. This happens after I have added all documents using the
IndexWriter.addDocument() method. Everything works well at that point.

I then call IndexWriter.forceMerge(1) and finally IndexWriter.close(true).

The "out of memory" problem happens after I called forceMerge(1) but before
close(true).

If merging does not use much memory, it seems odd that the stack trace
points into

org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.loadBinary(Lucene42DocValuesProducer.java:218)

I understand I can use "Disk" at query time, but how can I explicitly
use "Disk" during indexing/merging?

On Thu, Apr 11, 2013 at 2:39 PM, Robert Muir <rcm...@gmail.com> wrote:

> Merging binary doc values doesn't use any RAM; it streams the values from
> the segments it's merging directly to the newly written segment.
>
> So if you have this problem, it's unrelated to merging: it means you don't
> have enough RAM to support all the data you are putting in these
> binary doc values fields with an in-RAM implementation. I'd use "Disk" for
> this instead.
>
> On Thu, Apr 11, 2013 at 12:57 PM, Wei Wang <welshw...@gmail.com> wrote:
>
> > Hi,
> >
> > After finishing indexing, we tried to consolidate all segments using
> > forceMerge, but we keep getting an out-of-memory error even after
> > increasing the heap to 4GB.
> >
> > Exception in thread "main" java.lang.IllegalStateException: this writer hit an OutOfMemoryError; cannot complete forceMerge
> >     at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1664)
> >     at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1610)
> > ...
> > Exception in thread "Lucene Merge Thread #0" org.apache.lucene.index.MergePolicy$MergeException: java.lang.OutOfMemoryError: Java heap space
> >     at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:541)
> >     at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:514)
> > Caused by: java.lang.OutOfMemoryError: Java heap space
> >     at org.apache.lucene.util.packed.Packed64.<init>(Packed64.java:92)
> >     at org.apache.lucene.util.packed.PackedInts.getReaderNoHeader(PackedInts.java:845)
> >     at org.apache.lucene.util.packed.MonotonicBlockPackedReader.<init>(MonotonicBlockPackedReader.java:69)
> >     at org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.loadBinary(Lucene42DocValuesProducer.java:218)
> >     at org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.getBinary(Lucene42DocValuesProducer.java:197)
> >     at org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.getBinary(PerFieldDocValuesFormat.java:254)
> >     at org.apache.lucene.index.SegmentCoreReaders.getBinaryDocValues(SegmentCoreReaders.java:222)
> >     at org.apache.lucene.index.SegmentReader.getBinaryDocValues(SegmentReader.java:241)
> >     at org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:183)
> >     at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:126)
> >     at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3693)
> >
> > It seems the BinaryDocValues fields caused the problem. Is there any way
> > we can constrain the memory usage when merging a BinaryDocValues field?
> >
> > Thanks.
> >
>
