When you open this index for searching, how much heap do you give it?
In general, you should give IndexWriter the same heap size, since
during merging it will need to open N readers at once, and if you have
RAM-resident doc values fields, those need enough heap space.
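That "N readers at once" cost is easy to estimate roughly. A back-of-envelope sketch (all numbers below are hypothetical placeholders, not measurements from this thread):

```java
public class MergeHeapEstimate {
    // Rough heap needed by a merge that holds `openReaders` segment readers
    // open simultaneously, each pinning some RAM-resident doc values, plus
    // a fixed overhead for the writer itself. Placeholder numbers only.
    static long estimateBytes(int openReaders, long perReaderDocValuesBytes,
                              long fixedOverheadBytes) {
        return fixedOverheadBytes + (long) openReaders * perReaderDocValuesBytes;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        // e.g. a 10-way merge where each reader pins ~64 MB of doc values
        // on top of ~256 MB of fixed overhead
        System.out.println(estimateBytes(10, 64 * mb, 256 * mb) / mb + " MB");
    }
}
```

The point is just that the worst case scales with the number of segments merged at once, not with index size on disk.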
Also, the default DocValuesFormat
With forceMerge(1) throwing an OOM error, we switched to
forceMergeDeletes() which worked for a while, but that is now also
running out of memory. As a result, I've turned all manner of forced
merges off.
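For what it's worth, a minimal sketch of that direction against the Lucene 4.4 API: drop forceMerge(1) entirely, let TieredMergePolicy reclaim deletes in the background, and only fall back to forceMergeDeletes() when needed. The index path and tuning values here are placeholders, not recommendations:

```java
import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.TieredMergePolicy;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class MergeSetup {
    public static void main(String[] args) throws Exception {
        // Cap merged segment size so no single merge has to open huge
        // segments; reclaim deletes gradually instead of via forceMerge(1).
        TieredMergePolicy tmp = new TieredMergePolicy();
        tmp.setMaxMergedSegmentMB(2048);           // placeholder cap
        tmp.setForceMergeDeletesPctAllowed(10.0);  // forceMergeDeletes() skips
                                                   // segments under 10% deletes

        IndexWriterConfig iwc = new IndexWriterConfig(
                Version.LUCENE_44, new StandardAnalyzer(Version.LUCENE_44));
        iwc.setMergePolicy(tmp);

        Directory dir = FSDirectory.open(new File("/path/to/index")); // placeholder
        try (IndexWriter writer = new IndexWriter(dir, iwc)) {
            // normal add/update/commit cycle here; only if deletes pile up
            // faster than background merging reclaims them:
            // writer.forceMergeDeletes();
        }
    }
}
```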
I'm more than a little apprehensive that if the OOM error can happen as
part of a force
From: Michael van Rooyen [mailto:mich...@loot.co.za]
Sent: Thursday, September 26, 2013 12:26 PM
To: java-user@lucene.apache.org
Cc: Ian Lea
Subject: Re: Lucene 4.4.0 mergeSegments OutOfMemoryError

Yes, it happens as part of the early morning optimize, and yes, it's a
forceMerge(1) which I've disabled for now.
I haven't looked at the persistence mechanism for Lucene since 2.x, but
if I remember correctly, the deleted documents would stay in an index
segment until that segment was eventually merged.
Is this OOM happening as part of your early morning optimize or at
some other point? By optimize do you mean IndexWriter.forceMerge(1)?
You really shouldn't have to use that. If the index grows forever
without it then something else is going on which you might wish to
report separately.
--
Ian.
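To put the numbers in the original report below into perspective: every Lucene update is a delete plus a re-add, so at roughly 1M updates/day against 14M live docs, deletions accumulate at about 7% of the index per day unless merging reclaims them. A quick worked example (the 7-day horizon is an arbitrary illustration):

```java
public class DeleteChurn {
    public static void main(String[] args) {
        long liveDocs = 14_000_000L;      // figure from the original report
        long updatesPerDay = 1_000_000L;  // ditto; each update = delete + re-add
        int days = 7;                     // arbitrary horizon for illustration
        long deleted = updatesPerDay * days;
        double pctDeleted = 100.0 * deleted / (liveDocs + deleted);
        System.out.printf("~%.0f%% deleted docs after %d days with no merging%n",
                pctDeleted, days);
    }
}
```

So even without forced merges, something has to merge away deletes at this churn rate, which is exactly what the background merge policy is for.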
We've recently upgraded to Lucene 4.4.0 and mergeSegments now causes an
OOM error.
As background, our index contains about 14 million documents (growing
slowly) and we process about 1 million updates per day. It's about 8GB
on disk. I'm not sure if the Lucene segments merge the way they used