>> If it isn't failing on a specific document, the most likely cause is
>> that your program is hanging on to something it shouldn't. Previous
>> docs? File handles? Lucene readers/searchers?
>>
>>
>> --
>> Ian.
>>
>>
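
To illustrate Ian's point, a minimal sketch of the pattern he is describing,
using the Lucene 3.0-era API. The directory layout, field names and CSV
handling here are assumptions for illustration, not code from the thread:
one IndexWriter for the whole run, per-file readers closed promptly, and no
Documents, readers or searchers retained between records.

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class CsvIndexer {
        public static void main(String[] args) throws Exception {
            // One writer for the whole run; don't open a new one per file or per record.
            IndexWriter writer = new IndexWriter(
                    FSDirectory.open(new File("index")),
                    new StandardAnalyzer(Version.LUCENE_30),
                    true,
                    IndexWriter.MaxFieldLength.UNLIMITED);
            try {
                for (File csv : new File("csv-dir").listFiles()) {
                    BufferedReader in = new BufferedReader(new FileReader(csv));
                    try {
                        String line;
                        while ((line = in.readLine()) != null) {
                            // Build a fresh Document per record and let it go out of
                            // scope after addDocument(); don't collect them in a list.
                            Document doc = new Document();
                            doc.add(new Field("body", line,
                                    Field.Store.NO, Field.Index.ANALYZED));
                            writer.addDocument(doc);
                        }
                    } finally {
                        in.close();   // don't leak file handles
                    }
                }
            } finally {
                writer.close();       // no readers/searchers held open during indexing
            }
        }
    }
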
>> On Wed, Mar 3, 2010, Ian Lea wrote:
>> Have you run it through a memory profiler yet? Seems the obvious next
>> step.
>>
>> If that doesn't help, cut it down to the simplest possible
>> self-contained program that demonstrates the problem and post it here.
>>
>>
> ... to produce OOMs.
>
> FWIW
> Erick
>
> On Wed, Mar 3, 2010 at 1:09 PM, ajay_gupta wrote:
>
>>
>> Mike,
>> Actually my documents are very small in size. We have csv files where
>> each record represents a document, which is not very large, so I
>> don't ...
>>
>> http://stackoverflow.com/questions/1362460/why-does-lucene-cause-oom-when-indexing-large-files
>>
>> Paul
>>
>>
> Either your documents are particularly large, or you or lucene
> (unlikely) are holding on to something that you shouldn't be.
>
>
> --
> Ian.
>
>
> On Tue, Mar 2, 2010 at 1:48 PM, ajay_gupta wrote:
>>
>> Hi Erick,
>> I tried setting setRAMBufferSizeMB to 200-500 MB as well ...
> ... have you tried IndexWriter.setRAMBufferSizeMB?
>
> HTH
> Erick
>
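
Erick's suggestion in code, as a minimal sketch against the Lucene 3.0-era
API (writer setup as assumed in the sketch above). One detail worth noting:
the RAM buffer is allocated from the JVM heap, so a 200-500 MB buffer
without a correspondingly larger -Xmx can itself produce an OOM.

    // Same writer setup as assumed in the sketch above.
    IndexWriter writer = new IndexWriter(
            FSDirectory.open(new File("index")),
            new StandardAnalyzer(Version.LUCENE_30),
            true, IndexWriter.MaxFieldLength.UNLIMITED);
    // Buffer ~64 MB of documents in RAM before flushing a segment.
    // The buffer lives on the JVM heap, so -Xmx must leave headroom above it.
    writer.setRAMBufferSizeMB(64);
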
> On Tue, Mar 2, 2010 at 8:27 AM, ajay_gupta wrote:
>
>>
>> Hi,
>> It might be a general question, but I couldn't find the answer yet. I
>> have around 90k documents totalling around 350 MB. Each ...
Hi,
It might be a general question, but I couldn't find the answer yet. I
have around 90k documents totalling around 350 MB. Each document contains a
record which has some text content. For each word in this text I want to
store context for that word and index it so I am reading each document and
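
One possible reading of the setup described above, sketched for illustration
only. The field names, the two-word context window, and the choice of one
Lucene document per word occurrence are assumptions, not ajay's actual code;
the point is that each small document is handed to the writer immediately
instead of being accumulated in memory.

    // Index one small Lucene document per word occurrence, holding its surrounding words.
    static void indexRecord(IndexWriter writer, String text) throws Exception {
        String[] words = text.split("\\s+");
        for (int i = 0; i < words.length; i++) {
            int from = Math.max(0, i - 2);
            int to = Math.min(words.length, i + 3);
            StringBuilder context = new StringBuilder();
            for (int j = from; j < to; j++) {
                context.append(words[j]).append(' ');
            }
            Document doc = new Document();
            doc.add(new Field("word", words[i],
                    Field.Store.YES, Field.Index.NOT_ANALYZED));
            doc.add(new Field("context", context.toString(),
                    Field.Store.YES, Field.Index.ANALYZED));
            writer.addDocument(doc);   // nothing accumulates in memory between words
        }
    }
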