[...] or does Document size have no bearing on the
RAM requirement due to mergeFactor?
-Original Message-
From: karl wettin [mailto:[EMAIL PROTECTED]
Sent: 06 June 2006 10:48
To: java-user@lucene.apache.org
Subject: RE: Avoiding java.lang.OutOfMemoryError in an unstored field
On Tue, 2006-06-06 at 10:43 +0100, Rob Staveley (Tom) wrote:
> You are right, there are going to be a lot of tokens. The entire body of a
> text document is getting indexed in an unstored field, but I don't see how
> I can flush a partially loaded field.
Check these out:
http://lucene.apache.org/
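For context, the RAM-related knobs on IndexWriter in the Lucene releases current at the time (1.9/2.0) were maxBufferedDocs, mergeFactor and maxFieldLength. A minimal sketch, assuming that era's API; the index path and values here are placeholders, not recommendations:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

public class TuneWriter {
    public static void main(String[] args) throws Exception {
        // Third argument 'true' creates a fresh index at the given path.
        IndexWriter writer = new IndexWriter("/tmp/index", new StandardAnalyzer(), true);
        writer.setMaxBufferedDocs(10);   // docs buffered in RAM before a segment is flushed
        writer.setMergeFactor(10);       // segments accumulated on disk before a merge
        writer.setMaxFieldLength(10000); // tokens indexed per field; bounds per-document RAM
        writer.close();
    }
}
```

With FSDirectory, merges read existing segments from disk, so mergeFactor is mostly a disk I/O trade-off; it is the buffered postings for a single document, capped by maxFieldLength, that tie RAM to document size.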
-Original Message-
From: karl wettin [mailto:[EMAIL PROTECTED]
Sent: 06 June 2006 10:16
To: java-user@lucene.apache.org
Subject: Re: Avoiding java.lang.OutOfMemoryError in an unstored field
On Tue, 2006-06-06 at 10:22 +0100, Rob Staveley (Tom) wrote:
>
> Thanks for the response, Karl. I am using FSDirectory.
> -XX:+AggressiveHeap might reduce the number of times I get bitten by the
> problem, but I'm really looking for a streaming/serialised approach [I
> think!], which allows me to avoid holding the whole document live in RAM,
> because that puts a limit on document size.
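The streaming idea is exactly what a Reader buys: processing in fixed-size chunks keeps memory proportional to the buffer, not the document. A self-contained illustration in plain Java of that principle (the class and file names are made up for the example):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class StreamCount {
    // Counts characters from a Reader without ever holding the full text:
    // only the 8 KB buffer lives in RAM, however large the input is.
    static long countChars(Reader r) throws IOException {
        char[] buf = new char[8192];
        long total = 0;
        int n;
        while ((n = r.read(buf)) != -1) {
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(countChars(new StringReader("hello world"))); // prints 11
    }
}
```

Passing a java.io.Reader to an unstored Lucene Field works the same way for the raw text; the catch, discussed below, is that the inverted postings built from that stream still accumulate in RAM.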
-Original Message-
From: karl wettin [mailto:[EMAIL PROTECTED]
Sent: 06 June 2006 10:13
To: java-user@lucene.apache.org
Subject: Re: Avoiding java.lang.OutOfMemoryError in an unstored field
On Tue, 2006-06-06 at 10:11 +0100, Rob Staveley (Tom) wrote:
> Sometimes I need to index large documents. I've got just about as much heap
> as my application is allowed (-Xmx512m) and I'm using the unstored
> org.apache.lucene.document.Field constructed with a java.io.Reader, but I'm
> still suffering from java.lang.OutOfMemoryError when I index some large
> documents.
Sometimes I need to index large documents. I've got just about as much heap
as my application is allowed (-Xmx512m) and I'm using the unstored
org.apache.lucene.document.Field constructed with a java.io.Reader, but I'm
still suffering from java.lang.OutOfMemoryError when I index some large
documents.
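A likely explanation in that era of Lucene: even with a Reader-based unstored field, the writer still builds the full inverted postings for each document in RAM before flushing a segment, so a single huge document can exhaust the heap on its own. The lever that caps this is maxFieldLength (default 10,000 tokens; tokens beyond it are dropped). A hedged sketch against the Lucene 1.9/2.0-era API, with placeholder index and file paths:

```java
import java.io.FileReader;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class IndexBigDoc {
    public static void main(String[] args) throws Exception {
        IndexWriter writer = new IndexWriter("/tmp/index", new StandardAnalyzer(), true);
        // Tokens beyond this limit are silently dropped: raise it for recall,
        // lower it to bound the per-document RAM spent on postings.
        writer.setMaxFieldLength(10000);

        Document doc = new Document();
        // A Reader-based field is tokenized and unstored: the raw text streams
        // through the analyzer and is never materialized as one String.
        doc.add(new Field("body", new FileReader("big-document.txt")));
        writer.addDocument(doc);
        writer.close();
    }
}
```

Under this reading, the OutOfMemoryError is governed by tokens actually indexed per document, not by the heap cost of the text itself, which the Reader already keeps off the heap.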