RE: Storing large text or binary source documents in the index and memory usage

2006-01-21 Thread George Washington
Thank you Daniel, but the best I get from MaxBufferedDocs(1) is an OOM error after 5 iterations of 10MB each in the JUnit test provided by Chris, running inside Eclipse 3.1. I had already tried MaxBufferedDocs(2) with no success before I posted the original post. I also tried: write

RE: Storing large text or binary source documents in the index and memory usage

2006-01-21 Thread George Washington
ext or binary source documents in the index and memory usage Date: Fri, 20 Jan 2006 18:35:41 -0800 (PST) : otherwise I would have done so already. My real question is question number one, which did not receive a reply: is there a formula that can tell me if what is happening is reasonable and

RE: Storing large text or binary source documents in the index and memory usage

2006-01-20 Thread Chris Hostetter
: otherwise I would have done so already. My real question is question number one, which did not receive a reply: is there a formula that can tell me if what is happening is reasonable and to be expected, or am I doing something I've never played with the binary fields much, nor have I ever t
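No formula is given anywhere in this thread, but a rough back-of-the-envelope estimate (my own assumption, not a documented Lucene guarantee) is that the writer's buffered heap usage is about maxBufferedDocs times the per-document payload, times a small overhead factor for the copies made while the stored bytes travel into the index:

```java
// Sketch only: rough heap estimate for documents buffered before a flush.
// The overhead factor is a guess, not a Lucene constant; stored bytes are
// typically copied at least once on the way into the segment files.
public class BufferEstimate {

    static long estimateBufferedBytes(int maxBufferedDocs,
                                      long bytesPerDoc,
                                      double overheadFactor) {
        return (long) (maxBufferedDocs * bytesPerDoc * overheadFactor);
    }

    public static void main(String[] args) {
        // Ten 10MB documents buffered with an assumed 2x copy overhead
        // would need on the order of 200MB of heap.
        long est = estimateBufferedBytes(10, 10L * 1024 * 1024, 2.0);
        System.out.println(est / (1024 * 1024) + " MB");
    }
}
```

On that estimate, even maxBufferedDocs(1) with a 10MB document needs tens of megabytes of free heap per addDocument call, which is consistent with the OOM reported above under Eclipse's default heap size.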

RE: Storing large text or binary source documents in the index and memory usage

2006-01-20 Thread George Washington
answer to question one is that there is no other alternative. Cheers From: "George Washington" <[EMAIL PROTECTED]> Reply-To: java-user@lucene.apache.org To: java-user@lucene.apache.org Subject: Storing large text or binary source documents in the index and memory usage Date: F

RE: Storing large text or binary source documents in the index and memory usage

2006-01-20 Thread John Powers
@lucene.apache.org Subject: Storing large text or binary source documents in the index and memory usage I would like to store large source documents (>10MB) in the index in their original form, i.e. as text for text documents or as byte[] for binary documents. I have no difficulty adding the sou

Storing large text or binary source documents in the index and memory usage

2006-01-19 Thread George Washington
I would like to store large source documents (>10MB) in the index in their original form, i.e. as text for text documents or as byte[] for binary documents. I have no difficulty adding the source document as a field to the Lucene index document, but when I write the index document to the index I
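One defensive pattern when feeding multi-megabyte documents to a writer is to check free heap headroom before buffering the next one, and flush first if the margin is thin. This is a sketch of my own, not something from the Lucene API; the 3x safety margin is an assumed figure, not a measured one:

```java
// Sketch: decide whether there is enough heap headroom to buffer another
// document of the given size before calling the writer again.
// The safety margin is an assumption, not a measured Lucene figure.
public class HeadroomCheck {

    static boolean hasHeadroom(long freeHeapBytes, long nextDocBytes, double margin) {
        return freeHeapBytes > (long) (nextDocBytes * margin);
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Free heap = max heap minus what is currently in use.
        long free = rt.maxMemory() - (rt.totalMemory() - rt.freeMemory());
        // With a 10MB document and a 3x margin, we want ~30MB free
        // before adding the next document; otherwise flush/close first.
        System.out.println(hasHeadroom(free, 10L * 1024 * 1024, 3.0));
    }
}
```

If the check fails, flushing the index (or closing and reopening the writer) releases the buffered documents before the next large one is added.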