Re: Encryption

2006-05-07 Thread George Washington
Thank you all for your replies (and Otis and John for your lively sense of humour as well). I obviously accept the need to encrypt the index, but am not sure that using the Windows (or Linux) file system encryption will solve the problem. My understanding is that both cannot be under my applicat…

Encryption

2006-05-05 Thread George Washington
I am using Lucene to index as well as to store complete source documents (typically a few tens of thousands of documents, not millions). I would like to protect the source documents with encryption but have the following questions: Is it possible to reconstruct a complete source document from the…
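Lucene has no built-in field encryption, so the usual approach discussed on the list is to encrypt the document bytes in the application before storing them, and decrypt them on retrieval. A minimal pure-JDK sketch of that round trip (the class name FieldCrypto is mine, and AES/GCM is a modern mode choice, not anything specified in the thread; in 2006 AES/CBC would have been typical, but the flow is the same):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class FieldCrypto {
    // Encrypt the raw source document before handing it to the index as a
    // stored binary field; prepend the random IV so decryption is self-contained.
    static byte[] encrypt(SecretKey key, byte[] plain) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(plain);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    // Decrypt a stored blob: first 12 bytes are the IV, the rest is ciphertext.
    static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key,
               new GCMParameterSpec(128, Arrays.copyOfRange(blob, 0, 12)));
        return c.doFinal(Arrays.copyOfRange(blob, 12, blob.length));
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] doc = "confidential source document".getBytes("UTF-8");
        byte[] stored = encrypt(key, doc);       // this is what would go in the index
        byte[] roundTrip = decrypt(key, stored); // this is what the application reads back
        System.out.println(new String(roundTrip, "UTF-8"));
    }
}
```

Note that this protects only the stored copy of the source; the indexed terms remain in cleartext unless the index files themselves are also protected, which is the concern raised in the reply above.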

Re: Lucene 1.9.1 and timeToString() apparent incompatibility with 1.4.3

2006-03-08 Thread George Washington
thanks Chris, I think I'll opt for re-creating the index now, using the new 1.9.1 code. Sooner or later, it seems to me, the deprecated code will be removed anyway. Better to face the pain now than later; it also lets me take advantage of the new date resolution features. Even though…

Re: Lucene 1.9.1 and timeToString() apparent incompatibility with 1.4.3

2006-03-07 Thread George Washington
Thanks Chris for making it clear; I had read the comment but had not understood that it implied incompatibility. But will the code be preserved in Lucene 2.0, in light of the comment contained in the Lucene 1.9.1 announcement? QUOTE Applications must compile against 1.9 without deprecation w…

Lucene 1.9.1 and timeToString() apparent incompatibility with 1.4.3

2006-03-07 Thread George Washington
I recently converted from Lucene 1.4.3 to 1.9.1 and in the process replaced all deprecated classes with the new ones as recommended (for forward compatibility with Lucene 2.0). This however seems to introduce an incompatibility when the new timeToString() and stringToTime() methods are used. Us…
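The incompatibility comes from the on-disk string format, not the API: Lucene 1.4's DateField encoded a date as its millisecond timestamp in base 36, while 1.9's DateTools writes a human-readable GMT timestamp truncated to a chosen resolution, so strings produced by one cannot be range-compared against strings produced by the other. A pure-JDK sketch contrasting the two shapes (these are my own approximations of the formats for illustration, not the actual Lucene source):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class DateEncodings {
    // Old DateField-style encoding (approximate): milliseconds since the
    // epoch rendered in base 36 and zero-padded to a fixed width.
    static String oldStyle(long millis) {
        String s = Long.toString(millis, Character.MAX_RADIX);
        StringBuilder sb = new StringBuilder();
        for (int i = s.length(); i < 9; i++) sb.append('0');
        return sb.append(s).toString();
    }

    // New DateTools-style encoding: GMT timestamp truncated to the chosen
    // resolution (here, day resolution -> "yyyyMMdd").
    static String newStyle(long millis) {
        SimpleDateFormat f = new SimpleDateFormat("yyyyMMdd", Locale.ROOT);
        f.setTimeZone(TimeZone.getTimeZone("GMT"));
        return f.format(new Date(millis));
    }

    public static void main(String[] args) {
        long t = 1141689600000L; // 2006-03-07 00:00:00 GMT
        System.out.println(oldStyle(t)); // fixed-width base-36 string
        System.out.println(newStyle(t)); // prints 20060307
    }
}
```

Since the two encodings never collate together, an index written with the old methods has to be rebuilt before queries using the new ones will match, which is what the follow-up message decides to do.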

RE: Storing large text or binary source documents in the index and memory usage

2006-01-21 Thread George Washington
thank you Daniel, but the best I get from MaxBufferedDocs(1) is an OOM error after trying 5 iterations of 10MB each in the JUnit test provided by Chris, running inside Eclipse 3.1. I had already tried MaxBufferedDocs(2) with no success before I posted the original post. I also tried: write…

RE: Storing large text or binary source documents in the index and memory usage

2006-01-21 Thread George Washington
Thank you Chris for replying and for the trouble you took to test the problem. I am looking forward to a reply from the Lucene project. Cheers From: Chris Hostetter <[EMAIL PROTECTED]> Reply-To: java-user@lucene.apache.org To: java-user@lucene.apache.org Subject: RE: Storing large text or bina…

RE: Storing large text or binary source documents in the index and memory usage

2006-01-20 Thread George Washington
…answer to question one is that there is no other alternative. Cheers From: "George Washington" <[EMAIL PROTECTED]> Reply-To: java-user@lucene.apache.org To: java-user@lucene.apache.org Subject: Storing large text or binary source documents in the index and memory usage Date: F…

Storing large text or binary source documents in the index and memory usage

2006-01-19 Thread George Washington
I would like to store large source documents (>10MB) in the index in their original form, i.e. as text for text documents or as byte[] for binary documents. I have no difficulty adding the source document as a field to the Lucene index document, but when I write the index document to the index I…
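One workaround for the memory pressure described above is to avoid handing the indexer a single 10MB+ field at all: split each source document into fixed-size chunks, store each chunk separately, and reassemble on retrieval. A self-contained pure-JDK sketch of the split/join logic (the 1MB chunk size and class name are my own choices for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedStore {
    // Split a large source document into fixed-size chunks so that each
    // piece handed to the index stays small.
    static List<byte[]> split(byte[] source, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < source.length; off += chunkSize) {
            int len = Math.min(chunkSize, source.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(source, off, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }

    // Reassemble the original document from its chunks, in order.
    static byte[] join(List<byte[]> chunks) {
        int total = 0;
        for (byte[] c : chunks) total += c.length;
        byte[] out = new byte[total];
        int off = 0;
        for (byte[] c : chunks) {
            System.arraycopy(c, 0, out, off, c.length);
            off += c.length;
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] doc = new byte[10 * 1024 * 1024 + 123]; // ~10 MB source document
        List<byte[]> chunks = split(doc, 1024 * 1024); // 1 MB per chunk
        System.out.println(chunks.size());                       // prints 11
        System.out.println(join(chunks).length == doc.length);   // prints true
    }
}
```

The trade-off is that retrieval must gather and concatenate all chunks for a document, so this only helps if the indexing-time buffering is the bottleneck, as the OOM reports in this thread suggest.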