Hi,
I am a newcomer to this list and trying out Lucene for the first time. It looks
really useful and I am evaluating it for a potentially very large index that my
company might need to build.
As I was investigating using Lucene, I wanted to know what the performance of
an optimize/index merge would be. When I run it on a large index, it does not
seem to be doing anything useful.
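For reference, a minimal sketch of the call I am timing, assuming the Lucene
1.4-era API (the "index" path and the analyzer are placeholders):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

public class OptimizeDemo {
  public static void main(String[] args) throws Exception {
    // open an existing index (create=false) and force a full merge
    IndexWriter writer = new IndexWriter("index", new StandardAnalyzer(), false);
    writer.optimize(); // merges all segments into a single segment
    writer.close();
  }
}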
Lokesh
Daniel Naber <[EMAIL PROTECTED]> wrote:
On Saturday 25 June 2005 02:10, Lokesh Bajaj wrote:
> 3] Does this seem like a JVM issue? Since it's always pointing to a
> native method, I am not really sure what to look for or debug.
Does your JVM
From: [EMAIL PROTECTED]
Sent: Monday, June 27, 2005 10:08 AM
To: java-user@lucene.apache.org
Subject: Re: issues building a large index
Hi,
Perhaps using hprof with cpu=samples may reveal more information about
what the CPU is doing. I think this is a valid use case.
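For example (HPROF syntax for JDK 1.4/5; "MyIndexer" is a placeholder for
whatever class drives your indexing):

java -Xrunhprof:cpu=samples,depth=10 MyIndexer

The samples are written to java.hprof.txt when the process exits, and the
hottest stack traces should show where optimize is spending its time.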
Otis
--- Lokesh Bajaj wrote:
I noticed the following code that builds the "docMap" array in
SegmentMergeInfo.java for the case where some documents might be deleted from
an index:
// build array which maps document numbers around deletions
if (reader.hasDeletions()) {
  int maxDoc = reader.maxDoc();
  docMap = new int[maxDoc];
  int j = 0; // next post-merge document number
  for (int i = 0; i < maxDoc; i++)
    docMap[i] = reader.isDeleted(i) ? -1 : j++;
}
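To see what that loop produces, here is a tiny standalone sketch (not Lucene
code, just an illustration of the mapping it builds):

public class DocMapDemo {
  public static void main(String[] args) {
    // pretend documents 1 and 4 have been deleted
    boolean[] deleted = { false, true, false, false, true, false };
    int[] docMap = new int[deleted.length];
    int j = 0;
    for (int i = 0; i < deleted.length; i++)
      docMap[i] = deleted[i] ? -1 : j++;
    // prints: 0 -1 1 2 -1 3 (deleted docs map to -1, the rest renumbered densely)
    for (int m : docMap)
      System.out.print(m + " ");
  }
}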
Actually, you should probably not let your index grow beyond one-third the size
of your disk.
a] You start off with your original index.
b] During optimize, Lucene will initially write out files in non-compound file
format.
c] Lucene will then combine the non-compound files into the compound file format, so at the peak the original index, the non-compound files, and the new compound file can all exist on disk at once.
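To put rough numbers on that: with a 10 GB index, an optimize can
simultaneously hold the original 10 GB of segments (a), roughly 10 GB of
freshly merged non-compound files (b), and roughly 10 GB more while those
are copied into the compound file (c), for about 30 GB at the peak. That is
where the one-third rule of thumb comes from.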