From: Michael McCandless <[EMAIL PROTECTED]>
Reply-To: java-user@lucene.apache.org
To: java-user@lucene.apache.org
Subject: Re: Field compression too slow
Date: Fri, 11 Aug 2006 06:59:58 -0400
Mike, which version of Lucene supports lazy loading? Thanks.
I can share the data.. but it would be quicker for you to just pull out some random text from anywhere you like.
OK, I hear you. I'll pull together some test data ... thanks.
Also.. upon reflection I'm not certain using compression inside the index is really a valuable process without lazy loading.
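
For reference, here's roughly what lazy field loading looks like, assuming a Lucene release that includes the FieldSelector API (the "body" field name and the index path are just placeholders, and method names like getFieldable vary a bit between releases):

    import java.io.IOException;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.FieldSelector;
    import org.apache.lucene.document.FieldSelectorResult;
    import org.apache.lucene.document.Fieldable;
    import org.apache.lucene.index.IndexReader;

    public class LazyBodyLoad {
        // Load small fields eagerly, but defer the big compressed "body" field
        // until its value is actually asked for.
        static final FieldSelector SELECTOR = new FieldSelector() {
            public FieldSelectorResult accept(String fieldName) {
                return "body".equals(fieldName)
                        ? FieldSelectorResult.LAZY_LOAD
                        : FieldSelectorResult.LOAD;
            }
        };

        public static String loadBody(IndexReader reader, int docId) throws IOException {
            Document doc = reader.document(docId, SELECTOR); // "body" bytes not read yet
            Fieldable body = doc.getFieldable("body");       // getFieldable, not getField: lazy fields aren't plain Field instances
            return body.stringValue();                       // bytes read (and inflated) here
        }
    }

With a selector like this the stored (compressed) bytes aren't read or decompressed until stringValue() is actually called.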
The issue is that the text was in an email, which was one of about 2,000 and I don't know which one. I got the 4.5MB figure from the number of bytes in the byte array reported in the ...
I have a sample document which has about 4.5MB of text to be stored as compressed data within the field, and the indexing of this document seems to take an inordinate amount of time (over 10 minutes!). When debugging I can see that it's stuck on the deflate() calls of the Deflater used by Lucene.
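
If it helps to confirm where the time goes, a standalone timing sketch with java.util.zip.Deflater can compare level 9 against a faster level outside of Lucene entirely (the sample text and sizes below are only illustrative; real text is less repetitive and compresses more slowly):

    import java.util.zip.Deflater;

    public class DeflateTiming {
        public static void main(String[] args) {
            // Build a few MB of sample text (illustrative only).
            StringBuilder sb = new StringBuilder();
            while (sb.length() < 4500000) {
                sb.append("some random text pulled from anywhere you like ");
            }
            byte[] input = sb.toString().getBytes();

            int[] levels = { Deflater.BEST_SPEED, 6, Deflater.BEST_COMPRESSION };
            for (int i = 0; i < levels.length; i++) {
                long start = System.currentTimeMillis();
                Deflater deflater = new Deflater(levels[i]);
                deflater.setInput(input);
                deflater.finish();
                byte[] buf = new byte[8192];
                long total = 0;
                while (!deflater.finished()) {
                    total += deflater.deflate(buf);
                }
                deflater.end();
                System.out.println("level " + levels[i] + ": " + total + " bytes in "
                        + (System.currentTimeMillis() - start) + " ms");
            }
        }
    }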
I have "assumed" I can't have two threads writing to the index
concurrently,
so have implemented my own read/write locking system. Are you saying I
don't need to bother with this? My reading of the doco suggests that you
shouldn't have two IndexWriters open on the same index.
I know that if I t
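
For what it's worth, the usual pattern is a single IndexWriter shared by all indexing threads rather than application-level locking; addDocument() can be called from several threads at once, while opening two IndexWriters on the same index is what's disallowed. A rough sketch (the path, analyzer and field names are just placeholders):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;

    public class SharedWriterExample {
        public static void main(String[] args) throws Exception {
            // One writer for the whole process; never open a second one on the same index.
            final IndexWriter writer =
                    new IndexWriter("/path/to/index", new StandardAnalyzer(), true);

            Runnable job = new Runnable() {
                public void run() {
                    try {
                        Document doc = new Document();
                        doc.add(new Field("body", "some text to index",
                                Field.Store.COMPRESS, Field.Index.TOKENIZED));
                        writer.addDocument(doc); // safe to call concurrently on the shared writer
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            };

            Thread t1 = new Thread(job);
            Thread t2 = new Thread(job);
            t1.start();
            t2.start();
            t1.join();
            t2.join();

            writer.close();
        }
    }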
Thanks for the Jira issue...
one question on your synchronization comment...
I have "assumed" I can't have two threads writing to the index concurrently,
so have implemented my own read/write locking system. Are you saying I
don't need to bother with this? My reading of the doco suggests that y
I'm not sure if it would help my particular situation, but is there any way to provide the option of specifying the compression level? The level used by Lucene (level 9) is the maximum possible compression level. Ideally I would like to be able to alter the compression level on the basis of ...
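
One workaround until/unless the level becomes configurable (just a sketch, not anything built into Lucene): skip Field.Store.COMPRESS, run Deflater yourself at whatever level you want, and store the result as a binary stored field ("body_z" below is a made-up field name):

    import java.io.ByteArrayOutputStream;
    import java.io.UnsupportedEncodingException;
    import java.util.zip.Deflater;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;

    public class ManualCompression {
        // Deflate the text at a caller-chosen level and store the bytes as a
        // binary stored field that Lucene itself leaves untouched.
        public static void addCompressedBody(Document doc, String text, int level)
                throws UnsupportedEncodingException {
            byte[] input = text.getBytes("UTF-8");
            Deflater deflater = new Deflater(level);
            deflater.setInput(input);
            deflater.finish();

            ByteArrayOutputStream out = new ByteArrayOutputStream(input.length / 2 + 1);
            byte[] buf = new byte[8192];
            while (!deflater.finished()) {
                out.write(buf, 0, deflater.deflate(buf));
            }
            deflater.end();

            doc.add(new Field("body_z", out.toByteArray(), Field.Store.YES));
        }
    }

Reading it back then means inflating the stored bytes with java.util.zip.Inflater in your own code, since Lucene no longer knows the field is compressed.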
Hello all,
I am experiencing some performance problems indexing large(ish) amounts of text using the Field.Store.COMPRESS option when creating a Field in Lucene.
I have a sample document which has about 4.5MB of text to be stored as compressed data within the field, and the indexing of this document seems to take an inordinate amount of time (over 10 minutes!).