Hi,
I'm having problems with Lucene optimization. Two of the indexes are
about 2 GB each, and every day about 30 documents are added to each of these
indexes. At the end of indexing, the IndexWriter optimize() method is
executed, and it takes about 30 minutes to finish the optimization for each
index.
Probably your Unix system has a different default encoding than your Windows
machine.
You have to make sure you give the IndexWriter a string that has the correct
encoding.
Do you explicitly set the encoding in your code before you index the text with
Lucene?
Ross
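Ross's encoding point can be demonstrated without Lucene at all: Java's default-charset readers (e.g. FileReader) decode bytes using the platform default, so the same file can produce different strings on Unix and Windows. A minimal, self-contained sketch (the class name and byte values are mine, not from the thread) that decodes the same UTF-8 bytes under two charsets:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.Charset;

public class EncodingCheck {
    // UTF-8 bytes for "caf\u00e9" ("café"), built explicitly so the
    // example does not depend on this source file's own encoding.
    static final byte[] RAW = {0x63, 0x61, 0x66, (byte) 0xC3, (byte) 0xA9};

    // Decode bytes under an explicit charset instead of the platform default.
    static String decode(byte[] raw, String charsetName) {
        try {
            Reader r = new InputStreamReader(new ByteArrayInputStream(raw), charsetName);
            StringBuilder sb = new StringBuilder();
            for (int c = r.read(); c != -1; c = r.read()) sb.append((char) c);
            r.close();
            return sb.toString();
        } catch (IOException e) { // cannot happen for an in-memory stream
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(decode(RAW, "UTF-8"));      // caf\u00e9
        System.out.println(decode(RAW, "ISO-8859-1")); // caf\u00c3\u00a9 (mojibake)
        System.out.println("platform default: " + Charset.defaultCharset());
    }
}
```

Passing an explicit charset to InputStreamReader, as decode() does, is the usual way to make indexing produce the same strings on both machines.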
-Original Message-
From: gau
I would like to bring that issue up again as I haven't resolved it yet and
haven't found what's causing it.
Any help, ideas, or shared experience is welcome!
Thanks,
Ross
-Original Message-
From: Angelov, Rossen
Sent: Friday, May 27, 2005 10:42 AM
To: 'java-us
he entire index to a new index, so it will take however long it takes
to copy 2 GB on your hardware + a small amount of overhead...
Dan
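Dan's explanation suggests a quick sanity check: if optimize() essentially rewrites the whole index, then 2 GB in 30 minutes works out to only about 1.1 MiB/s of effective throughput, likely well below what a plain copy would achieve, which would suggest most of the time is merge overhead rather than raw I/O. A small sketch of the arithmetic (the figures are the ones quoted in the thread; the class name is hypothetical):

```java
public class OptimizeThroughput {
    // Figures quoted in the thread: a ~2 GB index, ~30 minutes to optimize.
    static final long INDEX_BYTES = 2L * 1024 * 1024 * 1024; // 2 GiB
    static final long OPTIMIZE_SECONDS = 30 * 60;            // 30 minutes

    // Effective write rate implied by those figures, in MiB/s.
    static double effectiveMiBPerSecond() {
        return (INDEX_BYTES / (1024.0 * 1024.0)) / OPTIMIZE_SECONDS;
    }

    public static void main(String[] args) {
        System.out.printf("~%.2f MiB/s effective rate%n", effectiveMiBPerSecond());
    }
}
```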
Angelov, Rossen wrote:
>I would like to bring that issue up again as I haven't resolved it yet and
>haven't found what's causing it
le to get a lock on the index to add the
documents.
Dan
Angelov, Rossen wrote:
>Thanks for the suggestion, Jian Chen's idea is very similar too.
>Probably optimizing that often is not necessary and not that critical for
>speeding up the searches.
>
>I'll try changing the
When I'm using the QueryParser directly, the proximity search works fine and
getPhraseSlop() returns the correct slop int.
The problem is when I extend QueryParser. When extending it, getPhraseSlop()
always returns the default value, 0. It's as if setPhraseSlop() is never
called.
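Without seeing the subclass, one generic Java pitfall that produces exactly this symptom is accidental field shadowing: if the subclass re-declares a field that the superclass's setter writes to, an overridden getter reads the shadow and always sees the default. The Parser/MyParser classes below are a hypothetical sketch of that pitfall, not Lucene's actual QueryParser code:

```java
class Parser {
    protected int phraseSlop = 0;

    public void setPhraseSlop(int slop) { phraseSlop = slop; } // writes Parser.phraseSlop
    public int getPhraseSlop() { return phraseSlop; }
}

class MyParser extends Parser {
    // Bug: re-declares the field, hiding Parser.phraseSlop.
    protected int phraseSlop = 0;

    @Override
    public int getPhraseSlop() { return phraseSlop; } // reads the shadow, always 0
}

public class SlopDemo {
    public static void main(String[] args) {
        MyParser p = new MyParser();
        p.setPhraseSlop(5);                    // sets the inherited field
        System.out.println(p.getPhraseSlop()); // prints 0, not 5
    }
}
```

Deleting the re-declared field in the subclass makes the setter and getter operate on the same storage again.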
Does anybody know if
s with QueryParser then I
don't know what could be wrong at this point.
Erik
On Jun 27, 2005, at 5:06 PM, Angelov, Rossen wrote:
> When I'm using the QueryParser directly, the proximity search works
> fine and
> getPhraseSlop() returns the correct slop int.
>