Erick,
this is a web application running 24 hours a day, so caching cannot be the
reason. I get the same result after I restart the same search.
Zsolt
Erick Erickson wrote:
Well, if you're seeing it, it's possible
But the first question is always "what were you measu
Hi,
on 99470 documents (I mean Lucene documents) a FuzzyQuery needs approximately
30 seconds, but a PrefixQuery needs less than one.
All the Lucene files together take 65 MB.
I'm a bit surprised by that. Is that possible?
Zsolt
Zsolt Koppany
Phone: +49-711-6740
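The gap Zsolt observes is expected: a fuzzy query must compare the query term against every term in the index (an edit-distance computation per term), while a prefix query only visits the contiguous, sorted range of terms sharing the prefix. A minimal JDK-only sketch of the two per-term costs (the term list and query strings are made up for illustration; this is not Lucene's actual code):

```java
import java.util.List;

public class FuzzyVsPrefix {
    // Classic dynamic-programming Levenshtein distance -- the kind of
    // per-term comparison a fuzzy query has to run against every term.
    static int levenshtein(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1),
                                   prev[j - 1] + cost);
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        return prev[b.length()];
    }

    public static void main(String[] args) {
        // Hypothetical term dictionary standing in for the index's terms.
        List<String> terms = List.of("lucene", "lucena", "lucerne", "search", "prefix");

        // Fuzzy: an O(n*m) edit-distance computation for EVERY term.
        long fuzzy = terms.stream().filter(t -> levenshtein(t, "lucene") <= 1).count();

        // Prefix: a cheap startsWith test; on a sorted term dictionary
        // only the matching range would even be visited.
        long prefix = terms.stream().filter(t -> t.startsWith("luc")).count();

        System.out.println(fuzzy + " fuzzy matches, " + prefix + " prefix matches");
    }
}
```

With ~100k documents the fuzzy query's full term scan, plus the quadratic distance computation per term, plausibly accounts for the 30-second wall time Zsolt reports.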
Hello
From the API:
"public class StandardAnalyzer
extends Analyzer
Filters StandardTokenizer with StandardFilter, LowerCaseFilter and
StopFilter, using a list of English stop words."
Are you sure these filters won't filter your Khmer characters out?
Best,
czinkos
On Wed, Jan 24, 20
Actually I get the same result with CJKAnalyzer as with StandardAnalyzer.
Zsolt
>-Original Message-
>From: Ray Tsang [mailto:[EMAIL PROTECTED]
>Sent: Sunday, January 29, 2006 10:26 AM
>To: java-user@lucene.apache.org
>Subject: Re: Chinese support
>
>Zsolt,
>
&
And where can I find it?
Zsolt
>-Original Message-
>From: Ray Tsang [mailto:[EMAIL PROTECTED]
>Sent: Sunday, January 29, 2006 2:14 AM
>To: java-user@lucene.apache.org
>Subject: Re: Chinese support
>
>Hi Zsolt,
>
>you can try to use a Chinese analyzer.
>
>
Hi,
We use Lucene without any problems, even for German text, but with Chinese
text nothing is found. What is the best way to index and search Chinese
text?
Zsolt
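For background on what a CJK-aware analyzer does differently: Chinese text has no spaces between words, so a whitespace-oriented analysis yields little to match against, and CJK analyzers instead commonly emit overlapping character bigrams. A JDK-only sketch of that bigram strategy (the sample string is made up; this is an illustration of the idea, not CJKAnalyzer's actual code -- and note that whichever analyzer is used, the same one must be applied at both index and query time):

```java
import java.util.ArrayList;
import java.util.List;

public class CjkBigrams {
    // Overlapping-bigram tokenization: each adjacent pair of characters
    // becomes a token, so multi-character words remain findable even
    // without word boundaries in the text.
    static List<String> bigrams(String text) {
        List<String> tokens = new ArrayList<>();
        for (int i = 0; i + 1 < text.length(); i++) {
            tokens.add(text.substring(i, i + 2));
        }
        return tokens;
    }

    public static void main(String[] args) {
        // Hypothetical sample text ("Chinese search").
        System.out.println(bigrams("中文搜索")); // [中文, 文搜, 搜索]
    }
}
```

A query for 搜索 then tokenizes to the same bigram 搜索 and matches, which is why switching to a CJK-style analyzer on both sides is the usual fix when Chinese queries return nothing.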
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional