I would like to create a scorer that applies a score based on a value that is
calculated during a query. More specifically, I want to apply a score based on
geographical distance from a given latitude/longitude.
What is the easiest way to go about doing this? The LocalLucene contrib
uses a SortComparatorSou
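For context, the core of such a scorer is just a distance function mapped into a score. Here is a minimal standalone sketch of that idea; the class and method names (GeoScoreSketch, haversineKm, distanceScore) are illustrative only and are not LocalLucene's API:

```java
// Sketch: haversine distance mapped into a decaying score.
// Names here are illustrative, not part of LocalLucene.
public class GeoScoreSketch {
    static final double EARTH_RADIUS_KM = 6371.0;

    // Great-circle distance between two lat/lon points, in kilometers.
    public static double haversineKm(double lat1, double lon1,
                                     double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return EARTH_RADIUS_KM * 2 * Math.asin(Math.sqrt(a));
    }

    // Decaying score: 1.0 at zero distance, halving every halfDistanceKm.
    public static double distanceScore(double distanceKm, double halfDistanceKm) {
        return Math.pow(0.5, distanceKm / halfDistanceKm);
    }

    public static void main(String[] args) {
        double d = haversineKm(0, 0, 0, 1); // roughly 111 km at the equator
        System.out.println(d + " km, score=" + distanceScore(d, 100));
    }
}
```

A function like distanceScore is what you would plug into whatever Lucene scoring hook you end up using; the distance math itself is independent of that choice.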
This is very strange.
On that machine, if you make a tiny standalone test that e.g. calls:
(new File(directory, name)).exists()
does it work properly?
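Something like the following would be enough to isolate it; the default path and file name below are placeholders, not the actual index paths from this thread:

```java
import java.io.File;

// Tiny standalone check: does new File(directory, name) resolve
// and report existence correctly on this machine?
// The default arguments are placeholders only.
public class FileExistsCheck {
    public static void main(String[] args) {
        File directory = new File(args.length > 0 ? args[0] : ".");
        String name = args.length > 1 ? args[1] : "segments.gen";
        File f = new File(directory, name);
        System.out.println("path=" + f.getAbsolutePath()
                + " exists=" + f.exists());
    }
}
```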
Mike
On Mon, Aug 31, 2009 at 11:39 AM, Uwe
Goetzke wrote:
> We have an IndexWriter.optimize running on a 4-processor Xeon machine with
> Java 1.5 on Win2003.
Oops, sorry, 2.4.1
Thx
Uwe Goetzke
-Original Message-
From: Uwe Schindler [mailto:u...@thetaphi.de]
Sent: Monday, August 31, 2009 5:42 PM
To: java-user@lucene.apache.org
Subject: RE: MergePolicy$MergeException because of FileNotFoundException
because wrong path to index-file
Which Lucene Version? The RC2 of 2.9?
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
> -Original Message-
> From: Uwe Goetzke [mailto:uwe.goet...@healy-hudson.com]
> Sent: Monday, August 31, 2009 5:40 PM
> To: java-user@lucene.apach
Thanks for the reply.
I suspected that was the case, I was just wondering if there was something more
to it.
- Original Message
> From: Shai Erera
> To: java-user@lucene.apache.org
> Sent: Monday, August 31, 2009 10:28:41 AM
> Subject: Re: Why perform optimization in 'off hours'?
>
>
We have an IndexWriter.optimize running on a 4-processor Xeon machine with
Java 1.5 on Win2003.
We get a repeatable FileNotFoundException because the path to the file
is wrong:
D:\data0\impact\ordering\prod\work\search_index\s_index1251456210140_0.cfs
instead of
D:\data0\impact\ordering\prod\work\search_index\
When you run optimize(), you consume CPU and do lots of IO operations, which
can really mess up the OS IO cache. Optimize is a very heavy process, and it
is therefore recommended to run it at off hours. Sometimes, when your index is
large enough, it's recommended to run it during weekends, since the
optimi
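One common way to act on that advice (a sketch of my own, not something from this thread) is to compute the delay until a quiet hour and hand a task to a scheduler such as java.util.Timer or ScheduledExecutorService:

```java
import java.util.Calendar;

// Sketch: compute the delay in seconds until the next occurrence of a
// given hour (e.g. 2 AM), suitable for scheduling a heavyweight task
// such as IndexWriter.optimize() via a Timer or ScheduledExecutorService.
public class OffHoursDelay {
    public static long secondsUntil(int hourOfDay) {
        Calendar now = Calendar.getInstance();
        Calendar next = (Calendar) now.clone();
        next.set(Calendar.HOUR_OF_DAY, hourOfDay);
        next.set(Calendar.MINUTE, 0);
        next.set(Calendar.SECOND, 0);
        next.set(Calendar.MILLISECOND, 0);
        if (!next.after(now)) {
            next.add(Calendar.DAY_OF_MONTH, 1); // already past today; use tomorrow
        }
        return (next.getTimeInMillis() - now.getTimeInMillis()) / 1000;
    }

    public static void main(String[] args) {
        System.out.println("seconds until 2 AM: " + secondsUntil(2));
    }
}
```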
Hi All,
I am new to Lucene and I was reading 'Lucene in Action' this weekend.
The book recommends that optimization be performed when the index is not in use.
The book makes it clear that optimization *may* be performed while indexing but
it says that optimizing while indexing makes indexing slow
Hi,
I'm working with Lucene 2.4.0 and the JVM (JDK 1.6.0_07). I'm
consistently receiving "OutOfMemoryError: Java heap space", when trying
to index large text files.
Example 1: Indexing a 5 MB text file runs out of memory with a 16 MB
max. heap size. So I increased the max. heap size to 51
What happens is that when you index using the analyzer, 'attent' gets indexed
(assuming you are using the same analyzer while indexing). When you search
for attent*, the query formed is for attent.
When you search for attenti*, it would look for all documents that contain
attenti* (which would not be pres
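The mismatch is easy to see if you simulate it outside Lucene: the index holds only the stemmed term, and the unanalyzed prefix either happens to be a prefix of it or not. A toy model (not Lucene code, names are my own):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Toy model of the mismatch: the index contains only stemmed terms,
// while wildcard/prefix query terms are NOT run through the analyzer.
public class StemPrefixDemo {
    // Pretend the stemmer reduced "attention" to "attent" at index time.
    static final Set<String> INDEXED_TERMS =
            new HashSet<String>(Arrays.asList("attent"));

    // A prefix query matches any indexed term starting with the prefix.
    public static boolean prefixMatches(String prefix) {
        for (String term : INDEXED_TERMS) {
            if (term.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println("attent*  -> " + prefixMatches("attent"));   // matches
        System.out.println("attenti* -> " + prefixMatches("attenti"));  // no match
    }
}
```

"attent" is a prefix of the indexed term, so attent* matches; "attenti" is longer than the stemmed term, so attenti* matches nothing.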
Hi Anusham
I understand that the Analyzer stems the word to its base form and stores that in the
DB. Users may not know these internals, and they tend to give "attenti*".
My question is: shouldn't the queryparser stem the word and then apply/expand the wild
card query? Is this a bug with the queryparser? OR Analy
Hi Ganesh,
It's the Snowball analyzer, which uses the English stemmer, that is causing this
behavior. When you search for 'attention', the query gets parsed to 'attent',
whereas when you search for 'attenti' it stays as it is because the
stemmer is not able to fit it anywhere.
Could you tell me what