> WhitespaceAnalyzer is case sensitive. Is there a way to make it case
> insensitive?
You can build your custom analyzer using WhitespaceTokenizer + LowerCaseFilter.
The source code of an existing analyzer will show you the pattern:
public TokenStream tokenStream(String fieldName, Reader reader) {
    return new LowerCaseFilter(new WhitespaceTokenizer(reader));
}
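Wrapped up as a complete class, it might look like this (the class name is made
up for the example; this assumes the Lucene 3.0-era API, where
WhitespaceTokenizer takes a plain Reader):

import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

// splits on whitespace only, then lowercases every token
public class LowercaseWhitespaceAnalyzer extends Analyzer {
    @Override
    public TokenStream tokenStream(String fieldName, Reader reader) {
        return new LowerCaseFilter(new WhitespaceTokenizer(reader));
    }
}

Use the same analyzer at index time and at query time, so terms are lowercased
consistently on both sides.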
If I understood correctly, you can get this done with MultiFieldQueryParser;
see the sketch below.
Eduardo
-Original Message-
From: Mark Harwood [mailto:markharw...@yahoo.co.uk]
Sent: Friday, July 09, 2010 09:30 a.m.
To: java-user@lucene.apache.org
Subject: Re: Searching docs with multi-value fields
Checked that this works; however, WhitespaceAnalyzer is case sensitive. Is
there a way to make it case insensitive?
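Something along these lines, perhaps (a sketch against the Lucene 3.0 API; the
field names and the query string are placeholders):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.MultiFieldQueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class MultiFieldDemo {
    public static void main(String[] args) throws Exception {
        // the same query string is parsed against every listed field
        String[] fields = { "f1", "f2" };
        MultiFieldQueryParser parser = new MultiFieldQueryParser(
                Version.LUCENE_30, fields,
                new StandardAnalyzer(Version.LUCENE_30));
        Query query = parser.parse("a1"); // expands to f1:a1 OR f2:a1
        System.out.println(query);
    }
}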
On Sat, Jul 3, 2010 at 7:37 AM, Ahmet Arslan wrote:
> > I am using this analyzer:
> > @Analyzer(impl = org.apache.lucene.analysis.standard.StandardAnalyzer.class)
> >
> > "$" is not included
Today is your last chance to submit a CFP abstract for the 2010 Surge
Scalability Conference. The event is taking place on Sept 30 and Oct 1,
2010 in Baltimore, MD. Surge focuses on case studies that address
production failures and the re-engineering efforts that led to victory
in Web Applications.
Check out LUCENE-2454 and the accompanying slide show if your reason for doing
this is modelling repeating elements.
On 9 Jul 2010, at 13:43, "Hans-Gunther Birken" wrote:
> I'm examining the following search problem. Consider a document with two
> multi-value fields.
I'm examining the following search problem. Consider a document with two
multi-value fields.
Document doc = new Document();
doc.add(new Field("f1", "a1", Field.Store.YES, Field.Index.ANALYZED));
doc.add(new Field("f1", "a2", Field.Store.YES, Field.Index.ANALYZED));
doc.add(new Field("f1", "
(10/07/09 19:30), manjula wijewickrema wrote:
> Uwe, thanx for your comments. Following is the code I used in this case.
> Could you pls. let me know where I have to insert UNLIMITED field length,
> and how?
> Thanx again!
> Manjula
Manjula,
You can pass MaxFieldLength.UNLIMITED to the IndexWriter constructor:
http
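Roughly like this (a sketch against the Lucene 3.0 constructor; the index path
and the analyzer are placeholders):

import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class UnlimitedDemo {
    public static void main(String[] args) throws Exception {
        // the last argument caps how many terms per field get indexed;
        // UNLIMITED keeps them all, LIMITED stops at 10,000 by default
        IndexWriter writer = new IndexWriter(
                FSDirectory.open(new File("index")),
                new StandardAnalyzer(Version.LUCENE_30),
                true, // create a new index
                IndexWriter.MaxFieldLength.UNLIMITED);
        writer.close();
    }
}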
Thanx
On Fri, Jul 9, 2010 at 1:10 PM, Uwe Schindler wrote:
> > Thanks for your valuable comments. Yes, I observed that once the number of
> > terms in the document goes up, the fieldNorm value goes down
> > correspondingly. I think, therefore, there won't be any drawback due to the
> > variation of the total number of terms in the document. Am I right?
Uwe, thanx for your comments. Following is the code I used in this case.
Could you pls. let me know where I have to insert UNLIMITED field length,
and how?
Thanx again!
Manjula
code--
public class LuceneDemo {
    public static final String FILES_TO_INDEX_DIRECTORY = "filesToIndex";
> Thanks for your valuable comments. Yes, I observed that once the number of
> terms in the document goes up, the fieldNorm value goes down correspondingly.
> I think, therefore, there won't be any drawback due to the variation of the
> total number of terms in the document. Am I right?
With the current scoring model
Maybe you have MaxFieldLength.LIMITED instead of UNLIMITED? Then the number
of terms per document is limited.
The calculation precision is limited by the float norm encoding; it is also
affected if your analyzer removes stop words, so the norm may not be what you
expect.
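To make the precision point concrete: by default the length norm is
1/sqrt(numTerms), and it is stored as a single byte per field and document, so
nearby values can collapse to the same stored norm. A small sketch against the
Lucene 3.x Similarity API (the field name and term counts are arbitrary):

import org.apache.lucene.search.DefaultSimilarity;
import org.apache.lucene.search.Similarity;

public class NormPrecisionDemo {
    public static void main(String[] args) {
        Similarity sim = new DefaultSimilarity();
        float norm100 = sim.lengthNorm("contents", 100); // 1/sqrt(100) = 0.1
        float norm101 = sim.lengthNorm("contents", 101); // ~0.0995
        byte b100 = Similarity.encodeNorm(norm100); // one byte per doc and field
        byte b101 = Similarity.encodeNorm(norm101);
        // after decoding, both may come back as the same coarse value
        System.out.println(Similarity.decodeNorm(b100));
        System.out.println(Similarity.decodeNorm(b101));
    }
}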
-
Uwe Schindler
H.-H.-Meier-Allee 6
CLucene is a complete port of Java Lucene to C++, and it has Perl
bindings, although I'm not sure how up to date they are; you'll have to
check with their author. The CLucene development branch currently supports
the Lucene 2.3.2 API and index format.
See http://clucene.sourceforge.net/ for more details.
Hi,
I ran a small program to see how Lucene scores a single indexed
document. The explain() method gave me the following results.
***
Searching for 'metaphysics'
Number of hits: 1
0.030706111
0.030706111 = (MATCH) fieldWeight(contents:metaphys in 0), product of:
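A score explanation like the one above is typically obtained via
IndexSearcher.explain(); a sketch (assumes an already open IndexSearcher and a
parsed Query):

import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;

public class ExplainDemo {
    // prints the score breakdown for every hit of the given query
    static void explainHits(IndexSearcher searcher, Query query) throws Exception {
        TopDocs hits = searcher.search(query, 10);
        System.out.println("Number of hits: " + hits.totalHits);
        for (int i = 0; i < hits.scoreDocs.length; i++) {
            Explanation expl = searcher.explain(query, hits.scoreDocs[i].doc);
            System.out.println(expl.toString());
        }
    }
}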