Thank you, Ian.
I tried the AND search you suggested and it is working fine.
Regards,
Raghu
IT Compliance Systems Development
BARCLAYS
70 Hudson,
Jersey City, NJ
+1 201 499 6984 (O) +1 917 565 6276 (M)
raghavendra.k@barclays.com
-----Original Message-----
From: Ian Lea [mailto:ian
Do continue to experiment with Solr as a "testbed" - all of the analysis
filters used by Solr are... part of Lucene, so once you figure things out in
Solr (using the Solr Admin UI analysis page), you can mechanically translate
to raw Lucene API calls.
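For reference, a minimal sketch of what that translation looks like on the raw Lucene side (assuming Lucene 4.x and StandardAnalyzer; the field name "body" and the sample text are made up). It consumes the analysis chain directly and prints each token, which is roughly what the Admin UI analysis page shows you:

    import java.io.StringReader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.util.Version;

    public class AnalysisDemo {
        public static void main(String[] args) throws Exception {
            Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_43);
            // Run the chain over some sample text and print each token it emits.
            TokenStream ts = analyzer.tokenStream("body", new StringReader("The Quick Brown Fox"));
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                System.out.println(term.toString());
            }
            ts.end();
            ts.close();
        }
    }

Swapping in a different Analyzer (or a custom chain of tokenizer plus filters) is the only change needed once the Solr-side experiments have settled on one.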
Look at the standard tokenizer, it should
Oops... sorry, I just realized this was on the Lucene-user list. My response
was for Solr-ONLY!
-- Jack Krupansky
-----Original Message-----
From: Jack Krupansky
Sent: Thursday, June 27, 2013 1:11 PM
To: java-user@lucene.apache.org
Subject: Re: Language detection
I am working on an application that is using Tika to index text based documents
and store the text results in Lucene. These documents can range anywhere from
1 page to thousands of pages.
We are currently using Lucene 3.0.3, and I am using the StandardAnalyzer
to index and search for the
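For context, a rough sketch of that kind of Tika-to-Lucene pipeline on the 3.0.x API (the Tika facade call, field names, and index path below are assumptions, not details taken from the original setup):

    import java.io.File;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;
    import org.apache.tika.Tika;

    public class TikaToLucene {
        public static void main(String[] args) throws Exception {
            // Extract plain text from an arbitrary document with Tika.
            // Note: the Tika facade limits the length of extracted text by
            // default, which matters for documents running to thousands of pages.
            String text = new Tika().parseToString(new File(args[0]));

            // Index the extracted text with StandardAnalyzer (Lucene 3.0.x API).
            // MaxFieldLength.UNLIMITED avoids the default cap on indexed terms,
            // which very long documents would otherwise hit.
            IndexWriter writer = new IndexWriter(
                    FSDirectory.open(new File("index")),
                    new StandardAnalyzer(Version.LUCENE_30),
                    IndexWriter.MaxFieldLength.UNLIMITED);
            Document doc = new Document();
            doc.add(new Field("contents", text, Field.Store.NO, Field.Index.ANALYZED));
            writer.addDocument(doc);
            writer.close();
        }
    }
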
You can use the LangDetectLanguageIdentifierUpdateProcessorFactory update
processor to redirect languages to alternate fields, and then set the
non-English fields to be "ignored". But, the document would still be
indexed, just without the redirected text fields.
(Examples of using that update
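If the filtering has to happen on the client/Lucene side rather than through that Solr update processor, a different approach (not what Jack describes above) is to run a language identifier over the extracted text before indexing. A rough sketch using Tika's LanguageIdentifier; the class name and the idea of simply skipping non-English documents are my assumptions here:

    import org.apache.tika.language.LanguageIdentifier;

    public class EnglishFilter {
        // Returns true if the extracted text looks like English;
        // the caller would skip indexing the document otherwise.
        public static boolean looksEnglish(String text) {
            LanguageIdentifier identifier = new LanguageIdentifier(text);
            return "en".equals(identifier.getLanguage()) && identifier.isReasonablyCertain();
        }
    }
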
Hello,
is there some kind of a filter or component that I could use to filter
non-english text? I have a preprocessing step that I only want to index
English documents.
Best,
Gucko
Hi
We have recently upgraded from Lucene 3.6 to 4.3.1 and have encountered an
intermittent issue of IndexSearcher.search returning duplicate
documents (based on the Lucene doc id, not a custom field).
i.e.
TopDocs docs = indexSearcher.search(query, filter, 10, sort);
assert docs.sco
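For what it's worth, a small sketch of the kind of check that exposes the problem, continuing from the snippet above (variable names are assumptions): collect the returned doc ids into a set and compare sizes.

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.lucene.search.ScoreDoc;

    // Every ScoreDoc in a single TopDocs should carry a distinct doc id.
    Set<Integer> ids = new HashSet<Integer>();
    for (ScoreDoc sd : docs.scoreDocs) {
        ids.add(sd.doc);
    }
    // With duplicates present, the set ends up smaller than the hit array.
    boolean hasDuplicates = ids.size() < docs.scoreDocs.length;
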
Concatenating all your searchable fields into one is certainly what
I'd do. Simple and efficient.
And yes, you can perform range searches via the query parser - the
example you give matches the one in the docs at
http://lucene.apache.org/core/4_3_1/queryparser/org/apache/lucene/queryparser/classi
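For example, a minimal sketch against the 4.3 classic query parser (the default field, the date format, and the analyzer choice here are just assumptions):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queryparser.classic.QueryParser;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.util.Version;

    public class RangeQueryExample {
        public static void main(String[] args) throws Exception {
            QueryParser parser = new QueryParser(Version.LUCENE_43, "contents",
                    new StandardAnalyzer(Version.LUCENE_43));
            // Inclusive range on a hypothetical yyyyMMdd date field,
            // matching the syntax shown in the query parser docs.
            Query q = parser.parse("mod_date:[20020101 TO 20030101]");
            System.out.println(q);
        }
    }
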