IncRef/DecRef is the best way to handle this: you have to ensure the
reader is not closed until 1) your app wants to close it (e.g. a reopen
has completed), and 2) every in-flight query that had been using the
reader has completed.
Lucene in Action 2 (NOTE: I'm a coauthor) has a class
(SearcherManager) that implements this pattern.
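The reference-counting idea above can be sketched in plain Java. This is a toy stand-in (a hypothetical RefCountedReader, not Lucene's actual IndexReader API): the app holds one reference, each in-flight query takes another before searching and releases it afterwards, and the underlying resource is only really closed when the count reaches zero.

```java
// Toy sketch of the incRef/decRef pattern described above (hypothetical
// class, not Lucene's API). The resource closes only after BOTH the app
// has released its reference AND the last in-flight query has finished.
public class RefCountedReader {
    private int refCount = 1;      // the app's own reference
    private boolean closed = false;

    public synchronized void incRef() {
        if (refCount <= 0) throw new IllegalStateException("already closed");
        refCount++;
    }

    public synchronized void decRef() {
        if (--refCount == 0) {
            closed = true;         // really close the underlying reader here
        }
    }

    public synchronized boolean isClosed() { return closed; }

    public static void main(String[] args) {
        RefCountedReader r = new RefCountedReader();
        r.incRef();                 // a query starts: take a reference
        r.decRef();                 // the app "closes" (e.g. after a reopen)
        System.out.println("closed after app decRef: " + r.isClosed());
        r.decRef();                 // the in-flight query finishes
        System.out.println("closed after query decRef: " + r.isClosed());
    }
}
```

Note how the reader survives the app-side decRef because a query still holds a reference; the actual close happens on the last release.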
But what about the case in which I am using fuzzy query matching? Then the
highlighter package does not work.
On Sat, Feb 6, 2010 at 8:12 PM, Uwe Schindler wrote:
> There are two contrib packages for highlighting in the lucene distribution:
> highlighter and fast-vector-highlighter
>
> -
> U
It works.
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
> -----Original Message-----
> From: Rohit Banga [mailto:iamrohitba...@gmail.com]
> Sent: Sunday, February 07, 2010 11:33 AM
> To: java-user@lucene.apache.org
> Subject: Re: hit highlighting
Rohit,
what kind of problems are you facing when using fuzzy queries with
highlighting? could you give us more details and maybe a small code snippet
which isolates your problem?
simon
On Sun, Feb 7, 2010 at 11:32 AM, Rohit Banga wrote:
> but what about the case in which i am using fuzzy query match
// list of cities that have been indexed
// each city name is one document
public static final String[] names = {"New Delhi", "Bangalore", "Hyderabad",
        "Mumbai", "Chennai", "Kolkata", "Ahmedabad", "Kanpur",
try
Query tq = new FuzzyQuery(new Term("name","mumbai"));
instead of
TermQuery tq = new TermQuery(new Term("name","mumbai"));
simon
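The reason Simon's change is needed: in Lucene, FuzzyQuery is not a subclass of TermQuery; both ultimately extend Query, so a variable that must hold either one has to be declared as the common supertype. The relationship can be illustrated with toy stand-in classes (these are not Lucene's real Query classes):

```java
// Toy stand-ins for the class relationship: TermQuery and FuzzyQuery are
// siblings under Query, so `TermQuery tq = new FuzzyQuery(...)` cannot
// compile, while `Query tq = ...` accepts either.
abstract class Query {
    abstract String describe();
}

class TermQuery extends Query {
    String describe() { return "exact match"; }
}

class FuzzyQuery extends Query {
    String describe() { return "fuzzy match"; }
}

public class QueryTypeDemo {
    public static void main(String[] args) {
        // TermQuery tq = new FuzzyQuery();  // would NOT compile: siblings
        Query tq = new FuzzyQuery();         // fine: declared as the supertype
        System.out.println(tq.describe());
        tq = new TermQuery();                // the same variable can be swapped
        System.out.println(tq.describe());
    }
}
```

Declaring against the supertype also means the rest of the search code (which only needs a Query) does not change when you switch query types.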
On Sun, Feb 7, 2010 at 11:58 AM, Rohit Banga wrote:
It works!!! :)
Could you also offer a suggestion for the following?
Please have a look at the code above. It contains a list of cities that have
been added to the index.
// this is the code for indexing
void indexCities() throws Exception {
IndexWriter writer = new IndexWriter(FSDir
Could you suggest an example of implementing an analyzer that parses
CamelCase?
I can compose the available filters such as StopFilter, PorterStemFilter,
and LowerCaseTokenizer, but I don't know how to write a new filter
different from these.
Thank you!
> Could you suggest an example of implementing an analyzer
> that parses CamelCase?
>
> I can compose the available filters such as StopFilter,
> PorterStemFilter, and LowerCaseTokenizer, but I don't know
> how to write a new filter different from these.
> Thank you!
You can use WordDelimiterFilter.
Hi Ahmet,
I have heard of WordDelimiterFilterFactory, but I have never used Solr.
How can I download this class?
Can I use it in Lucene 3.0, or do I need to extend Analyzer and override
its methods?
Sorry if my questions are too detailed.
On Mon, Feb 8, 2010 at 1:11 AM, Ahmet Arslan wrote:
> > Would you like
> Hi Ahmet,
> I have heard of WordDelimiterFilterFactory, but I have never
> used Solr.
> How can I download this class?
http://repo1.maven.org/maven2/org/apache/solr/solr-core/1.4.0/
> Can I use it in Lucene 3.0, or do I need to extend Analyzer
> and override its methods?
It is not using the new TokenStream API.
Those are helpful details.
Thank you very much!
On Mon, Feb 8, 2010 at 1:37 AM, Ahmet Arslan wrote:
>
> > Hi Ahmet,
> > I have ever known WordDelimiterFilterFactory, but never use
> > Solr.
> > But how to download this class.
>
> http://repo1.maven.org/maven2/org/apache/solr/solr-core/1.4.0/
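The core splitting rule behind the suggestion above (WordDelimiterFilter breaks a token at lower-to-upper case transitions, among other rules) can be illustrated with a plain-Java sketch. This toy splitter is not the Solr filter itself and is not a Lucene TokenFilter; it only shows the case-transition rule:

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of case-transition splitting (one of the rules Solr's
// WordDelimiterFilter applies): start a new token at each lower-to-upper
// case boundary.
public class CamelCaseSplitter {
    public static List<String> split(String token) {
        List<String> parts = new ArrayList<>();
        int start = 0;
        for (int i = 1; i < token.length(); i++) {
            if (Character.isLowerCase(token.charAt(i - 1))
                    && Character.isUpperCase(token.charAt(i))) {
                parts.add(token.substring(start, i));  // close current part
                start = i;                             // new part begins here
            }
        }
        parts.add(token.substring(start));             // trailing part
        return parts;
    }

    public static void main(String[] args) {
        System.out.println(split("PorterStemFilter"));
        System.out.println(split("lucene"));
    }
}
```

A real analyzer would wrap this logic in a TokenFilter so the sub-words become positioned tokens; the splitting decision itself is just this character-class comparison.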
Robert,
We are using TREC-3 data and Ad Hoc topics 151-200. The relevance judgments
list contains 97,319 entries, of which 68,559 are unique document ids. The
TIPSTER collection which was used in TREC-3 is around 750,000 documents.
Should we (a) index the entire 750,000 document collection, or (b) index
only the judged documents?
You should do (a), and pretend you know nothing about the relevance
judgments up front.
It is true that you might make some change to your search engine and wonder:
how is it fair that I am bringing back possibly relevant docs that were never
judged (and thus are implicitly scored as non-relevant)?
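The "unjudged counts as non-relevant" convention can be made concrete with a toy metric calculation (hypothetical document ids, not TREC data): any retrieved document absent from the judgments map simply scores as non-relevant.

```java
import java.util.List;
import java.util.Map;

// Toy sketch: precision at k when unjudged documents are treated as
// non-relevant (hypothetical doc ids, not actual TREC qrels).
public class PoolingDemo {
    static double precisionAtK(List<String> ranked,
                               Map<String, Boolean> judgments, int k) {
        int relevant = 0;
        for (int i = 0; i < k && i < ranked.size(); i++) {
            // getOrDefault(false): an unjudged doc counts as non-relevant
            if (judgments.getOrDefault(ranked.get(i), false)) relevant++;
        }
        return (double) relevant / k;
    }

    public static void main(String[] args) {
        Map<String, Boolean> judgments = Map.of("d1", true, "d2", false, "d3", true);
        // "d9" was never judged, so it is implicitly non-relevant
        List<String> ranked = List.of("d1", "d9", "d3", "d2");
        System.out.println(precisionAtK(ranked, judgments, 4));
    }
}
```

Here two of the four retrieved documents are judged relevant, so precision at 4 is 0.5; if "d9" had been pooled and judged relevant, the same ranking would have scored higher.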
Hi All,
I am back to this one after a while.
It appears the file system I was using resides on software RAID disks. I ran
the same code on the same Linux machine, but on another file system residing
on SCSI disks. I didn't observe the problem there.
Both file systems are ext3.
So I am guessing