...parallel threads, right? Otherwise that may explain the behavior you
are seeing as well.
HitCollector hc = new HitCollector() {
    public void collect(int doc, float score) {
        try {
            // write the matching doc's stored fields to the output file
        } catch (Exception ex) {
            // ignore per-document failures so collection continues
        }
    }
};
is.search(query, hc);
-----Original Message-----
From: Aigner, Thomas [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 07, 2006 1:36 PM
To: java-user@lucene.apache.org
Subject: RE: Reading Performance
Thanks Grant and Erik for your suggestions. I will try both of them and
let you know i
Subject: RE: Reading Performance
Have you done any profiling to identify hotspots in Lucene versus
your application?
You might look into the FieldSelector code (used by IndexReader) in
the trunk version of Lucene; it can be used to load only the fields you
are interested in when getting a document from disk.
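[Editorial note: a minimal sketch of what the FieldSelector suggestion might look like against the trunk (2.1-era) API. The field name "id" and the reader/docId variables are illustrative placeholders, not from the original post:]

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.FieldSelector;
import org.apache.lucene.document.FieldSelectorResult;
import org.apache.lucene.index.IndexReader;

public class SelectiveLoad {
    // Load only the "id" field from disk; skip all other stored fields.
    static final FieldSelector ID_ONLY = new FieldSelector() {
        public FieldSelectorResult accept(String fieldName) {
            return "id".equals(fieldName)
                    ? FieldSelectorResult.LOAD
                    : FieldSelectorResult.NO_LOAD;
        }
    };

    static String idOf(IndexReader reader, int docId) throws Exception {
        // document(int, FieldSelector) only materializes accepted fields
        Document doc = reader.document(docId, ID_ONLY);
        return doc.get("id");
    }
}
```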
Well, the performance isn't bad considering you're executing the *search*
around 1,000 times...
One of the characteristics of a Hits object is that it's optimized for
getting the top 100 docs or so. To get the next 100 docs it re-executes the
query, repeatedly. I'd try using a HitCollector.
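[Editorial note: the arithmetic behind the "1,000 times" remark is that Hits re-runs the query roughly every 100 documents, so iterating 100,000 hits costs about 100,000 / 100 = 1,000 query executions. A HitCollector sees every matching doc id in a single pass instead. A rough sketch against the 2.0-era API; the counter is illustrative, not from the thread:]

```java
import org.apache.lucene.search.HitCollector;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class OnePassCollect {
    // Count every matching doc in one pass over the index,
    // instead of paging through a Hits object.
    static int countMatches(IndexSearcher is, Query query) throws Exception {
        final int[] count = {0};
        HitCollector hc = new HitCollector() {
            public void collect(int doc, float score) {
                count[0]++;  // or: buffer the doc id for later field loading
            }
        };
        is.search(query, hc);
        return count[0];
    }
}
```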
Howdy all,
I have a question about reading many documents and the time it takes.
I have a loop over the Hits object that reads a record, then writes it to a
file. When there is only 1 user on the IndexSearcher, reading, say,
100,000 records takes around 3 seconds. This is slow, but can
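[Editorial note: the loop being described presumably looks something like the sketch below; the field name "rec" and the destination writer are placeholders. Each hits.doc(i) call past the first ~100 cached results can trigger a re-execution of the query, which is what the replies above are addressing:]

```java
import java.io.PrintWriter;
import org.apache.lucene.document.Document;
import org.apache.lucene.search.Hits;

public class DumpHits {
    // Iterate a Hits object and write one stored field per match.
    static void dump(Hits hits, PrintWriter out) throws Exception {
        for (int i = 0; i < hits.length(); i++) {
            Document d = hits.doc(i);   // may re-execute the query in batches
            out.println(d.get("rec"));
        }
        out.flush();
    }
}
```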