I used the following settings to speed up indexing on a similarly sized
db table. If you have enough RAM, they might help you.
IndexWriter writer = new IndexWriter(fdDir, new StandardAnalyzer(), true);
writer.setMergeFactor(100);
writer.setMaxMergeDocs(99);
writer.setMaxBufferedDocs(...);  // 10 by default
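If it helps, here's the same tuning as a self-contained sketch. The index
path, the sample field, and the 1000 for maxBufferedDocs are assumptions on
my part, not values from the settings above, so tune them to your RAM.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;

public class FastIndexing {
    public static void main(String[] args) throws Exception {
        // open (and create) the index directory
        FSDirectory fdDir = FSDirectory.getDirectory("/path/to/index", true);
        IndexWriter writer = new IndexWriter(fdDir, new StandardAnalyzer(), true);

        writer.setMergeFactor(100);      // merge segments less often
        writer.setMaxMergeDocs(99);      // same cap as in the settings above
        writer.setMaxBufferedDocs(1000); // buffer more docs in RAM (default is 10)

        // in real code this would loop over the db rows
        Document doc = new Document();
        doc.add(new Field("lastname", "smith", Field.Store.YES, Field.Index.TOKENIZED));
        writer.addDocument(doc);

        writer.optimize();
        writer.close();
    }
}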
If you want it to work without tokenizing, you need to use something like
the PerFieldAnalyzerWrapper, with the KeywordAnalyzer for the city field
... the KeywordAnalyzer at query time will leave the query text
untokenized so it can match the untokenized value you indexed.
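Something like this is what I mean at query time (a minimal sketch, assuming
Lucene 2.0; the index path and the lastname field are just placeholders):

import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class CitySearch {
    public static void main(String[] args) throws Exception {
        // KeywordAnalyzer passes the query text for "city" through as one
        // untokenized, un-lowercased term; every other field still goes
        // through StandardAnalyzer.
        PerFieldAnalyzerWrapper analyzer =
            new PerFieldAnalyzerWrapper(new StandardAnalyzer());
        analyzer.addAnalyzer("city", new KeywordAnalyzer());

        QueryParser parser = new QueryParser("lastname", analyzer);
        Query query = parser.parse("city:Austin AND lastname:smith");

        IndexSearcher searcher = new IndexSearcher("/path/to/index");
        Hits hits = searcher.search(query);
        System.out.println(hits.length() + " hits");
        searcher.close();
    }
}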
--
Chris Lu
-
Instant Full-Text Search On Any Database/Application
site: http://www.dbsight.net
demo: http://search.dbsight.com
On 11/4/06, James Rhodes <[EMAIL PROTECTED]> wrote:
> Has anyone successfully implemented a web services front end
won't break the query in places you don't expect. But if you index Austin
UN_TOKENIZED, then search for it using, say, StandardAnalyzer, it'll look
for austin and they won't match. This may apply to Luke too. In Luke, you
can choose a different analyzer (WhitespaceAnalyzer comes to mind).
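In code the mismatch looks roughly like this (just a sketch; "city"/"Austin"
stand in for whatever field and value you actually indexed):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class TokenizeMismatch {
    public static void main(String[] args) throws ParseException {
        // "city" was indexed UN_TOKENIZED, so the index holds the exact term "Austin".

        // A TermQuery built with the exact, un-analyzed value will match it:
        Query exact = new TermQuery(new Term("city", "Austin"));

        // QueryParser with StandardAnalyzer lowercases the text to "austin",
        // which no longer matches the untokenized "Austin":
        Query analyzed = new QueryParser("city", new StandardAnalyzer()).parse("Austin");

        System.out.println("term query:   " + exact.toString("city"));    // Austin
        System.out.println("parsed query: " + analyzed.toString("city")); // austin
    }
}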
Has anyone successfully implemented a web services front end to remotely
search a Lucene index? I've tried to do it with the XFire stuff in
MyEclipse, but their default Aegis XML mapping stuff doesn't support the
Lucene Hits object. I'd like to avoid searching a remote index via RMI, but
for now I
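One way around a binding that can't map Hits is to copy the results into a
plain serializable bean before handing them to the service layer. A rough,
purely hypothetical sketch (the class and field names are made up):

import java.io.IOException;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.document.Document;
import org.apache.lucene.search.Hits;

// A flat, serializable bean is something an XML binding like Aegis can map,
// unlike Hits, which isn't serializable and holds on to the searcher.
public class SearchResult implements Serializable {
    private String lastname;
    private String city;
    private float score;

    public String getLastname() { return lastname; }
    public void setLastname(String v) { lastname = v; }
    public String getCity() { return city; }
    public void setCity(String v) { city = v; }
    public float getScore() { return score; }
    public void setScore(float v) { score = v; }

    // Copy the Lucene Hits into beans before returning them from the web service.
    public static List toResults(Hits hits) throws IOException {
        List results = new ArrayList();
        for (int i = 0; i < hits.length(); i++) {
            Document d = hits.doc(i);
            SearchResult r = new SearchResult();
            r.setLastname(d.get("lastname"));
            r.setCity(d.get("city"));
            r.setScore(hits.score(i));
            results.add(r);
        }
        return results;
    }
}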
I'm using the 2.0 branch and I've had issues with searching indexes where
the fields aren't tokenized.
For instance, my index consists of count, lastname, city, and state, and I used
the following code to index it (the data is in a SQL Server db):
if (count != 0) {
    doc.add(new Field("count", NumberU
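For context, a simplified sketch of what that indexing loop presumably does;
the number-padding helper (NumberTools) and the Store/Index options are my
stand-ins, not the actual code:

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumberTools;
import org.apache.lucene.index.IndexWriter;

// count, lastname, city, state would come from the SQL Server result set.
static void addRow(IndexWriter writer, long count, String lastname,
                   String city, String state) throws Exception {
    Document doc = new Document();
    if (count != 0) {
        // NumberTools pads the number so it sorts/compares as a string
        doc.add(new Field("count", NumberTools.longToString(count),
                          Field.Store.YES, Field.Index.UN_TOKENIZED));
    }
    doc.add(new Field("lastname", lastname, Field.Store.YES, Field.Index.UN_TOKENIZED));
    doc.add(new Field("city", city, Field.Store.YES, Field.Index.UN_TOKENIZED));
    doc.add(new Field("state", state, Field.Store.YES, Field.Index.UN_TOKENIZED));
    writer.addDocument(doc);
}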