Hi,
As a starter, you might find http://www.lucidimagination.com/Community/Hear-from-the-Experts/Articles/Debugging-Relevance-Issues-Search
useful.
The key thing to do first is to use Lucene's built-in explain method to
see why any particular document scores the way it does, then work from
there.
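For reference, here is a minimal sketch of calling explain() on the top hits — the index path, field name, and query are placeholders, and it assumes a current Lucene release:

import java.nio.file.Paths;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

public class ExplainScores {
    public static void main(String[] args) throws Exception {
        // Open an existing index (path and field names are just placeholders).
        DirectoryReader reader =
            DirectoryReader.open(FSDirectory.open(Paths.get("/path/to/index")));
        IndexSearcher searcher = new IndexSearcher(reader);

        Query query = new TermQuery(new Term("body", "lucene"));
        TopDocs hits = searcher.search(query, 10);

        // explain() returns the full scoring breakdown (tf, idf, norms, boosts)
        // for one document against the query.
        for (ScoreDoc sd : hits.scoreDocs) {
            Explanation explanation = searcher.explain(query, sd.doc);
            System.out.println("doc=" + sd.doc + " score=" + sd.score);
            System.out.println(explanation);
        }
        reader.close();
    }
}

Reading the explanation tree for a document you think is ranked too high (or too low) usually points straight at the term weight or field norm responsible.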
Hi,
In the search application I'm working on, I would like to prevent the
user from always getting the same search results for a certain query,
but without affecting result quality too much.
In order to do so, I'm processing the hits in smaller chunks and doing
some random shuffling inside each chunk, roughly as in the sketch below.
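This is only a sketch of the idea (the chunk size and class name are illustrative): shuffle hits within fixed-size chunks so the overall relevance ordering is roughly preserved while the exact order varies between queries.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.apache.lucene.search.ScoreDoc;

public class ChunkShuffle {
    // Randomize the order of hits inside each chunk only, so a hit can never
    // move more than (chunkSize - 1) positions away from its original rank.
    public static ScoreDoc[] shuffleInChunks(ScoreDoc[] hits, int chunkSize) {
        List<ScoreDoc> result = new ArrayList<>(Arrays.asList(hits));
        for (int start = 0; start < result.size(); start += chunkSize) {
            int end = Math.min(start + chunkSize, result.size());
            Collections.shuffle(result.subList(start, end)); // shuffle inside the chunk
        }
        return result.toArray(new ScoreDoc[0]);
    }
}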
Is there an "elegant" approach to partitioning a large Lucene index (~1TB)
into smaller sub-indexes other than the obvious method of re-indexing into
partitions?
Any ideas?
Thanks,
Shashi
Maybe you can adjust your ranking algorithm. For example, rank the most
recent results higher?
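As a rough sketch (the "timestamp" field name and the assumption that it is indexed as a numeric doc-values field are mine, not from your setup), you could keep relevance as the primary sort and use recency as the tiebreak:

import java.io.IOException;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TopDocs;

public class RecencySort {
    // Rank by relevance first, then by "timestamp" descending (most recent first).
    public static TopDocs searchWithRecency(IndexSearcher searcher, Query query)
            throws IOException {
        Sort sort = new Sort(
            SortField.FIELD_SCORE,
            new SortField("timestamp", SortField.Type.LONG, true)); // true = descending
        return searcher.search(query, 10, sort);
    }
}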
--
Chris Lu
-
Instant Scalable Full-Text Search On Any Database/Application
site: http://www.dbsight.net
demo: http://search.dbsight.com
Lucene Database Search in 3 minutes:
ht