On Fri, Mar 12, 2010 at 11:01 AM, Ivan Provalov <iprov...@yahoo.com> wrote:
> Just to follow up on our previous discussion, here are a few runs in which we 
> tested some of Lucene's different scoring mechanisms and other options.  We 
> used the Lucene patches for LnbLtcSimilarity and BM25, and the contrib 
> module for SweetSpotSimilarity.
>
> Lucene Default: 0.149
> Lucene BM25:    0.168
> SweetSpotSimilarity (Min: 10; Max: 1000; Steepness: 0.2): 0.173
> LnbLtcSimilarity (Pivot Norm + TF Default; Avg # of Terms: 450; slope: 0.25): 0.184
> LnbLtcSimilarity (Pivot Norm + TF Log; Avg # of Terms: 450; slope: 0.25):     0.186
> Lucene With Stemmer: 0.202
> Lucene With Lexical Affinities + Phrase Expansion + Stemmer: 0.21

Ivan, thanks for reporting back. It's more evidence that it's worth our
trouble to support additional scoring models.
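
For anyone who wants to try one of these configurations, here is a minimal
sketch of plugging a non-default similarity into a searcher. It assumes the
Lucene 3.x contrib/misc API; the index path, the searcher setup, and the
exact setLengthNormFactors overload (some releases add a discountOverlaps
boolean) are my assumptions, not details taken from Ivan's runs.

    import java.io.File;

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.misc.SweetSpotSimilarity;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.store.FSDirectory;

    public class SweetSpotSearchExample {
      public static void main(String[] args) throws Exception {
        // "index" is a placeholder path to an existing Lucene index.
        IndexReader reader = IndexReader.open(FSDirectory.open(new File("index")));
        IndexSearcher searcher = new IndexSearcher(reader);

        // Mirror the SweetSpotSimilarity run above: documents between 10 and
        // 1000 terms get the full length norm, and the norm falls off outside
        // that plateau with steepness 0.2.
        SweetSpotSimilarity sim = new SweetSpotSimilarity();
        sim.setLengthNormFactors(10, 1000, 0.2f);
        searcher.setSimilarity(sim);

        // ... build a Query and call searcher.search(...) as usual ...

        searcher.close();
        reader.close();
      }
    }

One caveat worth noting: in 3.x the length norms are baked into the index at
write time, so to get the SweetSpot plateau you also need the same similarity
on the IndexWriter (e.g. via IndexWriter.setSimilarity or
Similarity.setDefault) when building the index, not just at search time.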

-- 
Robert Muir
rcm...@gmail.com
