On 15 May 2008, at 09:46, Michael McCandless wrote:

Mark Miller wrote:
It's been months since I've tested this sort of thing, but from what I
remember there is a point where, as you go higher, performance starts to drop very slowly. That point was lower than I'd expected, and it definitely created
what looked like sweet-spot settings.

This was my recollection as well, though it's really quite application-dependent. You should just test a spectrum and pick the best: let the computer tell you.
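The "test a spectrum" advice above could be sketched as a simple parameter sweep. This is only an illustration: `indexBatchCost`, the candidate sizes, and the simulated cost curve are all assumptions, standing in for wall-clock timing of a real indexing run (e.g. with `IndexWriter.setRAMBufferSizeMB` in Lucene) over a fixed document batch:

```java
public class BufferSweep {

    // Hypothetical stub: simulates a cost curve with a sweet spot near 48 MB.
    // In a real test, replace this with a timed run of IndexWriter over a
    // fixed batch of documents at the given RAM buffer size.
    static long indexBatchCost(int mb) {
        return Math.abs(mb - 48) + 10;
    }

    public static void main(String[] args) {
        // Candidate buffer sizes to sweep (assumed values for illustration).
        int[] candidates = {16, 32, 48, 64, 96, 128};
        int best = candidates[0];
        long bestCost = Long.MAX_VALUE;
        for (int mb : candidates) {
            long cost = indexBatchCost(mb);
            if (cost < bestCost) {
                bestCost = cost;
                best = mb;
            }
        }
        System.out.println("best RAM buffer (MB): " + best);
    }
}
```

With the simulated curve this prints `best RAM buffer (MB): 48`; with real timings the loop picks whatever size your hardware and documents actually favor.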

Sometimes my documents contain two million terms, sometimes they contain a few hundred. It is often the case that a few hundred thousand documents in a row have approximately the same number of terms. I believe my cache-size sweet spot varies over time and depends on a number of factors that could be simulated rather easily: number of threads, size of the index the cache is merged into, term saturation in the cache, and size of the documents about to be added. Are there any other factors I should look at?


       karl

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
