Vitaly Funstein [vfunst...@gmail.com] wrote:
> It's a bit of a guess on my part, but I did get better write and search
> performance with size <= 2K, as opposed to the default 16K.

For search that sounds plausible: search is very random-access heavy, and with 
smaller blocks a larger share of what ends up in the disk cache is data that is 
actually needed. For writes (assuming Solr writes, which are very bulk-oriented), 
it does not make sense that a smaller block size should be faster. Smaller block 
sizes mean more overhead and more fragmentation, both of which hurt bulk 
operations.

The 2K is not always a sensible choice BTW: Newer hard drives use 4K as the smallest 
physical unit: http://en.wikipedia.org/wiki/Disk_sector
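
Not Lucene-specific, but if you want to see the effect yourself, a quick-and-dirty 
sketch like the Java below (the file path and read sizes are just placeholders) times 
random positional reads at 2K versus 16K. Caveat: the OS page cache will skew the 
numbers unless you drop the caches between runs or use a file much larger than RAM.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Random;

// Rough illustration only: times random reads of a large file with
// different read sizes. Path and sizes are placeholders, not Solr settings.
public class ReadSizeProbe {
    public static void main(String[] args) throws IOException {
        Path file = Paths.get(args.length > 0 ? args[0] : "testdata.bin");
        int[] readSizes = {2 * 1024, 16 * 1024}; // 2K vs 16K, as discussed above
        int reads = 10_000;

        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            long fileSize = ch.size();
            Random rnd = new Random(42);
            for (int size : readSizes) {
                ByteBuffer buf = ByteBuffer.allocate(size);
                long start = System.nanoTime();
                for (int i = 0; i < reads; i++) {
                    long pos = (long) (rnd.nextDouble() * (fileSize - size));
                    buf.clear();
                    ch.read(buf, pos); // positional (random-access) read
                }
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.printf("read size %6d bytes: %d ms for %d random reads%n",
                        size, elapsedMs, reads);
            }
        }
    }
}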

- Toke Eskildsen
