On Wed, 16 Dec 2009, Bill Sprouse wrote:

I think one of the reasons they went to small record sizes was an issue where they were getting killed by reads of small messages, each of which had to pull in a full 128K record. The smaller record sizes seem to have improved that aspect at least. Thanks for the pointer to the Dovecot notes.

This is likely due to insufficient RAM. ZFS performs very poorly when it cannot cache full records in RAM but the (several/many) accesses are smaller than the record size: each small read forces the whole record to be fetched from disk again.
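To make the mismatch concrete, here is a back-of-the-envelope sketch (the 4K message size is an illustrative assumption) of the read amplification when small mail messages sit in large, uncached records:

```shell
# Read amplification for small reads against a large, uncached recordsize:
# every 4K message read drags a full 128K record off disk.
recordsize=$((128 * 1024))   # ZFS default recordsize, 128K
msgsize=4096                 # a hypothetical small 4K mail message
echo $((recordsize / msgsize))   # 32x the data actually needed
```

With enough RAM to keep the working set of records in the ARC, that amplification is paid once per record rather than once per message, which is why memory size matters so much here.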

Dovecot is clearly optimized for a different type of file system.

Something that is rarely mentioned is that zfs pools may be less fragmented on systems with lots of memory. The reason is that writes may be postponed until there is more data to write (up to 30 seconds), so more data is written contiguously or with a better layout. Synchronous write requests tend to defeat this, but using a SSD as an intent log may help: the synchronous writes land on the SSD, so the corresponding writes to the main disks may still be deferred and batched.
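For anyone wanting to try this, the setup is a one-liner; the pool name "tank" and device "c1t2d0" below are hypothetical placeholders for your own pool and SSD:

```shell
# Dedicate an SSD as a separate intent log (slog) so synchronous writes
# are satisfied by the SSD while the main-disk writes stay batched in
# the transaction group.
zpool add tank log c1t2d0

# Verify: the device should now appear under a "logs" section.
zpool status tank
```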

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
