On Wed, 15 Oct 2008, Gray Carper wrote:
> be good to set different recordsize parameters for each one. Do you have any
> suggestions on good starting sizes for each? I'd imagine filesharing might
> benefit from a relatively small record size (64K?), image-based backup
> targets might like a pretty large record size (256K?), databases just need
> recordsizes to match their block sizes, and HPC...I have no idea. Heh. I
> expect I'll need to get in contact with the HPC lab to see what kind of
> profile they have (whether they deal with tiny files or big files, etc).
> What do you think?

Pretty much the *only* reason to reduce the ZFS recordsize from its 
default of 128K is to support relatively unusual applications, such as 
databases, that do random reads/writes of small (often 8K) blocks. 
For sequential I/O, 128K is fine even if the application (or client) 
does its reads/writes using much smaller blocks.
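
Since recordsize is a per-dataset property, each workload can get its 
own setting. For example (the dataset names here are invented for 
illustration; the commands themselves are standard ZFS administration):

   # database dataset: match the database's 8K block size
   zfs set recordsize=8K tank/db

   # filesharing/backup/streaming datasets: keep the 128K default
   zfs inherit recordsize tank/shares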

For small-block random I/O you will find that ZFS performance improves 
immensely when the ZFS recordsize matches the application's block size. 
The reason is that ZFS does its I/O in units of the full recordsize, so 
an 8K update to a 128K record forces ZFS to read, modify, and rewrite 
the entire 128K block. That read-modify-write cycle adds latency and 
wastes I/O bandwidth and CPU.
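
One caveat (standard ZFS behavior, not specific to any setup discussed 
above): recordsize only applies to files written after the property is 
set, so existing files must be copied or recreated to pick up the new 
size. Setting it when the dataset is created avoids the problem:

   zfs create -o recordsize=8K tank/db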

Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
