Matty writes:
 > Are there any plans to support record sizes larger than 128k? We use
 > ZFS file systems for disk staging on our backup servers (compression
 > is a nice feature here), and we typically configure the disk staging
 > process to read and write large blocks (typically 1MB or so). This
 > reduces the number of I/Os that take place to our storage arrays, and
 > our testing has shown that we can push considerably more I/O with 1MB+
 > block sizes.
 > 
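For the record, recordsize currently tops out at 128K (powers of two
from 512 bytes up to 128K). E.g. (dataset name just for illustration):

   # 128k is the largest accepted value; anything above is rejected
   # on current bits
   zfs set recordsize=128k tank/staging
   zfs set recordsize=1m tank/staging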

So other filesystems and raw devices clearly benefit from a larger
blocksize, but given the way ZFS schedules such I/Os, I don't expect
any more throughput from bigger blocks.
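If it were just the application's write size, a quick test like this
should show a gap; through ZFS the two usually land close together,
since writes get chopped into records and re-aggregated into large
device I/Os on the way down (paths are just for illustration):

   # same 4 GB of data, different application block sizes
   # (with compression on, /dev/zero is a poor source; use real data)
   dd if=/dev/zero of=/tank/staging/t1 bs=128k count=32768
   dd if=/dev/zero of=/tank/staging/t2 bs=1024k count=4096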

Maybe you're hitting something else that limits throughput?
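Worth watching what the devices actually see while a staging run is
going; if ops climb but bandwidth stays flat, the array or the path
to it is the likelier ceiling (pool name just for illustration):

   # per-vdev ops and bandwidth, sampled once per second
   zpool iostat -v tank 1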

-r


 > Thanks for any insight,
 > - Ryan
 > -- 
 > UNIX Administrator
 > http://prefetch.net
