On 2013-01-17 16:04, Bob Friesenhahn wrote:
> If almost all of the I/Os are 4K, maybe your ZVOLs should use a
> volblocksize of 4K?  This seems like the most obvious improvement.
>
> Matching the volume block size to what the clients are actually using
> (due to their filesystem configuration) should improve performance
> during normal operations, and should reduce the number of blocks which
> need to be sent in the backup by reducing write amplification due to
> "overlap" blocks.


Also, while you are at it, it would make sense to verify that the
clients (i.e. the VMs' filesystems) do their I/Os 4KB-aligned: that
their partitions start at a 512-byte sector offset divisible by 8
inside the virtual HDDs, and that the FS headers align to that too,
so the first cluster is 4KB-aligned.
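
For example, a quick check could look like this (just a sketch of
mine; the start sectors are hypothetical, and I assume 512-byte
logical sectors as the guest sees them):

  # Check that each partition inside the virtual HDD starts on a 4KB
  # boundary (start sectors e.g. as reported by fdisk in the guest).
  LOGICAL_SECTOR = 512   # bytes per sector as seen by the guest
  ALIGNMENT = 4096       # desired 4KB alignment

  partition_starts = [2048, 1050624]   # hypothetical example values

  for start in partition_starts:
      offset = start * LOGICAL_SECTOR
      aligned = (offset % ALIGNMENT == 0)   # same test: start % 8 == 0
      print("sector %8d -> byte offset %12d : %s"
            % (start, offset, "4KB-aligned" if aligned else "MISALIGNED"))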

The classic MSDOS MBR layout did not guarantee that: it used 63
sectors per track as the cylinder size and offset factor, so the
first partition typically began at sector 63, which is not divisible
by 8. Newer OSes don't use the classic layout anymore (any offset is
allowable, and partitioning tools now tend to default to 1MiB
boundaries), and GPT-partitioned disks are generally well aligned
too.
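
The arithmetic makes the difference obvious:

  # Classic MBR first-partition start vs. the modern 1MiB convention
  for start in (63, 2048):
      offset = start * 512                 # bytes from disk start
      print("sector %4d = %7d bytes -> %s" % (start, offset,
            "4KB-aligned" if offset % 4096 == 0 else "NOT 4KB-aligned"))
  # sector   63 =   32256 bytes -> NOT 4KB-aligned
  # sector 2048 = 1048576 bytes -> 4KB-aligned (the 1MiB boundary)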

Overall, a single I/O in the VM guest changing one 4KB cluster in
its FS should translate to exactly one 4KB I/O in your backend
storage changing the dataset's userdata (without having to read a
bigger block and rewrite it via copy-on-write), plus some avalanche
of metadata updates (those still COWed) for ZFS's own bookkeeping.
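
To put rough numbers on the amplification when sizes or offsets do
not match (a toy model of mine, counting userdata only and ignoring
the metadata overhead):

  # How many userdata bytes one 4KB guest write dirties on the backend
  # for several volblocksize choices; a volblock that is only partly
  # overwritten must be read, modified and rewritten whole.
  def blocks_touched(offset, length, volblocksize):
      first = offset // volblocksize
      last = (offset + length - 1) // volblocksize
      return last - first + 1

  for vbs in (4096, 8192, 131072):
      aligned = blocks_touched(0,   4096, vbs) * vbs  # 4KB-aligned write
      shifted = blocks_touched(512, 4096, vbs) * vbs  # off by 512 bytes
      print("volblocksize=%6d: aligned write dirties %6d bytes, "
            "misaligned dirties %6d bytes" % (vbs, aligned, shifted))

Only the 4KB volblocksize with aligned guest I/O yields the ideal
1:1 behaviour described above.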

//Jim
