Whether it is more efficient to send the compressed or the uncompressed
data depends on a number of factors.
If the data is already in the ARC for some other reason then it is
likely much more efficient to use that, because sending the compressed
blocks involves doing I/O to disk; reading the version from the
in-memory ARC does not.
If the data is in the L2ARC, that is still better than going out to the
main pool disks to get the compressed version - reading from disk is
always slower than reading from memory.
Depending on what your working set of data in the ARC is and the size of
the dataset you are sending, it is possible that the 'zfs send' will
cause data that was in the ARC to be evicted to make room for the blocks
that 'zfs send' needs. This is a perfect use case for having a large
L2ARC if you can't fit both your working set and the blocks for the
'zfs send' into the ARC.
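For reference, an L2ARC device can be added to an existing pool with
'zpool add ... cache' (the pool and device names below are just
placeholders for your own):

```shell
# Add an SSD as an L2ARC cache device to the pool 'tank'
# (c1t2d0 is an example device name)
zpool add tank cache c1t2d0

# Confirm the cache device is present and watch how it fills
zpool status tank
zpool iostat -v tank 5
```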
If you are using incremental 'zfs send' streams, the chances of
thrashing the ARC are probably reduced, particularly if you take them
frequently enough that they aren't too big.
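An incremental send only reads the blocks that changed between two
snapshots, so far less data flows through the ARC. A sketch of the
pattern (dataset, snapshot, and host names here are made up):

```shell
# Initial full send of the first snapshot
zfs snapshot tank/data@mon
zfs send tank/data@mon | ssh backuphost zfs receive backup/data

# Later, send only the delta between the two snapshots
zfs snapshot tank/data@tue
zfs send -i @mon tank/data@tue | ssh backuphost zfs receive backup/data
```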
I know people have monitored ARC hit rates when doing large 'zfs
send's. Using the DTrace Analytics on an SS7000 makes this very easy.
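Outside of the Analytics UI, you can watch the same counters on
Solaris/OpenSolaris with kstat(1M); roughly something like:

```shell
# Dump the raw ARC hit/miss counters from the kernel stats
kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses zfs:0:arcstats:l2_hits

# Or, if you have the arcstat.pl script installed, sample them
# every 5 seconds while the 'zfs send' is running
arcstat.pl 5
```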
It really comes down to the size of your working set in the ARC, the
size of your L2ARC, and your pattern of data access, all combined
with the volume of data you are 'zfs send'ing.
--
Darren J Moffat
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss