Richard Jahnel wrote:
Any idea why? Does the zfs send or zfs receive bomb out part way through?
I have no idea why mbuffer fails. Changing the -s from 128 to 1536 made it take
longer to occur and slowed it down by about 20%, but didn't resolve the issue.
It just meant I might get as far as 2.5GB before mbuffer bombed with a broken
pipe. Trying -r and -R with various values had no effect.
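For context, the usual way mbuffer gets plumbed into a send/receive
pipeline looks something like the sketch below; the pool, snapshot,
host name and port are placeholders, and the sizes are only examples
rather than recommendations.

  # On the receiving host: listen on a TCP port and feed zfs receive
  mbuffer -s 128k -m 1G -I 9090 | zfs receive -d tank

  # On the sending host: stream the snapshot into mbuffer over TCP
  zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O receiver:9090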
I found that where the network bandwidth and the disks' throughput are
similar (which requires a pool with many top level vdevs in the case of
a 10Gb link), you ideally want a buffer on the receive side which will
hold about 5 seconds worth of data. A large buffer on the transmit side
didn't help. The aim is to be able to continue streaming data across the
network whilst a transaction commit happens at the receive end and zfs
receive isn't reading, but to have the data ready locally for zfs
receive when it starts reading again. Then the network will stream, in
spite of the bursty read nature of zfs receive.
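As a rough sizing sketch (assuming a 10Gb link and that the stalls
happen on the receive side):

  # 10Gbit/s is about 1.25GB/s, so 5 seconds' worth is roughly 6GB
  mbuffer -s 128k -m 6G -I 9090 | zfs receive -d tank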
I recorded this in bugid
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6729347
However, I haven't verified the extent to which this still happens on
more recent builds.
Might be worth trying it over rsh if security isn't an issue, and then
you lose the encryption overhead. Trouble is that then you've got almost
no buffering, which can do bad things to performance, which is why
mbuffer would be ideal if it worked for you.
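An rsh pipeline would look roughly like this (host and dataset names
are placeholders):

  zfs send tank/fs@snap | rsh receiver zfs receive -d tank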
I seem to remember reading that rsh was remapped to ssh in Solaris.
No.
On the system you're rsh'ing to, you will have to "svcadm enable
svc:/network/shell:default", and set up appropriate authorisation in
~/.rhosts
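That is, roughly the following on the receiving host (the host and
user names in .rhosts are placeholders):

  svcadm enable svc:/network/shell:default
  # ~/.rhosts, one "host user" pair per line:
  sendinghost senduser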
I heard of some folks using netcat.
I haven't figured out where to get netcat nor the syntax for using it yet.
I used a buffering program of my own, but I presume mbuffer would work too.
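A netcat pipeline would be roughly as follows; the listen syntax
differs between netcat variants (some want "nc -l -p 9090"), and the
names and port are placeholders.

  # Receiving host:
  nc -l 9090 | zfs receive -d tank
  # Sending host:
  zfs send tank/fs@snap | nc receiver 9090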
--
Andrew Gabriel