On Mon, Jul 19, 2010 at 9:06 AM, Richard Jahnel <rich...@ellipseinc.com> wrote:
> I've tried ssh with blowfish and scp with arcfour. Both are CPU-limited long before the
> 10g link is.
>
> I've also tried mbuffer, but I get broken pipe errors partway through the
> transfer.
>
> I'm open to ideas for faster ways to either zfs send directly or through a
> compressed file of the zfs send output.

If this is across a trusted link, have a look at the HPN patches to
OpenSSH.  There are three main benefits to these patches:
  - increased (and dynamic) buffers internal to SSH
  - adds a multi-threaded aes cipher
  - adds the NONE cipher for non-encrypted data transfers
(authentication is still encrypted)

If one end of the SSH connection is HPN-enabled, you can increase your
bulk data transfer rate by around 10-15%, just by adjusting the buffer
size in ssh_config or sshd_config (depending on which side has HPN).
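As a rough sketch, the buffer tuning might look like this (option names
as in the HPN patches; the buffer value below is illustrative, not a
recommendation -- tune it to your bandwidth-delay product):

```
# sshd_config (or ssh_config on the client side) -- HPN-patched OpenSSH only
HPNDisabled no          # make sure HPN features are active
HPNBufferSize 8192      # HPN internal buffer size; raise for high-BDP links
```

Plain, unpatched OpenSSH will reject these options, so only set them on
the HPN-enabled side.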

If both ends of the SSH connection are HPN-enabled, you can increase
your bulk data transfer rate by around 30%, just by adjusting the
buffer size in sshd_config.

Enabling the -mtr versions of the ciphers uses multiple CPU cores
for encryption/decryption, improving throughput.

If you trust the link completely (private link), you can enable the
NONE cipher on the ssh commandline and via sshd_config, and the data
transfer will happen unencrypted, thus maxing out the bandwidth.
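For example, something like the following (NoneEnabled/NoneSwitch are
HPN-specific options -- the server's sshd_config must also have
"NoneEnabled yes", and plain OpenSSH will refuse both; the hostname is
a placeholder):

```shell
# Client side: authenticate encrypted, then switch to the NONE cipher
# for the bulk data stream.  Only do this on a trusted, private link.
ssh -oNoneEnabled=yes -oNoneSwitch=yes receiver.example.com
```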

We can saturate a gigabit fibre link between two ZFS storage servers
using rsync.  You should be able to saturate a 10G link using zfs
send/recv, so long as both systems can read and write that fast.
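Putting it together, a minimal send/recv pipeline over such a link
might look like this (pool, dataset, and snapshot names are
placeholders; the NONE-cipher options assume HPN on both ends):

```shell
# Snapshot the source dataset, then stream it to the receiving box.
zfs snapshot tank/data@xfer1
zfs send tank/data@xfer1 | \
    ssh -oNoneEnabled=yes -oNoneSwitch=yes receiver.example.com \
    zfs recv -F backup/data
```

With the NONE cipher the pipeline should be disk-bound rather than
CPU-bound, which is what you want on a 10g link.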

http://www.psc.edu/networking/projects/hpn-ssh/

-- 
Freddie Cash
fjwc...@gmail.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
