Stu,
  I ran into similar circumstances when helping some folks transfer
  data from Fermilab (Chicago) to Renater (France).
  What options are you using for rsync?

  The buffer tuning you refer to is actually tuning two different things.

  The rsync socket options let you set the TCP send and receive buffer sizes.
  You've always been able to do this on the daemon side (via a config file),
  and the patch referenced in this email's subject line also lets you set
  buffers on the client side.  (I have not tested this, but Wayne described
  it as straightforward, so I assume it's correct.  When I worked on it,
  I used web100-instrumented Linux boxes that let me tweak the buffers
  during the transfer.)
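
  For example (a rough sketch; the buffer sizes, hostname and module name
  are placeholders, so size the buffers to your bandwidth-delay product),
  the daemon side takes a "socket options" line in rsyncd.conf, and a
  patched client can pass the same kind of thing via --sockopts:

      # rsyncd.conf on the daemon side (sizes are illustrative only)
      socket options = SO_SNDBUF=4194304 SO_RCVBUF=4194304

      # patched client, daemon-mode transfer; one option shown for brevity
      rsync -av --sockopts=SO_SNDBUF=4194304 bigdir/ server::module/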

  The ssh fix (Chris Rapier/PSC) addresses an ssh issue: ssh implements its
  own windowing protocol **on top of TCP**.  So even if you get the TCP
  buffers "right", rsync performance will be dismal **if** you use "-e ssh"
  or similar to encrypt the transfer.

  So (and maybe you've already done this), when using "-e ssh", you need to:
  1. use an ssh-patched system
  2. set the TCP buffers on the sender (at least the Tx buffer on that side)
  3. set the TCP buffers on the receiver (at least the Rx buffer on that side;
     a rough sysctl sketch for "2" and "3" follows below).
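
  For "2" and "3" on a box without autotuning, the Linux knobs I mean are
  roughly these (the 8 MB ceiling is only an example; again, size it to
  your bandwidth-delay product):

      # sender: raise the TCP send-buffer limits (values in bytes)
      sysctl -w net.core.wmem_max=8388608
      sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"

      # receiver: raise the TCP receive-buffer limits
      sysctl -w net.core.rmem_max=8388608
      sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"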

  When I did that, I got comparable performance from rsync and iperf;
  the transfer seemed to be disk-i/o limited (on the receiver side, which
  had cheaper disk i/o - about 20-30 MBytes/sec when there was no loss,
  so congestion control wasn't kicking in).
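
  If you want to repeat that comparison, the iperf side was roughly this
  (from memory; "receiver.example.org" is just a placeholder, and -w sets
  the socket buffer / window):

      # on the receiver
      iperf -s -w 4M

      # on the sender: 30-second test, reporting every 5 seconds
      iperf -c receiver.example.org -w 4M -t 30 -i 5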

  Also, if you're using Linux, kernels from about 2.6.8 onward include both
  sender-side and receiver-side TCP buffer autotuning, so as long as your
  max TCP buffers are large enough, the systems will self-adapt their TCP
  buffers, taking care of "2" and "3" above.  The patch that Wayne did is
  really to help folks with systems that don't do autotuning (pretty much
  everything except <1yr-old Linux kernels).
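
  On one of those autotuning kernels, all you really need to check is that
  the autotuning ceilings (the third field) are big enough, e.g. (values
  again only illustrative):

      # confirm receive-side autotuning is enabled (1 = on)
      sysctl net.ipv4.tcp_moderate_rcvbuf

      # raise the autotuning maxima for receive and send buffers
      sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
      sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"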

  Feel free to drop me a note (unicast?) if you'd like to discuss further.
I **think** you should be able to get a better rate w/o further changes to rsync.
  But I could be wrong... ;-)

Larry
--

At 2:50 PM +0800 5/4/06, Stuart Midgley wrote:
We see absolutely dismal performance from Canberra to Perth via AARNet or GrangeNet (gig connections across the country).  With standard rsync on a tuned TCP stack, we see about 700k/s.  I started playing with --sockopts and have increased the performance to 1.4M/s, which is better, but still way off the pace.

There are similar patches for ssh at

        http://www.psc.edu/networking/projects/hpn-ssh/

which work wonders (up to 25M/s where previously we were at 700k/s).

I would love to see similar performance gains from rsync as I've seen from the tuned ssh...

Stu.


--
Dr Stuart Midgley
Industry Uptake Program Leader
iVEC, 'The hub of advanced computing in Western Australia'
26 Dick Perry Avenue, Technology Park
Kensington WA 6151
Australia

Phone: +61 8 6436 8545
Fax: +61 8 6436 8555
Email: [EMAIL PROTECTED]
WWW:  http://www.ivec.org


