It seems the issue is indeed in the ssh layer. scp has the same problem, and 
some work has been done on “fixing” it:

http://www.psc.edu/index.php/hpn-ssh

From the paper's abstract:

        SCP and the underlying SSH2 protocol implementation in OpenSSH is
        network performance limited by statically defined internal flow
        control buffers. These buffers often end up acting as a bottleneck
        for network throughput of SCP, especially on long and high
        bandwidth network links.
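
For anyone wanting to try it: HPN-SSH is a patch set for OpenSSH, not a 
runtime flag, so both ends need the patched build. Assuming that, something 
like the sketch below should let rsync ride over it. The option names are 
from the HPN docs as I recall them, and the host and paths are made up:

    # assumes an HPN-patched OpenSSH on both client and server
    rsync -a -e "ssh -oTcpRcvBufPoll=yes -oHPNBufferSize=16384" \
        user@remote:/src/ /dst/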

ASE

On Jul 7, 2014, at 8:00 PM, A <publicf...@bak.rr.com> wrote:

> I'm going to be interested in the answer to this question one of these days 
> real soon, so I did some googling and found this blog:
> 
> http://gergap.wordpress.com/tag/rsync/
> 
> He talks about eliminating ssh and just using rsync's own protocol to speed 
> up the transfer.  I read elsewhere that the encryption is what's slowing it 
> down - which makes sense to me.
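> 
> For anyone wanting to try that route, the rsync daemon skips ssh entirely 
> (note the data then crosses the wire unencrypted). A minimal sketch, with a 
> made-up module name and paths:
> 
>     # on the receiving host: /etc/rsyncd.conf
>     #   [data]
>     #       path = /srv/data
>     #       read only = false
>     # then start the daemon with:  rsync --daemon
> 
>     # on the sending host (:: selects the daemon protocol):
>     rsync -av /src/ remotehost::data/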
> 
> Disclaimer: I don't know much about the author, the blog, rsync, or sshd 
> beyond what I've just read.  So if someone else comments with other 
> solutions, they almost certainly know more about it than I do.
> 
> P.S.  In regard to KVM files, the author makes an appeal to developers: "I 
> hope rsync will become more smart in the future and allows the combination 
> of “--inplace --sparse” or can even autodetect the best strategy." That 
> sounds good to me.  YMMV
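> 
> For what it's worth, the rsync versions I've seen reject that combination 
> outright, so for a VM image you currently have to pick one flag per run, 
> along these lines (file and host names are made up):
> 
>     # first copy: let the destination file be created sparse
>     rsync -av --sparse bigvm.img remotehost:/images/
>     # later updates: rewrite only the changed blocks in place
>     rsync -av --inplace bigvm.img remotehost:/images/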
> 
> 
> On 07/07/2014 01:18 PM, Adam Edgar wrote:
>> I am trying to transfer a group of files in whole using rsync with the 
>> --files-from option across a network with high bandwidth but relatively 
>> high latency. When I log into the remote machine I see an rsync command 
>> running like this:
>> 
>> rsync --server --sender -Re.sf -B16384 --files-from=- --from0 . /
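>> 
>> (For context, the client-side invocation that spawns a server line like 
>> that is roughly of this shape; the list file and host names here are 
>> made up:)
>> 
>>     rsync -R -B16384 --files-from=list.txt --from0 user@remote:/ /local/dest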
>> 
>> Note that I have used the -B option to increase the block size, but that 
>> seems to apply only to the blocks used for checksumming, not to actual 
>> network I/O. I came to this conclusion by running strace on the PIDs on 
>> both remote and local hosts. On the server side I got a stream of 4K 
>> writes on fd 1, each followed by a select on that same fd:
>> 
>> write(1, "\275\214\357\357\22y4\237\215Z\226\331\355Z!\340\373\227\340V~\20\35C\3\337_\27\257\236\321\204"..., 4092) = 4092
>> select(2, NULL, [1], [1], {60, 0})      = 1 (out [1], left {59, 999996})
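>> 
>> (If anyone wants to reproduce that kind of capture on a live transfer, 
>> something like this works, assuming a single rsync process on the box:)
>> 
>>     strace -f -e trace=write,select -p "$(pgrep -o rsync)"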
>> 
>> I’d like to see if I can increase the write block size to cut down on the 
>> delay of each select on this unnamed pipe to sshd.  I’m not sure whether 
>> that would help speed up the transfer, as the issue may be lower down at 
>> the sshd level (which I can’t strace as a non-root user).  If anyone has 
>> experience with slowness at the ssh level, please advise on which flags 
>> would let me speed things up there.
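>> 
>> For example, I can pass options through to ssh via rsync's -e, along these 
>> lines, but I don't know which knobs actually matter (the cipher choice and 
>> paths here are just a sketch):
>> 
>>     rsync -a -e "ssh -c aes128-ctr -o Compression=no" user@remote:/src/ /dst/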
>> 
>> Any help will be much appreciated.
>> 
>> Adam S Edgar
>> 

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
