Hello,

We're using s3cmd to transfer large files in 10 MB chunks.  About 80% of the
chunks transfer normally, taking about 90 seconds each (~10-11 MB/s).  The
other ~20% fail with "(104, 'Connection reset by peer')", followed by
"Retrying on lower speed (throttle=0.01)".  The retry reliably succeeds, but
the transfer then takes about 2,600 seconds (300-400 KB/s).  These times are
reliably reproducible.

The throttling seems overly aggressive, since the retried transfers are
almost 30x slower than the normal ones.  Looking at the code, I think the
intent is to sleep briefly after each block of send_chunk bytes is written.
But at 256,000 chunks of 4 KB each, sleeping for 1/100 of a second after
each one means 2,560 seconds spent sleeping... which exactly matches the
times we observed.
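
For reference, here's a stripped-down sketch of what I understand the send
loop to be doing (paraphrased, not the actual S3.py code; send_chunk and
throttle are the real names, everything else is mine):

    import time

    def send_file_sketch(conn, f, size_left, send_chunk=4096, throttle=0.0):
        # Push the file out in send_chunk-byte blocks, sleeping after
        # every block whenever a throttle is in effect.
        while size_left > 0:
            data = f.read(send_chunk)
            if not data:
                break
            conn.send(data)
            size_left -= len(data)
            if throttle:
                time.sleep(throttle)  # 0.01 s per 4 KB block adds up quickly

At throttle=0.01 those sleeps end up dominating the transfer time.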

Perhaps it should sleep for 0.001 seconds per chunk on the first retry,
instead of 0.01?  Even better might be to retry once with no sleep at all,
and only start sleeping 0.001 seconds per chunk on the second retry.
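
Something like this is what I have in mind (just a sketch of the idea, not
a patch; the function name and schedule are made up):

    def throttle_for_attempt(attempt):
        # attempt 0 = original try, attempt 1 = first retry.
        # Only start per-chunk sleeps on the second retry, and only fall
        # back to the current 0.01 s if 0.001 s still isn't enough.
        if attempt <= 1:
            return 0.0
        elif attempt == 2:
            return 0.001
        else:
            return 0.01

That would keep the common case fast while still backing off for genuinely
flaky connections.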

Thanks,
Chris