On 17.11.2018 07:40, Alex Kempshall via S3tools-general wrote:

> For a number of years I've been using s3cmd, from SlackBuilds.org, to upload 
> to Amazon S3. The other night it failed; I reran the command and it got a bit 
> further. I reran it again and it got a bit further. 
> 
>> s3cmd sync --delete-removed --limit-rate=36k 
>> /mnt/southsea/amazon_drive_encrypted/alex/Documents/ 
>> s3://mcmurchy1917-MyDocuments
> 
> The failure message I get is 
> 
>> ERROR: 
>> Upload of 
>> '/mnt/southsea/amazon_drive_encrypted/alex/.thunderbird/pxgvw4yz.default/global-messages-db.sqlite'
>>  part 4 failed. Use
>> /usr/bin/s3cmd abortmp 
>> s3://mcmurchy1917-thunderbird/pxgvw4yz.default/global-messages-db.sqlite 
>> DQ35UbS6CFcwaHk97RuT7oEIYp6U.iA6XLdIzePSQHgMQ76hBhjGMKP_D9YbCPy4lemnYdscfdepGgvA8hi87g--
>> to abort the upload, or
>> /usr/bin/s3cmd --upload-id 
>> DQ35UbS6CFcwaHk97RuT7oEIYp6U.iA6XLdIzePSQHgMQ76hBhjGMKP_D9YbCPy4lemnYdscfdepGgvA8hi87g--
>>  put ...
>> to continue the upload.
>> ERROR: S3 error: 400 (RequestTimeout): Your socket connection to the server 
>> was not read from or written to within the timeout period. Idle connections 
>> will be closed.
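
That 400 RequestTimeout is Amazon closing a connection it considered
idle, which happens when a single part takes too long to go out. Failed
runs can also leave incomplete multipart uploads sitting in the bucket
(and the stored parts are billed). Assuming a reasonably recent s3cmd,
something like this lists and cleans them up, with the bucket, key and
upload ID taken straight from your error message:

  # show in-progress / abandoned multipart uploads in the bucket
  s3cmd multipart s3://mcmurchy1917-thunderbird

  # abort the one left behind by the failed run
  s3cmd abortmp \
    s3://mcmurchy1917-thunderbird/pxgvw4yz.default/global-messages-db.sqlite \
    DQ35UbS6CFcwaHk97RuT7oEIYp6U.iA6XLdIzePSQHgMQ76hBhjGMKP_D9YbCPy4lemnYdscfdepGgvA8hi87g--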
> 
> If I retry, sometimes the command succeeds; other times it does not. 
> 
> I changed the command so that it does NOT limit the upload rate: 
> 
>> s3cmd sync --delete-removed 
>> /mnt/southsea/amazon_drive_encrypted/alex/Documents/ 
>> s3://mcmurchy1917-MyDocuments
> 
> It was still causing problems. 
> 
> I then noticed that the common factor was that the problem always seemed to 
> involve files that were split into multiple parts, so I changed the command 
> to decrease multipart-chunk-size-mb to 5 MB: 
> 
>> s3cmd sync --delete-removed --multipart-chunk-size-mb=5 
>> /mnt/southsea/amazon_drive_encrypted/alex/Documents/ 
>> s3://mcmurchy1917-MyDocuments
> That seemed to fix the problem. 
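
If 5 MB chunks keep working for you, you can make that the default
rather than passing the flag on every run. Assuming a stock s3cmd
setup, the key in ~/.s3cfg matches the command-line option:

  # in ~/.s3cfg
  multipart_chunk_size_mb = 5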
> 
> Can anyone explain what's going on here? Is my connection slow, or is it 
> corrupting data? Has something changed at Amazon in the last couple of 
> weeks? 
> 
> Thanks in anticipation of an explanation. 
> 
> Alex

In my experience, it's best to split a large file into whatever chunk
size you can upload to S3 in one minute, to avoid that socket timeout
error. From my server, on a pretty good pipe, I use
--multipart-chunk-size-mb=100.
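
A back-of-the-envelope check using the 36 kB/s figure from the original
command (just a sketch; Amazon doesn't publish the exact idle timeout):

  # one minute of sustained upload at the limited rate, in MB
  echo $(( 36 * 60 / 1024 ))   # prints 2

So under that rate limit the default 15 MB chunks need about seven
minutes per part and even 5 MB chunks need over two, which fits the
intermittent timeouts while the limit was on. Without the limit it
comes down to how fast the line actually sustains each part, so a
smaller chunk size is the safer bet on a slow or lossy connection.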

Jeff
_______________________________________________
S3tools-general mailing list
S3tools-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/s3tools-general
