Thanks Rob - I'm already splitting my 8GB file into 1GB chunks, but I'll
try making the chunks smaller!

I've found S3 to be less reliable than I'd hoped... (mostly due to
connection errors on Amazon's side).
Any code that uses S3 has to have really good retry logic! (our web
application [...]
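
For what it's worth, here's a rough sketch of the kind of retry wrapper
I mean. It just shells out to s3cmd and retries a fixed number of times
with a pause in between; the bucket name, attempt count and delay below
are made-up placeholders, not anything from our real code:

#!/usr/bin/env python
# Sketch of a retry wrapper around "s3cmd put" -- try a few times,
# sleeping between attempts, and give up with an error if none succeed.
import subprocess
import time

def put_with_retry(local_path, s3_uri, attempts=5, delay=30):
    for attempt in range(1, attempts + 1):
        # s3cmd exits non-zero when an upload fails (e.g. connection reset)
        rc = subprocess.call(["s3cmd", "put", local_path, s3_uri])
        if rc == 0:
            return
        print("attempt %d failed (exit %d), retrying in %ds" % (attempt, rc, delay))
        time.sleep(delay)
    raise RuntimeError("gave up on %s after %d attempts" % (local_path, attempts))

# e.g. put_with_retry("backup.tar.gz", "s3://my-bucket/backup.tar.gz")
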
I learned that this just happens from time to time. The suggested
solution was to break the file into pieces so that failures don't set
you back hours. Now I break my 4GB file into 0.5 GB chunks. I use a
script to split the file and send the chunks, retrying on failure. It's
been flawless for [...]
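
In case it's useful to anyone, here's a minimal sketch of that kind of
split-and-send script. The chunk size, bucket name and retry settings
are placeholders for the example, not my actual values:

#!/usr/bin/env python
# Sketch: split a large file into 0.5 GB pieces with the standard
# "split" utility, then upload each piece with s3cmd, retrying on failure.
import glob
import subprocess
import time

BUCKET = "s3://my-backups/"   # placeholder bucket
CHUNK = "500M"                # 0.5 GB pieces

def upload(path, retries=5, delay=60):
    for _ in range(retries):
        if subprocess.call(["s3cmd", "put", path, BUCKET]) == 0:
            return
        time.sleep(delay)
    raise RuntimeError("could not upload %s" % path)

def split_and_send(big_file):
    # "split" writes big_file.part_aa, big_file.part_ab, ...
    subprocess.check_call(["split", "-b", CHUNK, big_file, big_file + ".part_"])
    for piece in sorted(glob.glob(big_file + ".part_*")):
        upload(piece)

# e.g. split_and_send("dbdump.tar.gz")
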
Thank you both - Jeff and Michal.
I guess I'll have to write a more robust archiving script.
But I'll be ready when my database grows to over 5 gig :)
On Fri, 2010-01-29 at 11:55 +1300, Michal Ludvig wrote:
> Hi Rob,
>
> > Unfortunately I've been having a lot of "Connection reset by peer"
> > [...]

Rob Scala wrote:
> I've been using s3cmd for a while, and I haven't needed to use this
> mailing list until now. It's been a great tool.
>
> Unfortunately I've been having a lot of "Connection reset by peer"
> problems recently, and I don't know where the problem lies.
>
> I upload a 4.5 GB file [...]

Hi Rob,
> Unfortunately I've been having a lot of "Connection reset by peer"
> problems recently, and I don't know where the problem lies.
> [...]
> Could it be Amazon's problem, or a network problem?
On some days this happens to me as well, even on files as small as
100MB. After a couple of hours [...]