Re: [S3tools-general] Out of memory: Kill process s3cmd - v1.5.0-beta1

2014-03-11 Thread WagnerOne
Thank you, Matt.

Mike

On Mar 10, 2014, at 7:33 PM, Matt Domsch wrote:
> yes, --exclude will remove the whole directory (and its child files and
> subdirs) from the run. At least from the local os.walk(); it can't do so
> getting the list from S3.
>
> On Mon, Mar 10, 2014 at 6:07 PM, W

Re: [S3tools-general] Out of memory: Kill process s3cmd - v1.5.0-beta1

2014-03-10 Thread Matt Domsch
yes, --exclude will remove the whole directory (and its child files and subdirs) from the run. At least from the local os.walk(); it can't do so getting the list from S3.

On Mon, Mar 10, 2014 at 6:07 PM, WagnerOne wrote:
> I've identified the subdir in my content to be transferred with the hug
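A minimal sketch of the approach described above. The local path, the bucket name, and the subdirectory name `bigdir` are hypothetical placeholders; only `--exclude` and its glob-style pattern come from s3cmd itself:

```shell
# Sync the whole local tree EXCEPT the oversized subdirectory.
# --exclude prunes matching paths during the local walk, so files
# under bigdir/ never enter s3cmd's in-memory transfer list.
s3cmd sync --exclude 'bigdir/*' /data/ s3://your-bucket/
```

The pattern is matched against paths relative to the source directory, so `bigdir/*` should skip that subtree while everything else syncs normally.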

Re: [S3tools-general] Out of memory: Kill process s3cmd - v1.5.0-beta1

2014-03-10 Thread WagnerOne
I've identified the subdir in my content to be transferred with the huge file count that I need to systematically transfer. Will --exclude allow me to sync everything but said directory, so I can then work within that subdir, or will I hit the same memory-related problems? In other words, if I -

Re: [S3tools-general] Out of memory: Kill process s3cmd - v1.5.0-beta1

2014-03-10 Thread WagnerOne
Thanks for the responses, folks. I appreciate your feedback!

Mike

On Mar 6, 2014, at 7:55 PM, Matt Domsch wrote:
> Thanks for the kudos. Unfortunately, memory consumption is based on the
> number of objects in the trees being synchronized. On a 32-bit system, it
> tends to hit a python Memo

Re: [S3tools-general] Out of memory: Kill process s3cmd - v1.5.0-beta1

2014-03-07 Thread Matt Rogers
Maybe try syncing using wildcards on the filename? i.e.

s3cmd sync /your/folder/a* s3://your-bucket
s3cmd sync /your/folder/b* s3://your-bucket
s3cmd sync /your/folder/c* s3://your-bucket
s3cmd sync /your/folder/d* s3://your-bucket
s3cmd sync /your/folder/e* s3://your-bucket
etc.

Note may have t
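The per-letter batches above can be wrapped in a loop. This is a hypothetical sketch (the paths and bucket name are invented, and error handling is minimal); one caveat worth noting is that a glob with no matches is passed through literally, so those iterations are skipped, and names beginning with other characters (dotfiles, punctuation) would still need their own pass:

```shell
# Split one huge sync into many smaller ones, one per leading character,
# so each s3cmd run only holds a fraction of the file list in memory.
for c in {a..z} {A..Z} {0..9}; do
  # Skip characters with no matching files; otherwise the unexpanded
  # glob string would be handed to s3cmd as a nonexistent path.
  ls /your/folder/"$c"* >/dev/null 2>&1 || continue
  s3cmd sync /your/folder/"$c"* s3://your-bucket
done
```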

Re: [S3tools-general] Out of memory: Kill process s3cmd - v1.5.0-beta1

2014-03-06 Thread Matt Domsch
Thanks for the kudos. Unfortunately, memory consumption is based on the number of objects in the trees being synchronized. On a 32-bit system, it tends to hit a python MemoryError syncing trees that are ~1M files in size. You are hitting a kernel OOM well before that though. You have several op
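The scale Matt describes can be estimated before starting a sync at all. This sketch uses only standard tools (the throwaway directory layout is invented for illustration); on a real tree, the count it prints is roughly the number of entries s3cmd would have to hold in memory at once, and a figure approaching ~1M signals trouble on a 32-bit system:

```shell
# Build a small throwaway tree, then count its files the way a local
# directory walk would see them.
tmp=$(mktemp -d)
mkdir -p "$tmp/photos" "$tmp/docs"
touch "$tmp/photos/a.jpg" "$tmp/photos/b.jpg" "$tmp/docs/c.txt"
count=$(find "$tmp" -type f | wc -l)
echo "files: $count"
rm -rf "$tmp"
```

Running `find /your/folder -type f | wc -l` on the actual source tree gives the same estimate without any setup.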