I've got a very large bucket, holding something like 10-20M files.

running:

   s3cmd ls s3Url

is not only taking forever (which is sort of to be expected with 10-20K 
separate requests), it's also consuming lots of memory.

Glancing at the code in S3.bucket_list(), it looks like each LIST 
request's (python) list is being appended to the ultimately returned 
list, which is then dumped to STDOUT in one fell swoop. Are there options 
I'm not aware of that would cause s3cmd to write directly to STDOUT instead 
of buffering all the results? If not, then I guess this would be a feature 
request.
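
To make the request concrete, here's a minimal sketch of the streaming 
behaviour I have in mind. It doesn't use s3cmd's actual internals; 
fetch_page is a hypothetical stand-in for a single paged LIST request, and 
the point is just that each page is printed and discarded rather than 
accumulated:

   # Hypothetical sketch only -- not s3cmd's real API. fetch_page(marker)
   # stands in for one S3 LIST request and is assumed to return
   # (keys, next_marker), with next_marker == None on the last page.

   def bucket_list_pages(fetch_page):
       """Yield one page of keys at a time instead of accumulating them."""
       marker = None
       while True:
           keys, marker = fetch_page(marker)
           yield keys          # hand this page straight back to the caller
           if marker is None:
               break

   def ls_streaming(fetch_page):
       # Memory use stays bounded by one page (typically 1000 keys)
       # rather than growing to the whole 10-20M key listing.
       for keys in bucket_list_pages(fetch_page):
           for key in keys:
               print(key)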

Thanks for a great tool!

Brad
