Hi Jean,

> I'm deleting a bucket with many objects like this:
> $ ./s3cmd --verbose --progress rb --force s3://BUCKETNAME
> 
> Memory use is at 1.8GB resident now.

I know this is an issue for users with extremely large buckets. For now,
before the memory utilisation is optimised, I suggest deleting the
objects in chunks. For example, if "s3cmd ls s3://BUCKETNAME/" gives you
s3://BUCKETNAME/folder1
s3://BUCKETNAME/folder2
s3://BUCKETNAME/folder3
you can do:
s3cmd --recursive del s3://BUCKETNAME/folder1
once it's done:
s3cmd --recursive del s3://BUCKETNAME/folder2
etc.
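The chunking above can be scripted instead of typed by hand. This is just a rough sketch, not anything shipped with s3cmd: the helper name delete_in_chunks is made up, and it assumes each input line ends with the s3:// prefix (as in the listing above). It only prints the del commands, so you can review them before running anything.

```shell
# Hypothetical helper: turn a bucket listing into one
# "s3cmd --recursive del" command per top-level prefix.
# Reads "s3cmd ls s3://BUCKETNAME/" output on stdin; the s3:// URI
# is taken as the last field of each non-empty line.
delete_in_chunks() {
    awk 'NF {print "s3cmd --recursive del " $NF}'
}
```

To preview: s3cmd ls s3://BUCKETNAME/ | delete_in_chunks
To actually run the deletes, pipe the result to sh, then "s3cmd rb s3://BUCKETNAME" once the bucket is empty.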

That should keep the memory usage down and make the whole run much faster.

Also give the s3cmd-0.9.9-rc3-speedup release a try. It's in SourceForge ->
Files -> testing. It should significantly trim down the runtime, but it
doesn't work from behind a proxy.

Michal

_______________________________________________
S3tools-general mailing list
S3tools-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/s3tools-general