Hi Michal

> I know this is an issue for users with extremely large buckets.

Ah, good to know others are also feeling the pain ;-)

> For now,
> before the memory utilisation is optimised, I suggest to delete the
> objects in chunks,

Everything is in one massive bucket. The objects are smallish files
created by the quillen backup-to-S3 tool.
Is there any good way to delete those in chunks? E.g. saving all
object IDs to a text file and then iterating through it, deleting one
at a time?
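
Something along these lines is what I had in mind -- an untested
sketch, where BUCKETNAME and objects.txt are placeholders, and I'm
assuming the objects sit at the top of the bucket and that "s3cmd ls"
prints the s3:// URI as the last field of each output line:

  #!/usr/bin/env python
  # Untested sketch: dump every object URI to objects.txt with "s3cmd ls",
  # then delete them one at a time with "s3cmd del".
  import subprocess

  BUCKET = "s3://BUCKETNAME"          # placeholder bucket name

  # Step 1: save all object URIs to a text file
  listing = subprocess.check_output(["s3cmd", "ls", BUCKET]).decode()
  uris = [line.split()[-1] for line in listing.splitlines() if line.strip()]
  with open("objects.txt", "w") as f:
      f.write("\n".join(uris) + "\n")

  # Step 2: iterate through the list, deleting one object per call
  for uri in uris:
      subprocess.call(["s3cmd", "del", uri])

That would keep only one delete in flight at a time, so memory should
stay flat, though spawning s3cmd once per object will obviously be slow.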

When s3cmd reports
File s3://BUCKETNAME/1cc78b9209b129e9ab52a7a532ec4213dc7a7b05-667535 deleted
is that file really deleted, or is it still pending commit?

> Also give the s3cmd-0.9.9-rc3-speedup release a try. It's in SourceForge ->
> Files -> testing. It should significantly trim down the runtime but
> doesn't work from behind a proxy.

I'm not behind a proxy. I'll give it a try.

Regards,
-- 
jean                                              . .. .... //\\\oo///\\
