I think there is still a bug here. When doing a bucket delete with 10,000
files in the bucket, it deletes some, but not all, of the files. If you run
s3cmd del --recursive repeatedly, it eventually gets them all. Needs more
debugging...
On Thu, Apr 10, 2014 at 10:29 PM, Matt Domsch wrote:
Thank you, Matt!
I am attempting to locate a sync job that will need to delete > 1000 objects
so I can test this. The one sync job I was working on when I encountered this
issue, I had to work around by deleting the target prefix altogether with the
AWS CLI in order to meet a deadline.
I will certainly encounter it again.
I've worked with Mike offline on this some today, and believe I have a fix
now on the github.com/s3tools/s3cmd master branch.
The failure here was a timeout on a batch delete call to S3 that included
roughly 40,000 files in a single batch delete request, rather than the
1,000 keys per request that the S3 multi-object delete API allows.
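The batching the fix needs can be sketched as follows. This is a minimal illustration in plain Python, not s3cmd's actual code; the function name `batch_keys` is my own, and the `delete_objects` call mentioned in the comment refers to boto3, which s3cmd does not use internally.

```python
# S3's multi-object delete accepts at most 1,000 keys per request,
# so a large delete must be split into chunks and sent as a series
# of requests instead of one oversized (and timeout-prone) call.

MAX_KEYS_PER_DELETE = 1000

def batch_keys(keys, batch_size=MAX_KEYS_PER_DELETE):
    """Yield successive lists of at most batch_size keys."""
    for i in range(0, len(keys), batch_size):
        yield keys[i:i + batch_size]

# Each yielded batch would then become one DeleteObjects request,
# e.g. with boto3: s3.delete_objects(Bucket=bucket,
#     Delete={"Objects": [{"Key": k} for k in batch]})
```

A 40,000-key delete then becomes 40 requests of 1,000 keys each, none of which should come near the request timeout.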
Running this same sync in debug, I see additional detail following the "INFO:
Summary: ..." line.
I'm not sure what I should anonymize in that output, so I'd prefer to share it
with a dev only. I can produce that on request.
Mike
On Apr 10, 2014, at 3:21 PM, WagnerOne wrote:
> Hi,
>
> Encou