How did you even get 60M objects into the bucket...?! The stuck requests
are likely only impacting the PG in which the bucket index is stored.
Hopefully you are not running other pools on those OSDs?
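
If you want to confirm exactly which PG and OSDs are involved, something
along these lines should work (a sketch, assuming the default index pool
name .rgw.buckets.index; the bucket ID comes out of bucket stats):

radosgw-admin bucket stats --bucket=&lt;bucket&gt;    # note the "id" field
ceph osd map .rgw.buckets.index .dir.&lt;bucket_id&gt;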

You'll need to upgrade to Jewel to get the --bypass-gc radosgw-admin
flag; that speeds up the deletion considerably, but with a 60M object
bucket I imagine you're still going to be waiting quite a few days for it
to finish. Without it this is basically impossible.
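
Once you're on Jewel, it's the same rm command you already found, just
with the extra flag:

radosgw-admin bucket rm --bucket=&lt;bucket&gt; --purge-objects --bypass-gc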

We are actually working through this issue right now on an old 6M object
bucket. We got impatient and tried resharding the bucket index to speed
things up further, but now the bucket rm is doing nothing. Waiting for
support advice from RH...

Cheers,

On 7 Jul. 2017 02:44, "Eric Beerman" <ebeer...@godaddy.com> wrote:

Hello,

We have a bucket that has 60 million + objects in it, and are trying to
delete it. To do so, we have tried doing:

radosgw-admin bucket list --bucket=<bucket>

and then cycling through the list of object names and deleting them 1,000
at a time. However, after ~3-4k objects have been deleted, the list call
stops working and just hangs. We have also noticed slow requests across the
cluster most of the time that command hangs. We know there is also a
"radosgw-admin bucket rm --bucket=&lt;bucket&gt; --purge-objects" command, but we
are nervous that it will cause slowness in the cluster as well, since the
listing did - or that it might not work at all, considering the list didn't.
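
For reference, our deletion loop looks roughly like this - a sketch,
assuming radosgw-admin supports --max-entries and --marker for bucket list
(and using jq for brevity; the actual deletes go through our own tooling):

marker=""
while : ; do
    # fetch the next page of (up to) 1,000 index entries
    radosgw-admin bucket list --bucket=&lt;bucket&gt; \
        --max-entries=1000 --marker="$marker" > page.json
    [ "$(jq 'length' page.json)" -eq 0 ] && break
    # ... delete the objects named in page.json here ...
    # resume the next request from the last key we saw
    marker=$(jq -r '.[-1].name' page.json)
done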

We are running Ceph version 0.94.3, and there is no bucket sharding on the
index.

What is the recommended way to delete a large bucket like this in
production, without incurring any downtime or slow requests?

Thanks,
- Eric

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com