Hi Devs,

Due to failures in our buckets caused by sharding issues in previous versions, we
have started copying the affected buckets to new buckets in order to clean up our
Ceph cluster.

After synchronizing the buckets with the AWS CLI, we are now in the phase of
deleting the old buckets.
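
For reference, the synchronization was roughly of this form (bucket names and the
endpoint URL here are placeholders):

aws s3 sync s3://old-bucket s3://new-bucket --endpoint-url http://our-rgw:8080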

We have tried, unsuccessfully:

radosgw-admin bucket rm --bucket=XXXX --purge-objects

The result is a loop; with debug enabled it repeatedly shows: "NOTE: unable to
find part(s) of aborted multipart upload of [object].meta"

After seeing this failure, we tried to clean up directly with "rados".

To do this, we listed all the objects in the data pool that belong to the bucket,
filtering by its marker_id.
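
Roughly like this (the marker value comes from "radosgw-admin bucket stats"; the
output file names are just examples):

radosgw-admin bucket stats --bucket=XXXX | grep marker
rados -p default.rgw.buckets.data ls > all_objects.txt
grep "48efb8c3-693c-4fe0-bbe4-fdc16f590a82.3886182.18" all_objects.txt > bucket_objects.txt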

Once that was done, we ran a script over the list to delete the objects in bulk
with "rados -p [rgw-pool.data] rm [object]".

The result for every object is similar to the following:

rados -p default.rgw.buckets.data rm 
48efb8c3-693c-4fe0-bbe4-fdc16f590a82.3886182.18__multipart_MBS-3369403d-e0bf-45e3-89ba-614b6d390dc5/CBB_BIM-EURODG/CBB_DiskImage/Disk_00000000-0000-0000-0000-000000000000/Volume_NTFS_00000000-0000-0000-0000-000000000001$/20200104230152/131.cbrevision.5K5_leiUZoHQsjBvUxw2QbM1WQPQLlc

error removing 
default.rgw.buckets.data>48efb8c3-693c-4fe0-bbe4-fdc16f590a82.3886182.18__multipart_MBS-3369403d-e0bf-45e3-89ba-614b6d390dc5/CBB_BIM-EURODG/CBB_DiskImage/Disk_00000000-0000-0000-0000-000000000000/Volume_NTFS_00000000-0000-0000-0000-000000000001$/20200104230152/131.cbrevision.5K5_leiUZoHQsjBvUxw2QbM1WQPQLlc:
 (2) No such file or directory


Any ideas, or a better way to clean these objects out of the cluster?

We estimate around 100 TB of these leftover objects.

Regards
Manuel
