Hi,

Unfortunately this doesn’t seem to be the same problem we’re experiencing. We 
have no snapshots on the pool or on the specific object:
rados lssnap -p default.rgw.buckets.data
0 snaps

rados -p default.rgw.buckets.data listsnaps 
5a5c812a-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
 
error listing snap shots 
default.rgw.buckets.data/5a5c812a-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6:
 (2) No such file or directory

Regards,
James.

> On 29 Jan 2021, at 09:47, Bartosz Skotnicki <bartosz.skotni...@tessel.pl> 
> wrote:
> 
> Hi,
> 
> I have the same problem with octopus 15.2.8
> 
> Check if you have snapshots of the storage pool, and check if you have 
> snapshots of the object:
> 
> rados -p default.rgw.buckets.data lssnap
> 
> rados -p default.rgw.buckets.data listsnaps object_name
> 
> 
> In my case only the object had snaps. I found a way to delete it, but you need 
> to check the object's location (PG and OSDs) and then, after stopping the OSD, 
> manually remove the object from every OSD that holds it (see the rough sketch 
> below). After this you will have an inconsistent PG, but you can fix that by 
> repairing the PG. I'm not sure whether this is a good way of fixing the 
> problem; in my case I'm looking for another solution that is faster (removing 
> one object took about 1-2 minutes per OSD on an HDD drive).
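> 
> A rough sketch of the approach, with placeholder IDs and object name (the 
> exact ceph-objectstore-tool invocation may differ depending on your release, 
> so treat this as an outline rather than a tested recipe):
> 
> # find the PG and the acting OSDs for the object
> ceph osd map default.rgw.buckets.data '<object_name>'
> 
> # on each of those OSDs: stop the daemon and remove the object offline
> systemctl stop ceph-osd@<osd_id>
> ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<osd_id> \
>     --pgid <pg_id> '<object_name>' remove
> systemctl start ceph-osd@<osd_id>
> 
> # the PG will then show up as inconsistent; repair it
> ceph pg repair <pg_id>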
> 
> 
> Best regards
> 
> Bartosz Skotnicki
> 
> 
> 
> ________________________________
> From: James, GleSYS <james.mce...@glesys.se>
> Sent: Friday, 29 January 2021 08:45:24
> To: ceph-users
> Subject: [ceph-users] Can see objects with "rados ls" but cannot delete them 
> with "rados rm"
> 
> Hi,
> 
> We have an issue in our cluster (octopus 15.2.7) where we’re unable to remove 
> orphaned objects from a pool, despite the fact that these objects can be 
> listed with “rados ls”.
> 
> Here is an example of an orphaned object which we can list (not sure why 
> multiple entries are returned with the same name… related to the issue, 
> perhaps?)
> 
> rados ls -p default.rgw.buckets.data | grep -i 
> 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
> 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
> 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
> 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
> 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
> 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
> 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
> 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
> 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
> 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
> 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
> 
> And the error message when we try to stat / rm the object:
> 
> rados stat -p default.rgw.buckets.data 
> 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
> error stat-ing 
> default.rgw.buckets.data/5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6:
>  (2) No such file or directory
> rados -p default.rgw.buckets.data rm 
> 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
> error removing 
> default.rgw.buckets.data>5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6:
>  (2) No such file or directory
> 
> The bucket with id “5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83” was 
> deleted from radosgw a few months ago, but we still have approximately 
> 450,000 objects with this bucket id that are orphaned:
> 
> cat orphan-list-202101191211.out | grep -i 
> 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83 | wc -l
> 448683
> 
> I can also see from our metrics that prior to deletion there was about 10TB 
> of compressed data stored in this bucket, and this has not been reclaimed in 
> the pool usage after the bucket was deleted.
> 
> Anyone have any suggestions on how we can remove these objects and reclaim 
> the space?
> 
> We’re not using snapshots or cache tiers in our environment.
> 
> Thanks,
> James.
> 
> 
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
