https://docs.ceph.com/en/reef/rbd/rbd-snapshot/ should give you everything you 
need.

Sounds like maybe you have snapshots / clones that have left the parent 
lingering as a tombstone?
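
A quick way to check whether the parent image is sitting in the RBD trash
(just a guess at the cause, and harmless to run):

        rbd trash ls volume-ssd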

Start with

        rbd children volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28
        rbd info volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28
        rbd du volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28
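
If children show up, the usual path (a sketch only; the child image and
snapshot names below are placeholders) is to flatten each clone, then
unprotect and purge the snapshots, and only then remove the image:

        rbd flatten volume-ssd/<child-image>
        rbd snap ls volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28
        rbd snap unprotect volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28@<snap-name>
        rbd snap purge volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28
        rbd rm volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28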

Is that the only volume in that pool?  If targeted cleanup doesn't work, you 
could just delete the whole pool, but triple-check everything before taking 
action here.
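
For reference, removing a pool on recent releases means enabling
mon_allow_pool_delete first; something along these lines (only if you are
completely sure nothing else lives in that pool):

        ceph config set mon mon_allow_pool_delete true
        ceph osd pool rm volume-ssd volume-ssd --yes-i-really-really-mean-it
        ceph config set mon mon_allow_pool_delete false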


> On Sep 25, 2024, at 1:50 PM, bryansoon...@gmail.com wrote:
> 
> We have a volume in our cluster:
> 
> [r...@ceph-1.lab-a ~]# rbd ls volume-ssd
> volume-8a30615b-1c91-4e44-8482-3c7d15026c28
> 
> [r...@ceph-1.lab-a ~]# rbd rm 
> volume-ssd/volume-8a30615b-1c91-4e44-8482-3c7d15026c28
> Removing image: 0% complete...failed.
> rbd: error opening image volume-8a30615b-1c91-4e44-8482-3c7d15026c28: (2) No 
> such file or directory
> rbd: image has snapshots with linked clones - these must be deleted or 
> flattened before the image can be removed.
> 
> Any ideas on how I can remove the volume? Thanks
