Hi folks! Thanks a lot for this thread! Awesome interactions.

As Enrico suggested, I'll go with the deep removal, comparing what exists
against what doesn't, but honestly my goal when I wrote that message was
indeed to avoid having to touch data this way.

I already knew how I could do it by deleting data that are effectively
orphaned, as I had already explored in depth how the data are stored, but I
was really hoping that a more "official" way to do it was implemented.

I really think that a dedicated group of orphaned-data commands would help,
such as:

orphans plan (look for orphans)
orphans plan apply (remove the orphans actually found)
orphans schedule (schedule recurring orphans plan runs)

That would at least give operators a way to clean up things like
index/metadata with missing data, data with no remaining index/metadata,
etc., with a bit more confidence than a manual for-loop operation that is
prone to human error.
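For illustration, here is a rough shell sketch of the kind of pass an
"orphans plan" command could automate: collect the block_name_prefix of
every image the RBD index knows about, collect the prefixes actually
present as data objects in the pool, and diff the two. This is only a
sketch under assumptions: the pool name `pool` and the file names are
placeholders, and the rbd/rados collection steps (which need a live
cluster) are shown as comments, with fabricated lists standing in for
their output.

```shell
# Hypothetical "orphans plan" sketch. On a real cluster the two lists
# would come from the index and from RADOS, e.g.:
#
#   rbd -p pool ls | xargs -n1 rbd -p pool info 2>/dev/null \
#     | awk '/block_name_prefix/ {print $2}' | sort -u > known_prefixes.txt
#
#   rados -p pool ls | grep '^rbd_data\.' \
#     | sed 's/^\(rbd_data\.[^.]*\)\..*/\1/' | sort -u > seen_prefixes.txt
#
# For a self-contained demonstration, fabricate the two sorted lists:
printf 'rbd_data.aaa1\nrbd_data.bbb2\n'                > known_prefixes.txt
printf 'rbd_data.aaa1\nrbd_data.bbb2\nrbd_data.ccc3\n' > seen_prefixes.txt

# Prefixes that have data objects but no index entry are orphan candidates
# (comm -13 prints lines unique to the second file):
comm -13 known_prefixes.txt seen_prefixes.txt   # prints rbd_data.ccc3
```

An "orphans plan apply" step would then feed each candidate prefix back
into a guarded `rados rm` pass, ideally after a dry-run review of the list.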

All in all, it's great to see that I'm not the only one concerned with such
tasks :-)

Thanks a lot for all those answers!

PS: I'll do the cleanup manually and, while doing that, investigate to find
the root cause of this situation, as we only have one image in this state
across 2 PB of data.

PS2: This Ceph cluster hosts an OpenStack platform that itself hosts K8s
clusters and various other workloads. None of those workloads (K8s
included) access the Ceph cluster directly; they all go through the
OpenStack layer one way or another.

On Mon, Jun 23, 2025 at 02:30, Reid Guyett <reid.guy...@gmail.com> wrote:

> I have a similar problem with some RBDs being deleted but still appearing
> in ls. To remove the stuck rbd I do the following:
> 1. rbd -p <pool> info <rbd_name> --debug_ms 1/1 2>&1 | grep header | tail
> 2. I can see that an OSD is listed in the output on the last line.
> 3. I restart that OSD
> 4. Delete the rbd
>
> I'm not sure if it is the same thing but it doesn't hurt to try.
>
>
> On Fri, Jun 13, 2025 at 4:34 AM Gaël THEROND <gael.ther...@bitswalk.com>
> wrote:
>
>> Hi Enrico, thanks a lot for your answer. However, I've already tried that,
>> and it can't work, as no data for this image still exist on the cluster,
>> and the image info ends up with the same error as the previous command.
>>
>> I think my only remaining solution would be to remove this image's
>> metadata from the metadata pool and from the index itself, but I don't
>> have a clear procedure for that yet, as I can't just rm the image; I would
>> need to identify it within the metadata pool of this EC data pool.
>>
>> If anyone has already tried that, I'm all ears ;-)
>>
>> On Wed, Jun 11, 2025 at 09:15, Enrico Bocchi <enrico.boc...@cern.ch>
>> wrote:
>>
>> > Hi Gael,
>> >
>> > You may want to try this clean-up procedure (it involves some
>> > lower-level manual manipulation of rados objects):
>> >
>> > 1. List the images and inspect the one in question
>> > rbd -p pool ls
>> > rbd -p pool info 00587c5a-8d54-40d1-b7fc-6aeb77d48a8a
>> > rbd image '00587c5a-8d54-40d1-b7fc-6aeb77d48a8a':
>> >      size 50 TiB in 13107200 objects
>> >      order 22 (4 MiB objects)
>> >      snapshot_count: 0
>> >      id: 9907dc2254ff2e
>> >      block_name_prefix: rbd_data.9907dc2254ff2e
>> >      format: 2
>> >      features: layering, exclusive-lock, object-map, fast-diff,
>> > deep-flatten
>> >      op_features:
>> >      flags:
>> >      create_timestamp: Thu Apr  6 08:24:25 2023
>> >      access_timestamp: Thu Apr  6 08:24:25 2023
>> >      modify_timestamp: Thu Apr  6 08:24:25 2023
>> >
>> > 2. Remove the header with rados commands
>> > rados -p pool rm rbd_id.00587c5a-8d54-40d1-b7fc-6aeb77d48a8a
>> > rados -p pool rm rbd_header.9907dc2254ff2e
>> >
>> > 3. Remove all the RBD data
>> > rados -p pool ls | grep '^rbd_data.9907dc2254ff2e.' | xargs rados -p pool rm
>> >
>> > 4. Remove from RBD list
>> > rbd -p pool rm 00587c5a-8d54-40d1-b7fc-6aeb77d48a8a
>> >
>> > Cheers,
>> > Enrico
>> >
>> >
>> > On 6/10/25 08:47, Gaël THEROND wrote:
>> > > Hi there! Nope it is not in the trash pool.
>> > >
>> > >> On Thu, Jun 5, 2025 at 12:37, Eugen Block <ebl...@nde.ag> wrote:
>> > >
>> > >> Is that image in the trash?
>> > >>
>> > >> `rbd -p pool trash ls`
>> > >>
>> > >> Quoting Gaël THEROND <gael.ther...@bitswalk.com>:
>> > >>
>> > >>> Hi folks,
>> > >>>
>> > >>> I have a quick question. On one of our pools we found an image that
>> > >>> no longer physically exists (the image has no data, no attached
>> > >>> snapshots, and is not the parent of another image) but is still
>> > >>> listed when performing `rbd -p pool ls`. However, it fails with a
>> > >>> nice "Error opening image <image_name>: (2) No such file or
>> > >>> directory" when we try to delete it using `rbd -p pool rm
>> > >>> <image_name>`.
>> > >>>
>> > >>> This pool is an EC-based pool, and neither the metadata pool nor the
>> > >>> data pool has any remaining data for that image.
>> > >>>
>> > >>> So my question is: is there a CLI way to force Ceph to
>> > >>> forget/abandon this image? A command that doesn't involve manually
>> > >>> manipulating the various maps would be preferred, but if I have to
>> > >>> dig that deep, I can.
>> > >>>
>> > >>> Thanks!
>> > >>> _______________________________________________
>> > >>> ceph-users mailing list -- ceph-users@ceph.io
>> > >>> To unsubscribe send an email to ceph-users-le...@ceph.io
>> >
>> > --
>> > Enrico Bocchi
>> > CERN European Laboratory for Particle Physics
>> > IT - Storage & Data Management  - General Storage Services
>> > Mailbox: G20500 - Office: 31-2-010
>> > 1211 Genève 23
>> > Switzerland
>> >
>>
>
