Ilya -

I don't think images-pubos/144ebab3-b2ee-4331-9d41-8505bcc4e19b is the problem;
it was just the last RBD image listed in the log before the crash. The commands
you suggested run cleanly against that image:

[root@os-storage ~]# rbd info images-pubos/144ebab3-b2ee-4331-9d41-8505bcc4e19b
rbd image '144ebab3-b2ee-4331-9d41-8505bcc4e19b':
        size 0 B in 0 objects
        order 23 (8 MiB objects)
        snapshot_count: 1
        id: f01052f76969e7
        block_name_prefix: rbd_data.f01052f76969e7
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features: 
        flags: 
        create_timestamp: Mon Feb 12 17:50:54 2024
        access_timestamp: Mon Feb 12 17:50:54 2024
        modify_timestamp: Mon Feb 12 17:50:54 2024
[root@os-storage ~]# rbd diff --whole-object images-pubos/144ebab3-b2ee-4331-9d41-8505bcc4e19b
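
Just in case it's useful, here is a quick check I could run to confirm that no
other image in that pool has the same problem. This is only a sketch (not run
yet); it tries to open every image and reports the ones that fail:

# sketch: try to open every image in the pool, print the ones that error out
for img in $(rbd ls images-pubos); do
    rbd info "images-pubos/$img" > /dev/null 2>&1 || echo "cannot open: images-pubos/$img"
done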

The other two images, which correspond to two OpenStack volumes stuck in the
"error_deleting" state, appear to be the actual cause of the problem:

[root@os-storage ~]# rbd info volumes-gpu/volume-28bbca8c-fec5-4a33-bbe2-30408f1ea37f
rbd: error opening image volume-28bbca8c-fec5-4a33-bbe2-30408f1ea37f: (2) No such file or directory

[root@os-storage ~]# rbd diff --whole-object volumes-gpu/volume-28bbca8c-fec5-4a33-bbe2-30408f1ea37f
rbd: error opening image volume-28bbca8c-fec5-4a33-bbe2-30408f1ea37f: (2) No such file or directory

[root@os-storage ~]# rbd info volumes-gpu/volume-ceef52d4-26c5-45dd-a10d-79584d9091e7
rbd: error opening image volume-ceef52d4-26c5-45dd-a10d-79584d9091e7: (2) No such file or directory

[root@os-storage ~]# rbd diff --whole-object volumes-gpu/volume-ceef52d4-26c5-45dd-a10d-79584d9091e7
rbd: error opening image volume-ceef52d4-26c5-45dd-a10d-79584d9091e7: (2) No such file or directory
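
Before we delete anything, I can also check whether those two images ended up
in the RBD trash or left stray objects behind in the pool. A sketch of what I
would run (same pool and volume IDs as above; the rados listing may be slow on
a large pool):

# sketch: look for the images in the trash, and for leftover objects whose
# names still contain the volume UUIDs
rbd trash ls volumes-gpu
rados -p volumes-gpu ls | grep -E '28bbca8c|ceef52d4'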

The account that owns these volumes belongs to a student who used OpenStack
during the Spring semester, so our plan is to clean up (delete) the account and
its contents. Before we do that, is there any information you'd like me to
gather? I'm happy to collect whatever would help.
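
For reference, the cleanup on the OpenStack side would look roughly like this
(a sketch only, assuming admin credentials and a recent python-openstackclient;
exact flags may differ on our release): reset the two volumes out of
"error_deleting", then delete them normally.

# sketch: reset the stuck volumes, then delete them
openstack volume set --state error 28bbca8c-fec5-4a33-bbe2-30408f1ea37f
openstack volume set --state error ceef52d4-26c5-45dd-a10d-79584d9091e7
openstack volume delete 28bbca8c-fec5-4a33-bbe2-30408f1ea37f ceef52d4-26c5-45dd-a10d-79584d9091e7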