Hi Ilya,

Thanks for the tip! Although it seems less like 'good practice', and I'm worried about
stability because ceph gives this output:
rbd: mapping succeeded but /dev/rbd0 is not accessible, is host /dev mounted?
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (22) Invalid argument

If I unmap too quickly in the same cephadm shell instance, the unmap exits with
an error saying /dev/rbd0 doesn't exist.

But the mapping does succeed and the image is accessible. Unmapping the image
in a new instance works.

I found this bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1872879

The noudev option isn't really made for my situation. It feels like a 'dirty fix',
and I'm worried that using this option will accumulate 'hanging tasks' in the
system...

Other ideas are welcome. I'll add this option to my scripts, and if I encounter
any surprises, I'll get back to this thread.
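For reference, this is roughly what the adjusted calls in my scripts will look like. The pool, image and device names are just the examples from my earlier mail; the only real change is the appended option. The commands are echoed here instead of executed, since actually running them needs a live cluster:

```shell
#!/bin/bash
# Sketch of the adjusted map/unmap calls with "--options noudev" appended.
# Pool, image and device names are illustrative, taken from my earlier mail.
POOL="libvirt-pool"
IMAGE="CmsrvXCH2-SWAP@snap_4"
DEV="/dev/rbd0"

# Map read-only, with noudev so rbd doesn't wait on udev events
# that never arrive inside the cephadm shell container:
MAP_CMD="cephadm shell -- rbd --pool $POOL map --read-only --options noudev --image $IMAGE"
echo "$MAP_CMD"

# Unmap with the same option so it returns to the prompt as well:
UNMAP_CMD="cephadm shell -- rbd unmap --options noudev $DEV"
echo "$UNMAP_CMD"
```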

Dominique.

> -----Original Message-----
> From: Ilya Dryomov <idryo...@gmail.com>
> Sent: Friday, April 25, 2025 9:05
> To: Dominique Ramaekers <dominique.ramaek...@cometal.be>
> CC: ceph-users@ceph.io
> Subject: Re: [ceph-users] rbd commands don't return to prompt
> 
> On Thu, Apr 24, 2025 at 11:29 PM Dominique Ramaekers
> <dominique.ramaek...@cometal.be> wrote:
> >
> > Hi,
> >
> > Weird thing happened with my ceph. I've got nice nightly scripts (bash
> > scripts) for making backups, snapshots, cleaning up etc... Starting from my
> > last upgrade to ceph v19.2.2, my scripts hang during execution. The rbd map
> > and rbd unmap commands don't return to the prompt. So my script
> > invokes a command like "cephadm shell -- rbd --pool libvirt-pool map --read-
> > only --image CmsrvXCH2-SWAP@snap_4" but my script doesn't continue
> > because the rbd map command never quits.
> >
> > I've also tried it myself. After killing my script a few times, I saw several
> > images being mapped. So let's unmap them. This happened...
> > root@hvs001:/# rbd unmap /dev/rbd0
> > ^C
> > root@hvs001:/# rbd unmap /dev/rbd1
> > ^C
> >
> > The devices aren't mounted so the unmapping isn't blocked. Strangely
> > enough, after activating the unmap command, the device is removed from
> > /dev but I still have to do a ^C to return to the shell...
> >
> > Do I have an issue with my ceph cluster? Has anybody experienced
> > something similar? Should I report a bug?
> 
> Hi Dominique,
> 
> Can you try appending "--options noudev" to "rbd map" and "rbd unmap"
> commands?
> 
> Thanks,
> 
>                 Ilya
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
