Thank you,
the names are correct, though: the entries under /dev include them (and
an "ls -la" shows that these /dev/mapper/ceph--(...) are symlinks to
/dev/dm-XY).
I have also tried to mount them into the shell container, i.e. I ran
cephadm shell -m /dev/mapper/ceph--(...):/dev/mapper/ceph--(...)
(repeated for all of them), but in this case the error becomes
--> RuntimeError: No udev data could be retrieved for
/dev/mapper/ceph--(...)
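Since that error is about the udev database rather than the device node
itself, my next idea (the paths are just a guess on my part) is to bind
the udev run directory and the whole /dev tree into the shell container
as well:

cephadm shell -m /dev:/dev -m /run/udev:/run/udev

and then run "ceph-volume inventory" inside that container, or, if I
remember right, let cephadm set up the mounts itself with
"cephadm ceph-volume inventory".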
There is some confusion, I agree; the problem is that cephadm set this
up automatically and now I'm trying to debug why "ceph-volume inventory"
doesn't work.
--
Francesco Di Nucci
System Administrator
Compute & Networking Service, INFN Naples
Email: [email protected]
On 2025-11-10 16:21, Tim Holloway wrote:
I have not, but if memory serves, a double dash in a /dev/mapper name
is an escape: device-mapper doubles any dash that belongs to the VG or
LV name itself, since a single dash is what separates the VG part from
the LV part. Look more closely at the "spellings".
Though I think you may have some confusion between what is an LV and
what is a PV.
On my (AlmaLinux) server, ceph-volume inventory shows devices like
'/dev/nvme01', which is an LVM PV, and the OSD is known
as '/dev/mapper/nvme/osd7' where nvme is the vgname. So no fsids or
UUIDs in the names.
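If in doubt about which is which on your nodes, something along these
lines (again from memory) lists the PVs with the VG they belong to and
the LVs inside each VG:

pvs -o pv_name,vg_name
lvs -o vg_name,lv_name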
On 11/10/25 09:22, Francesco Di Nucci wrote:
Hi,
I have provisioned a Ceph cluster with cephadm and I'm having issues
with "ceph-volume inventory" on OSD nodes.
It fails with errors like
# ceph-volume inventory
--> RuntimeError:
/dev/mapper/ceph--009df43a--5864--43ca--9507--59a9c1207e08-osd--block--410f9c5c--c83b--4208--9060--fffd4829ae96
not found.
Even though that device exists under /dev/mapper:
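it shows up both as a mapper node and in the device-mapper listing,
e.g. (the grep filters are just illustrative)

ls -la /dev/mapper/ | grep ceph
dmsetup ls | grep ceph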
I've tried rebooting and recreating the device nodes with
"vgscan --mknodes", but the result is still the same. Has anyone
encountered the same errors?
Thanks in advance
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]