I just checked an OSD, and the "block" entry is indeed linked to storage via a UUID-named LV under /dev/mapper, not a raw /dev device. When Ceph builds an LVM-based OSD, it creates a VG named "ceph-<uuid>" and an LV named "osd-block-<uuid>", where each <uuid> is a UUID. So although you'd map the OSD to something like /dev/vdb in a VM, the name Ceph actually uses is UUID-based (and LVM-based), and thus not subject to change when the hardware changes, because the UUIDs are part of the metadata of the VGs and LVs Ceph creates.

Since I got that from a VM, I can't vouch for all cases, but I thought it especially interesting that Ceph was creating LVM counterparts even for devices that were not themselves LVM-based.
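
If anyone wants to check the same thing on their own boxes, here's a small sketch -- just something I threw together for illustration, not anything Ceph ships. It walks the default /var/lib/ceph/osd/ceph-<id> data directories (that layout is assumed) and prints where each "block" symlink points, so you can see the LVM path rather than a /dev/sdX or /dev/vdX name:

    #!/usr/bin/env python3
    # Illustrative sketch only: show where each OSD's "block" symlink points.
    # Assumes the default /var/lib/ceph/osd/ceph-<id> data directory layout
    # and that "block" is a symlink, as it is on LVM-based OSDs.
    import glob
    import os

    for block in sorted(glob.glob("/var/lib/ceph/osd/ceph-*/block")):
        link = os.readlink(block)          # e.g. /dev/ceph-<uuid>/osd-block-<uuid>
        target = os.path.realpath(block)   # e.g. /dev/dm-1 (the device-mapper node)
        print(f"{block} -> {link} (resolves to {target})")

The same identifiers also show up in the LV tags that ceph-volume writes, which is presumably why the mapping survives device renames.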

And yeah, I understand that it's the number of OSD data replicas that counts more than the number of hosts, but when an entire host goes down and there are only a few hosts, that can take a large bite out of the replicas.

   Tim

On 4/11/25 10:36, Anthony D'Atri wrote:
I thought those links were to the by-uuid paths for that reason?

On Apr 11, 2025, at 6:39 AM, Janne Johansson <icepic...@gmail.com> wrote:

On Fri, Apr 11, 2025 at 09:59, Anthony D'Atri <anthony.da...@gmail.com> wrote:
Filestore IIRC used partitions, with cute hex GPT types for various states and 
roles.  Udev activation was sometimes problematic, and LVM tags are more 
flexible and reliable than the prior approach.  There no doubt is more to it 
but that’s what I recall.
Filestore used to have softlinks to the journal device (if used)
which pointed to sdX, where that X of course would jump around if you
changed the number of drives in the box or the kernel's disk detection
order changed, breaking the OSD.

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io