Hi Boris,
The workaround we used to replace disks affected by this bug is to force
the devices during OSD creation with the orchestrator, without YAML specs.
The VG and LV are first created manually, and then we used something like
this:

  ceph orch daemon add osd <host>:data_devices=/dev/data01,db_devices=/dev/fast01/db01
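A rough sketch of the manual preparation that precedes that command. The VG
and LV names here (data-vg/data01, fast01/db01) and the device paths are
illustrative assumptions, not exact names from our setup; adjust them to your
hardware before running anything.

```shell
# Illustrative device names -- adjust to your hardware.

# VG/LV for the data portion of the OSD, on the slow device:
vgcreate data-vg /dev/sdb
lvcreate -l 100%FREE -n data01 data-vg

# VG/LV for block.db, carved out of the shared fast SSD
# (reuse the existing VG if it already holds other db LVs):
vgcreate fast01 /dev/nvme0n1
lvcreate -L 60G -n db01 fast01

# Then hand both LVs to the orchestrator explicitly:
ceph orch daemon add osd \
    <host>:data_devices=/dev/data-vg/data01,db_devices=/dev/fast01/db01
```

Because the devices are named explicitly, the orchestrator's buggy hybrid
device selection never runs.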
See the advanced OSD creation documentation:
https://docs.ceph.com/en/reef/cephadm/services/osd/#creating-new-osds
Adrien
On 24/02/2026 at 14:57, Boris via ceph-users wrote:
Thanks Marek, but this did not work here, because the SSDs still have other
block.db LVs on them.
On Tue, 24 Feb 2026 at 14:46, Marek Szuba via ceph-users
<[email protected]> wrote:
On 2026-02-24 11:15, Robert Sander via ceph-users wrote:
The orchestrator in Ceph 19 and 20 has a bug with hybrid OSDs:
https://tracker.ceph.com/issues/72696
There is a patch available but it has not been merged yet.
Meanwhile, a workaround that has worked for us is:
* tell the orchestrator to delete unwanted non-hybrid OSDs _without
zapping the drives_;
* manually wipe both underlying devices, with as little time in
between the two as possible;
* explicitly tell the orchestrator to refresh its device list, so that
it sees both available drives at the same time.
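The steps above could look something like the following. The OSD id and the
device paths are placeholders I made up for illustration; substitute your own
before running anything on a production cluster.

```shell
# 1. Remove the unwanted non-hybrid OSD via the orchestrator.
#    "ceph orch osd rm" does NOT zap the drives unless --zap is given,
#    which is exactly what we want here. (12 is a placeholder OSD id.)
ceph orch osd rm 12

# 2. Once the removal has finished, manually wipe both underlying
#    devices back to back, minimising the time between the two:
wipefs -a /dev/sdb        # placeholder: the data (HDD) device
wipefs -a /dev/nvme0n1    # placeholder: the block.db (SSD) device

# 3. Force the orchestrator to refresh its device inventory so it
#    sees both drives as available at the same time:
ceph orch device ls --refresh
```

With both drives visible simultaneously, the OSD service spec should then
recreate the OSD as a hybrid one.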
--
MS
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]