Thanks a lot Adrien. This worked for us. We might wipe and recreate the
OSDs when the fix is backported.

~# lvcreate -L 512g -n manual-db-1 ceph-60bd102a-b11a-44cc-9c28-106e14603b88
~# ceph orch daemon add osd s3db23:data_devices=/dev/sdk,db_devices=/dev/ceph-60bd102a-b11a-44cc-9c28-106e14603b88/manual-db-1,encrypted=true lvm
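In case it helps anyone following along: a quick sanity check after adding the daemon might look like the following. This is a hypothetical sketch; "s3db23" is the host from the example above, and the OSD id must be replaced with whatever id the orchestrator assigned.

```shell
# Confirm the new OSD daemon is running on the host:
ceph orch ps s3db23 --daemon-type osd
# Confirm block.db landed on the manually created LV
# (replace <osd-id> with the assigned id):
ceph osd metadata <osd-id> | grep bluefs_db
```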

On Tue, Feb 24, 2026 at 15:39, Adrien Georget <[email protected]> wrote:

> Hi Boris,
>
> The workaround we used to replace disks affected by this bug is to force
> the devices during OSD creation via the orchestrator, without YAML specs.
> The VG and LV are created manually first, and then we used something like
> this:
>
> ceph orch daemon add osd 
> <host>:data_devices=/dev/data01,db_devices=/dev/fast01/db01
>
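> A hypothetical sketch of the manual VG/LV creation mentioned above,
> matching the /dev/fast01/db01 path in the example. The fast-device path
> and DB size are placeholders; adapt them to your hardware.
>
> ```shell
> # Assumed NVMe device path and block.db size -- placeholders only.
> vgcreate fast01 /dev/nvme0n1     # volume group on the fast device
> lvcreate -L 64g -n db01 fast01   # logical volume to hold the OSD's block.db
> ```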
>
> See the advanced OSD creation documentation:
> https://docs.ceph.com/en/reef/cephadm/services/osd/#creating-new-osds
>
> Adrien
> On 24/02/2026 at 14:57, Boris via ceph-users wrote:
>
> Thanks Marek, but this did not work here because the SSDs still have other
> block.db LVs on them.
>
>
> On Tue, Feb 24, 2026 at 14:46, Marek Szuba via ceph-users
> <[email protected]> wrote:
>
>
> On 2026-02-24 11:15, Robert Sander via ceph-users wrote:
>
>
> The orchestrator in Ceph 19 and 20 has a bug with hybrid OSDs:
> https://tracker.ceph.com/issues/72696
>
> There is a patch available but it has not been merged yet.
>
> Meanwhile, a workaround that has worked for us is:
>   * tell the orchestrator to delete unwanted non-hybrid OSDs _without
> zapping the drives_;
>   * manually wipe both underlying devices, with as little time in
> between the two as possible;
>   * explicitly tell the orchestrator to refresh its device list, so that
> it sees both available drives at the same time.
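>
> In practice the steps above might look roughly like this. This is only a
> sketch; the OSD id, host, and device paths are placeholders.
>
> ```shell
> # 1. Remove the unwanted OSD; ceph orch osd rm does not zap the
> #    drives unless --zap is given:
> ceph orch osd rm 12
> # 2. Once it is drained and removed, wipe both underlying devices
> #    back to back:
> wipefs -a /dev/sdX          # data device
> wipefs -a /dev/fastvg/dbX   # block.db LV on the shared SSD
> # 3. Force the orchestrator to refresh its device inventory:
> ceph orch device ls --refresh
> ```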
>
> --
> MS
> _______________________________________________
> ceph-users mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
>
>
>

-- 
The "UTF-8 problems" self-help group will, as an exception, meet in the
large hall this time.
