Hi,

On 04.11.25 at 2:01 PM, Mikael Öhman wrote:
My hosts have 42 HDDs, sharing 3 NVMes for DB/WAL partitions (14 OSDs per
NVMe).
It's all a containerized setup deployed with ceph orch, using LVM, so it's
probably the most conventional HDD-based setup one can do.

I have the basic osd spec:
---
placement:
  host_pattern: "mimer-osd02"
service_id: osd_spec
service_type: osd
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0

But unless the NVMe is completely empty, orch will simply never pick it up
(and wiping it is of course not an option, as that would bring down the 13
other OSDs sharing it).
Instead, orch flat out ignores the requirement that db_devices must go on
rotational: 0 devices and incorrectly suggests this broken setup:

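For reference, a way to see what cephadm would actually deploy from a spec
without applying it is the orchestrator's dry-run mode. This is only a
sketch: it assumes the spec above is saved as osd_spec.yaml on a host with
an admin keyring, and the exact preview output varies by Ceph release.

```shell
# Preview which devices the spec would claim, without deploying anything.
# (Hypothetical file name; substitute your own spec file.)
ceph orch apply -i osd_spec.yaml --dry-run

# Cross-check what the orchestrator considers available, including the
# rotational flag it uses to match data_devices vs. db_devices:
ceph orch device ls --wide
```

Comparing the dry-run preview against `ceph orch device ls` output makes it
easy to show whether the filter (rotational: 0 for db_devices) is being
honored or, as described above, ignored for a partially used NVMe.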

Ceph 19 has this bug:

https://tracker.ceph.com/issues/72696

Regards
--
Robert Sander
Linux Consultant

Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: +49 30 405051 - 0
Fax: +49 30 405051 - 19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]