Hi

Our hosts have 3 NVMes and 48 spinning drives each.
We found that ceph orch sized the default LVM volume for the block_db at
1/3 of the total size of the NVMes.
I suspect that ceph only considered one of the NVMes when determining the
size, based on the closely related issue https://tracker.ceph.com/issues/54541

We have now started seeing BlueFS spillover events, so I'm looking for a
way to fix this.

The best idea I have so far is to manually specify "block_db_size" in the
osd_spec and then recreate the entire block_db, though I'm not sure whether
that just means we'll hit the same issue,
https://tracker.ceph.com/issues/54541, instead.
There would also be a lot of data to move in order to do this across a
total of 588 OSDs. Maybe there is a way to simply remove and re-add a
(bigger) block_db?
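
For clarity, the kind of spec change I have in mind looks roughly like
this (the device filters and the 250G figure are placeholders, not our
actual values):

service_type: osd
service_id: osd_spec_fixed_db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  block_db_size: '250G'

applied with "ceph orch apply -i osd_spec.yml". As far as I understand,
that only affects OSDs created after the change, so the existing OSDs
would still need their block_db rebuilt one way or another.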

I would appreciate any suggestions or tips.

Best regards, Mikael