Hello

We tried to use cephadm with Podman to start 44 OSDs per host, but the 
deployment consistently stops after adding 24 OSDs per host.
We looked into the cephadm.log on the problematic host and saw that the 
command `cephadm ceph-volume lvm list --format json` got stuck.
We noticed the output of the command wasn't complete. We therefore switched 
to compacted JSON, which let us increase the number to 36 OSDs per host.
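For context, here is a minimal Python sketch (not ceph-volume itself; the keys and values are made up) showing how much smaller compacted JSON is than pretty-printed JSON for a structure of this shape, which is presumably why compacting let more OSDs through:

```python
import json

# Hypothetical data resembling `ceph-volume lvm list --format json` output
# for 44 OSDs (field names are illustrative, not the real schema).
osds = {
    f"osd.{i}": [{
        "lv_path": f"/dev/ceph-vg/osd-block-{i}",
        "tags": {"ceph.osd_id": str(i), "ceph.type": "block"},
    }]
    for i in range(44)
}

pretty = json.dumps(osds, indent=4)                 # pretty-printed JSON
compact = json.dumps(osds, separators=(",", ":"))   # compacted JSON

# The compact form is a fraction of the pretty-printed size, so it is
# less likely to hit whatever output limit truncates the command.
print(len(pretty), len(compact))
```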

If you need more information, just ask.


Podman version: 3.2.1
Ceph version: 16.2.4
OS version: Suse Leap 15.3

Greetings,
Jan
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io