On 17.08.21 16:34, Marc wrote:

ceph-volume lvm zap --destroy /dev/sdb
ceph-volume lvm create --data /dev/sdb --dmcrypt

systemctl enable ceph-osd@0


Hi Marc,

it worked! Thank you very much!

I have some questions:

1. ceph-volume already enables and starts the ceph-osd service, so I'm not required to run systemctl enable myself; is this correct?
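(For context, this is how I checked the unit state on the OSD host; a quick sketch, assuming OSD id 0 as in the commands above:)

```shell
# Check whether ceph-volume already enabled and started the OSD unit.
# Assumes OSD id 0, as created above; adjust the instance for other OSDs.
systemctl is-enabled ceph-osd@0
systemctl is-active ceph-osd@0
```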

2. I know it is also possible to define two separate partitions for an OSD, one for the journal and one for the data, sizing the journal partition according to the device throughput. In our case, how is the journal sized? Is it on the same partition (I suppose I can check by inspecting the device with lsblk)? Is there a significant performance gain in manually sizing the journal and putting it on a partition of a different device?
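(To make the question concrete, something like the following is what I had in mind; device names are hypothetical, and I'm assuming the FileStore --journal and BlueStore --block.db options apply to our setup:)

```shell
# Hypothetical layout: data on /dev/sdb, fast partition on /dev/nvme0n1p1.
# FileStore OSD with a separate journal partition:
ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/nvme0n1p1 --dmcrypt

# Or, for BlueStore, place the RocksDB metadata on the fast partition instead:
ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1 --dmcrypt
```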

3. As I read in the Red Hat documentation, for production clusters they suggest introducing new OSDs with prepare/activate instead of a one-shot OSD creation, to avoid "large amounts of data being rebalanced". In your opinion, is a gradual OSD integration feasible on a running cluster with several OSDs, waiting for each rebalancing operation to finish before activating the next one?
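(Again to make the question concrete, this is the kind of staged procedure I mean; the OSD id and fsid below are placeholders to be read from ceph-volume lvm list:)

```shell
# Prepare the OSD without activating it; no data movement starts yet:
ceph-volume lvm prepare --data /dev/sdb --dmcrypt

# Note the osd id and osd fsid of the prepared volume:
ceph-volume lvm list

# When the cluster is healthy, activate it (placeholder id/fsid):
ceph-volume lvm activate $OSD_ID $OSD_FSID

# Watch recovery/rebalance progress before adding the next OSD:
ceph -s
```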

Thank you very much again!

Francesco

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
