Hi, and welcome to Ceph,

On 23.06.25 22:37, Ryan Sleeth wrote:
I am setting up my first cluster of 9 nodes, each with 8x 20T HDDs and 2x 2T
NVMes. I plan to partition the NVMes into 5x 300G so that one partition can
be used by cephfs_metadata (SSD only), while the other 4x partitions will
be paired as db devices for 4x of the HDDs. The cluster will only be used
for cephfs and data will only be stored on its EC 4+2 HDD-only pool. Just a
simple and large file server, so performance isn't a primary concern. Each
node has 2x 10Gb network connections (one public, one cluster). All disks
encrypted (encrypted=true on OSD creation on top of LVM).
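
For reference, a layout like the one above can be described declaratively with a cephadm OSD service spec. The sketch below is an assumption about your deployment (it presumes cephadm/orchestrator and whole-device matching; a drivegroup spec pairs whole db_devices with data_devices itself, so if you pre-partition the NVMes by hand you would instead create those OSDs with ceph-volume directly):

```yaml
# Sketch of a cephadm OSD service spec, NOT your exact setup:
# lets the orchestrator carve the NVMes into db slots instead of
# manual 300G partitions. service_id and host_pattern are placeholders.
service_type: osd
service_id: hdd-with-nvme-db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1        # the 20T HDDs
  db_devices:
    rotational: 0        # the NVMes
  block_db_size: 300G    # size of each DB slot
  encrypted: true        # dmcrypt on top of LVM, as you planned
```
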


I would skip the cluster network and instead use a bond over both interfaces as the public network. The benefit of a separate cluster network is rather small, especially in your use case, and I prefer a reliable network connection to clients (given switches that support stacking and/or MLAG).
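
As an illustration, such a bond could look like the netplan fragment below. This is only a sketch under assumptions: interface names, the address, and LACP (which requires MLAG or stacked switches on the other end) are placeholders for whatever your environment actually uses.

```yaml
# /etc/netplan/01-bond0.yaml -- sketch; interface names and
# addresses are assumptions, adjust to your hardware.
network:
  version: 2
  ethernets:
    enp1s0f0: {}
    enp1s0f1: {}
  bonds:
    bond0:
      interfaces: [enp1s0f0, enp1s0f1]
      parameters:
        mode: 802.3ad              # LACP; needs MLAG/stacked switches
        lacp-rate: fast
        transmit-hash-policy: layer3+4
      addresses: [10.0.0.11/24]    # single public network for Ceph
```

With only a public network defined, Ceph simply carries replication and client traffic over the same (now redundant) 20Gb link.
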


Best regards,

Burkhard

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
