Hi,

I agree with configuring both interfaces as a bond. From my experience, a
separate public and cluster network on top of the bond still has the
following advantages:

Isolating public and cluster network traffic makes it easier to monitor
client traffic and inter-OSD traffic separately, and if it becomes
necessary later, you can also prioritise or limit client traffic via the
separate interface. It is also helpful when debugging and analysing issues
in the Ceph cluster.
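To make that concrete, here is a minimal sketch of how both networks could
sit on a single LACP bond. The interface names, VLAN IDs and subnets are
made up for illustration, and the host side assumes an Ubuntu-style netplan
setup; adapt it to your distribution and addressing:

cat > /etc/netplan/60-ceph-bond.yaml <<'EOF'
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad                  # LACP; needs stacked/MLAG switches
  vlans:
    bond0.100:
      id: 100
      link: bond0
      addresses: [192.168.100.11/24]   # public (client) network
    bond0.200:
      id: 200
      link: bond0
      addresses: [192.168.200.11/24]   # cluster (replication) network
EOF
netplan apply

# Tell Ceph which subnet is which (cluster-wide options)
ceph config set global public_network  192.168.100.0/24
ceph config set global cluster_network 192.168.200.0/24

This way both subnets share the bond's redundancy and bandwidth, but the
two kinds of traffic stay separable for monitoring or later shaping.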
Regards, Joachim

joachim.kraftma...@clyso.com
www.clyso.com
Hohenzollernstr. 27, 80801 Munich
Utting | HR: Augsburg | HRB: 25866 | USt. ID-Nr.: DE275430677

On Tue, 24 Jun 2025 at 10:25, Burkhard Linke
<burkhard.li...@computational.bio.uni-giessen.de> wrote:

> Hi and welcome to ceph,
>
> On 23.06.25 22:37, Ryan Sleeth wrote:
> > I am setting up my first cluster of 9 nodes, each with 8x 20T HDDs and
> > 2x 2T NVMes. I plan to partition the NVMes into 5x 300G so that one
> > partition can be used by cephfs_metadata (SSD only), while the other 4x
> > partitions will be paired as db devices for 4x of the HDDs. The cluster
> > will only be used for cephfs and data will only be stored on its EC 4+2
> > HDD-only pool. Just a simple and large file server, so performance
> > isn't a primary concern. Each node has 2x 10Gb network connections (one
> > public, one cluster). All disks encrypted (encrypted=true on OSD
> > creation on top of LVM).
>
> I would skip the cluster network and use a bond with both interfaces as
> public network. The benefits of a separate cluster network are rather
> small, especially in your use case. I prefer to have a reliable network
> connection to clients (given switches that support stacking and/or MLAG).
>
> Best regards,
> Burkhard
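For reference, here is a rough sketch of how the pools and one of the
encrypted OSDs described in the quoted message could be created with plain
ceph / ceph-volume commands. Pool, profile and rule names, device paths and
PG counts are placeholders; a cephadm OSD service spec with encrypted: true
would express the same OSD layout:

# EC 4+2 data pool restricted to HDDs, failure domain = host
ceph osd erasure-code-profile set ec42-hdd k=4 m=2 \
    crush-failure-domain=host crush-device-class=hdd
ceph osd pool create cephfs_data 128 128 erasure ec42-hdd
ceph osd pool set cephfs_data allow_ec_overwrites true   # required for CephFS on EC pools

# Metadata pool kept on the NVMe partitions (auto-classed as "ssd")
ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool create cephfs_metadata 32 32 replicated ssd-only

# --force is needed because the default data pool is erasure-coded
ceph fs new cephfs cephfs_metadata cephfs_data --force

# One encrypted HDD OSD with its DB on an NVMe partition (example paths)
ceph-volume lvm create --bluestore --dmcrypt \
    --data /dev/sda --block.db /dev/nvme0n1p2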