> I would treat having a separate cluster network
> at all as a serious cluster design bug.

I wouldn’t go quite that far; there are still situations where it can be the 
right thing to do.  For example, if one is stuck with only 1GE or 10GE 
networking but NICs and switch ports abound, then having separate nets, each 
with bonded links, can make sense.
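
Purely to illustrate what that looks like on the Ceph side, a minimal 
ceph.conf sketch (the subnets here are made up, and building the underlying 
bonds is up to your distro):

  [global]
      # front side: clients, MON/MGR, and OSD public traffic
      public_network  = 192.168.10.0/24
      # back side: OSD replication, recovery, backfill
      cluster_network = 192.168.20.0/24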

I’ve also seen network scenarios where bonding isn’t feasible, say a very 
large cluster where the TORs aren’t redundant.  In such a case, one might 
reason that decreasing osd_max_markdown_count can limit the impact of 
flapping, and that the impact of the described flapping might be amortized 
down toward the noise floor.
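
If you go down that road, it is just a config tweak, along the lines of the 
below (the value 3 is arbitrary; the default was 5 last I checked, so verify 
for your release):

  # make a repeatedly flapping OSD give up and exit sooner, instead of
  # rejoining over and over within osd_max_markdown_period
  ceph config set osd osd_max_markdown_count 3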

When bonding, always, always talk to your networking folks about the right 
xmit_hash_policy for your deployment.  Suboptimal values rob people of 
bandwidth all the time.
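
For the sake of illustration only, an LACP bond hashing on IP + port (so that 
individual flows spread across both members) might look roughly like this with 
plain iproute2; the interface names are made up, and whether layer3+4 is right 
for your traffic is exactly the question to put to the network folks:

  # create an 802.3ad (LACP) bond that hashes on layer 3+4 (IP + port)
  ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4 miimon 100
  # members must be down before they can be enslaved
  ip link set ens1f0 down; ip link set ens1f0 master bond0
  ip link set ens1f1 down; ip link set ens1f1 master bond0
  ip link set bond0 up

The switch side needs a matching LACP configuration, and if the two ports land 
on different TORs, those switches need MLAG or stacking for the bond to work 
at all.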

> Reason: a single faulty NIC or
> cable or switch port on the backend network can bring down the whole
> cluster. This is even documented:
> 
> https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd/#flapping-osds

I love it when people reference stuff I suffered through and then wrote about 
:D.  I haven’t seen it bring a whole cluster down as such, but it does have an 
impact, and it can be tricky to troubleshoot if you aren’t looking for it.  
FWIW, the clusters I wrote about there did have bonded private and public 
networks, but they weren’t very large by modern standards.

> 
> On Thu, Oct 10, 2024 at 3:23 PM Phong Tran Thanh <tranphong...@gmail.com> 
> wrote:
>> 
>> Hi ceph users
>> 
>> I have a 100G network card with dual ports for a Ceph node with NVMe disks.
>> Should I bond them or not? Should I bond 200G for both the public and
>> cluster networks, or separate it: one port for the public network and one
>> for the cluster?
>> 
>> Thank ceph users
>> --
>> Email: tranphong...@gmail.com
>> Skype: tranphong079
> 
> 
> 
> -- 
> Alexander Patrakov
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
