Anthony,
I've amended my document regarding netdev_max_backlog - too big can be
as bad as too small.
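Before touching it, it's worth confirming that the backlog is actually
overflowing. A minimal check (a sketch, assuming a Linux host; the second
hex column of /proc/net/softnet_stat is the per-CPU count of packets
dropped because the backlog queue was full):

    # One row per CPU; the 2nd (hex) column counts packets dropped
    # because the backlog queue was full.
    awk '{print "cpu" (NR-1) " dropped(hex)=" $2}' /proc/net/softnet_stat

    # Current limit, for comparison:
    sysctl net.core.netdev_max_backlog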
Regarding tcp_max_syn_backlog - the kernel documentation says this is
increased automatically under load. I wouldn't touch it, except that
perhaps - in an environment like Ceph - I would set it to
I’m pretty sure I’ve seen that happen with QFX5100 switches and
net.core.netdev_max_backlog=25
net.ipv4.tcp_max_syn_backlog=10
net.ipv4.tcp_max_tw_buckets=200
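If anyone does experiment with these, here is a rough sketch of how I'd
apply and verify them (assuming root; the drop-in file name below is just
an example, and any numbers should be validated against your own baseline
first):

    # Record the current values so they can be restored:
    sysctl net.core.netdev_max_backlog net.ipv4.tcp_max_syn_backlog \
        net.ipv4.tcp_max_tw_buckets

    # Change one setting at a time at runtime (not persistent across reboots):
    sysctl -w net.core.netdev_max_backlog=<value>

    # Once settled, persist the settings in e.g. /etc/sysctl.d/90-ceph-net.conf
    # and reload:
    sysctl --system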
> On May 29, 2020, at 10:53 AM, Dave Hall wrote:
I agree with Paul 100%. Going further - there are many more 'knobs to
turn' than just Jumbo Frames, which makes the problem even harder.
Changing any one setting may just move the bottleneck, or possibly
introduce instabilities. In the worst case, one might tune their Linux
system so well th
Please do not apply any optimization without benchmarking *before* and
*after* in a somewhat realistic scenario.
No, iperf is likely not a realistic setup, because it will usually be
limited by the available network bandwidth, which is (or at least should
be) rarely maxed out on your actual Ceph setup.
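For a somewhat more Ceph-like before/after comparison, something along
the lines of rados bench against a scratch pool tends to be closer to
reality (a sketch; 'testbench' is just an example pool name, and
--no-cleanup keeps the objects around for the read phases):

    # Write phase, 60 seconds; keep the objects for the read tests:
    rados bench -p testbench 60 write --no-cleanup

    # Sequential and random read phases against the objects just written:
    rados bench -p testbench 60 seq
    rados bench -p testbench 60 rand

    # Remove the benchmark objects afterwards:
    rados -p testbench cleanup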
Paul
Hello.
A few days ago I offered to share the notes I've compiled on network
tuning. Right now it's a Google Doc:
https://docs.google.com/document/d/1nB5fzIeSgQF0ti_WN-tXhXAlDh8_f8XF9GhU7J1l00g/edit?usp=sharing
I've set it up to allow comments, and I'd welcome questions and
feedback. If
Hi,
I am using Ceph Nautilus on Ubuntu 18.04, working fine with MTU size 1500
(default). Recently I tried to update the MTU size to 9000.
After setting the jumbo frame, running ceph -s is timing out.
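One quick end-to-end check for a jumbo-frame change like this (a sketch,
assuming Linux ping; eth0 and MON_IP are placeholders for the Ceph
interface and a monitor address, and 8972 = 9000 minus 28 bytes of
IP/ICMP headers):

    # Confirm the interface really took MTU 9000:
    ip link show eth0

    # Send a full-size frame with fragmentation prohibited; if the switch
    # path does not carry jumbo frames end to end this fails, which would
    # match ceph -s timing out:
    ping -M do -s 8972 -c 3 MON_IP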
regards
Amudhan P