https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=280386
Cheng Cui <c...@freebsd.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |c...@freebsd.org

--- Comment #25 from Cheng Cui <c...@freebsd.org> ---
(In reply to Kevin Bowling from comment #22)
(In reply to pascal.guitierrez from comment #23)

I had a similar experience last year while debugging an ENOBUFS error
returned to TCP when using bce(4) NICs. I am not sure whether the same kind
of solution applies here, but it may be worth checking.

<snap from 2023>
It turned out the root cause was that the default NIC send queue length was
too small. The ENOBUFS error came from the _IF_QFULL check in ifq.h.
However, tuning "sysctl net.link.ifqmaxlen" directly did not help, because
each NIC driver has a per-interface setup that configures its own device
tx/rx queues. I had to increase the tx queue "ifq_maxlen" through the
device sysctl "hw.bce.tx_pages". After tuning that, I could sustain a
stable 1 Gbps x 100 ms delay BDP.
</snap from 2023>

(A rough sketch of the BDP arithmetic and of checking these tunables is at
the end of this comment.)

Regarding review D4295, it reminds me that Linux has some related work at
the sender side, such as TCP Small Queues.

As for a workaround, you may also want to test the following two patches I
prepared on the stable/14 branch for improving TCP performance in
congestion control:

https://reviews.freebsd.org/D47218  << apply this patch first
https://reviews.freebsd.org/D47213  << apply this patch second
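
For reference, here is a minimal standalone sketch (userspace C, not the
kernel code itself) of why a small per-interface send queue cannot carry a
1 Gbps x 100 ms path: the bandwidth-delay product works out to thousands of
full-size frames, while the drop decision is effectively a length-vs-
ifq_maxlen comparison like the _IF_QFULL check mentioned above. The
1500-byte MTU and the example queue length of 50 are assumptions for
illustration only, not values taken from this bug.

/*
 * Illustration only: why a short per-interface send queue produces
 * ENOBUFS on a long fat pipe.  MTU and queue length below are assumed
 * example values.
 */
#include <stdio.h>

int
main(void)
{
	const double link_bps = 1e9;        /* 1 Gbps link */
	const double rtt_sec = 0.100;       /* 100 ms delay */
	const double frame_bytes = 1500.0;  /* assumed full-size Ethernet frame */
	const int example_ifq_maxlen = 50;  /* assumed small default queue length */

	/* Bandwidth-delay product: bytes in flight needed to fill the pipe. */
	double bdp_bytes = link_bps / 8.0 * rtt_sec;
	double bdp_frames = bdp_bytes / frame_bytes;

	printf("BDP: %.1f MB, about %.0f full-size frames\n",
	    bdp_bytes / 1e6, bdp_frames);

	/*
	 * The kernel's queue-full test is effectively a comparison of the
	 * current queue length against ifq_maxlen; once the queue is full,
	 * further enqueues fail and the caller sees ENOBUFS.
	 */
	int queued = example_ifq_maxlen;    /* pretend the queue is already full */
	if (queued >= example_ifq_maxlen)
		printf("queue full at %d packets -> ENOBUFS, far below %.0f\n",
		    example_ifq_maxlen, bdp_frames);
	return (0);
}

With the example numbers this prints a BDP of 12.5 MB, roughly 8333
full-size frames, which is orders of magnitude above a default queue length
in the tens of packets.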
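And if it helps to confirm what a given box is actually running with, the
small sketch below reads the two tunables named above via sysctlbyname(3).
It assumes a FreeBSD system where hw.bce.tx_pages is exposed as an integer
sysctl by the bce(4) driver, as described in this comment; on systems with
other NIC drivers the per-device name will differ, and the program just
reports the node as unavailable.

/*
 * Sketch: read the queue-related tunables discussed above via
 * sysctlbyname(3).  hw.bce.tx_pages is only present when bce(4) is in use.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

static void
print_int_sysctl(const char *name)
{
	int value;
	size_t len = sizeof(value);

	if (sysctlbyname(name, &value, &len, NULL, 0) == -1)
		printf("%-22s not available on this system\n", name);
	else
		printf("%-22s %d\n", name, value);
}

int
main(void)
{
	/* Global default length for per-interface send queues. */
	print_int_sysctl("net.link.ifqmaxlen");
	/* Per-driver TX ring sizing used by bce(4), per this bug comment. */
	print_int_sysctl("hw.bce.tx_pages");
	return (0);
}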