Thanks, it was a straight-through Cat6. It works (with the occasional link drops reported earlier) at autoneg, but if I specify 1G it fails to pass traffic.
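For reference, by "specify 1G" I mean forcing the media type instead of letting the link autonegotiate. A minimal sketch, assuming the interface is igc0 (adjust the unit to match):

    # force 1G full duplex instead of autonegotiation
    ifconfig igc0 media 1000baseT mediaopt full-duplex
    # check what the link reports afterwards
    ifconfig igc0 | grep -E 'media|status'

Reverting with "ifconfig igc0 media autoselect" is what gets traffic flowing again here.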

On August 24, 2022 8:31:41 p.m. Tomoaki AOKI <junch...@dec.sakura.ne.jp> wrote:

On Wed, 24 Aug 2022 19:37:43 -0400
mike tancsa <m...@sentex.net> wrote:

On 8/24/2022 7:22 PM, Mike Jakubik wrote:
> What kind of HW are you running on? I'm assuming some sort of fairly
> modern x86 CPU with at least 4 cores. Is it multiple CPUs with NUMA
> nodes perhaps? In any case, if you are testing with iperf3, try using
> cpuset on iperf3 to bind it to specific cores. I had a performance
> issue on a modern EPYC server with a Mellanox 25Gb card. It turns out
> the issue was with the scheduler and how it was bouncing the processes
> around different cores/CPU caches. See "Poor performance with stable/13
> and Mellanox ConnectX-6 (mlx5)" on the freebsd-net mailing list for
> details.
>
> P.S. I also use a number of igc (Intel i225 @ 2.5Gb) cards at home and
> have had no issues with them.
>
>
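A minimal sketch of the cpuset pinning suggested above, assuming the iperf3 client should stick to CPUs 2-3 (the core list and address are placeholders):

    # pin the iperf3 client to CPUs 2 and 3 so the scheduler
    # stops bouncing it between cores/CPU caches
    cpuset -l 2-3 iperf3 -c 192.168.100.1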
Hi,

     Performance is excellent. It's just the random link drops that are
at issue. With default settings, running iperf3 on back-to-back NICs via
a crossover cable takes a good 20-45 min before the link drops. If
anything, I am surprised at how much traffic these small devices can
forward. IPsec especially is super fast on RELENG_13. The link drops
always seem to be on the sender. With flow control disabled, reducing
the link speed to 1G seems to make the issue go away, or at least it's
not happening in overnight testing. It's a Celeron N5105:
https://www.aliexpress.com/item/1005003990581434.html
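The overnight testing is nothing fancy, just a long back-to-back iperf3 run (addresses and duration are placeholders):

    # box A: listen
    iperf3 -s
    # box B: push traffic for 12 hours
    iperf3 -c 192.168.100.1 -t 43200

For the flow control part, I am assuming igc exposes the same knob as em/igb, i.e. a dev.igc.<unit>.fc sysctl where 0 disables flow control; check "sysctl -d dev.igc.0.fc" on your build to be sure:

    # disable flow control on igc0 (assumes the em/igb-style fc sysctl)
    sysctl dev.igc.0.fc=0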

Also, if you hook a couple back to back via a crossover cable, are you
able to manually set the speed to 1G and pass traffic? It doesn't work
for me.

     ---Mike

FYI:
 https://en.wikipedia.org/wiki/Medium-dependent_interface

Maybe you should use a straight-through cable for 1G or faster.


--
Tomoaki AOKI    <junch...@dec.sakura.ne.jp>



