I am happy to run the tests with another cable (although the one I was
using is brand new). I still receive 940Mbit/s with FreeBSD 12.1 in the
exact same setup.
The only(!) difference is the physical mSATA SSD (one for OpenBSD, the
other for FreeBSD). They have identical specs, though.

Results with a different cable:

OpenBSD apu.liv.io 6.6 GENERIC.MP#4 amd64

apu# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.10.1.240, port 64453
[  5] local 10.10.1.241 port 5201 connected to 10.10.1.240 port 64454
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  16.8 MBytes   141 Mbits/sec
[  5]   1.00-2.00   sec  17.7 MBytes   149 Mbits/sec
[  5]   2.00-3.00   sec  17.5 MBytes   147 Mbits/sec
[  5]   3.00-4.00   sec  17.7 MBytes   148 Mbits/sec
[  5]   4.00-5.00   sec  17.3 MBytes   146 Mbits/sec

On 1/30/2020 5:04 PM, Peter J. Philipp wrote:
> On Thu, Jan 30, 2020 at 04:50:59PM +0100, livio wrote:
>> Hi Peter,
>>
>> Thanks for your reply. I would already be quite happy with ~500Mbit/s.
>> My tests do not involve a switch, just a notebook and the APU through
>> a Cat.6a cable. I achieve 940Mbit/s with the exact same setup but
>> FreeBSD 12.1 on the APU.
>>
>> I am happy to change parameters, provide additional logs and run any
>> number of tests. I am currently out of ideas.
>
> I go from my APU this way:
>
> cat5e-->switch (netgear)-->cat5e-->switch (netgear)-->cat6a-->Xeon workstation
>
> Where the path between the first switch and the Xeon is all 10 GbE, but
> this shouldn't matter. I'd bet that you have a bad cable. It's happened
> to me before.
>
> Regards,
> -peter
>
>> $ uname -a
>> OpenBSD apu.liv.io 6.6 GENERIC.MP#4 amd64
>
> I have the same version on my APU and -current on the Xeon.
>
>> $ ifconfig em0
>> em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
>>         lladdr 00:0d:b9:41:70:20
>>         index 1 priority 0 llprio 3
>>         media: Ethernet 1000baseT full-duplex (1000baseT
>> full-duplex,master,rxpause,txpause)
>>         status: active
>>         inet 10.10.1.241 netmask 0xffffff00 broadcast 10.10.1.255
>>
>> Thank you,
>> Livio
>>
>> On 1/30/2020 4:38 PM, Peter J. Philipp wrote:
>>> On Thu, Jan 30, 2020 at 03:43:41PM +0100, livio wrote:
>>>> Dear all,
>>>>
>>>> I am unable to achieve decent throughput with a 1 GigE interface
>>>> (Intel I210) on OpenBSD 6.6. When running iperf3 I get around
>>>> 145Mbit/s.
>>>>
>>>> The config/setup is: APU2c4, Win10 notebook, no switch, Cat.6a cable,
>>>> MTU 1500, 1000baseT, full-duplex, pf disabled, BSD.mp, no custom
>>>> kernel parameters/optimizations.
>>>>
>>>> With an increased MTU of 9000 (on both devices) the throughput is
>>>> around 230-250Mbit/s.
>>>>
>>>> When running the same test with FreeBSD 12.1 on the APU I achieve
>>>> around 940Mbit/s (MTU 1500).
>>>>
>>>> The BIOS has been updated to the latest version (v4.11.0.2). The
>>>> hardware of the device is: https://pcengines.ch/apu2c0.htm
>>>>
>>>> dmesg output:
>>>> https://paste.ee/p/OeRbI
>>>>
>>>> Any input and help is highly appreciated.
>>>>
>>>> Many thanks,
>>>> Livio
>>>>
>>>> PS: I ran the same tests on an APU1c4 with Realtek RTL8111E
>>>> interfaces. The results were lower - around 95Mbit/s.
>>>> https://pcengines.ch/apu1c4.htm
>>>
>>> Hi,
>>>
>>> Without any tuning arguments I get:
>>>
>>> chi# iperf -c beta.internal.centroid.eu
>>> ------------------------------------------------------------
>>> Client connecting to beta.internal.centroid.eu, TCP port 5001
>>> TCP window size: 17.0 KByte (default)
>>> ------------------------------------------------------------
>>> [  3] local 192.168.177.40 port 13242 connected with 192.168.177.2 port 5001
>>> [ ID] Interval       Transfer     Bandwidth
>>> [  3]  0.0-10.0 sec   536 MBytes   449 Mbits/sec
>>>
>>> ... on an APU1C4; could it be you have a slow switch or router? Any
>>> other hardware that could slow yours down?
>>>
>>> I'm happy with this result; the APU1 is not really a powerhouse.
>>>
>>> Regards,
>>>
>>> -peter
>>>
>>>> PPS: Others also seem to have low throughput. None of the tuning
>>>> recommendations I found online substantially improved my results:
>>>> https://www.reddit.com/r/openbsd/comments/cg9vhq/poor_network_performance_pcengines_apu4/
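
[Editor's aside, not from the thread: a fixed TCP window caps throughput
at roughly window/RTT, and the ~145 Mbit/s figure is suspiciously close
to what a small default window would allow. The sketch below is a
back-of-the-envelope check only; the 17 KByte window is the iperf
default quoted in Peter's output, and the 1 ms round-trip time is an
assumed LAN value, not a measurement from this thread.]

```shell
# Throughput ceiling implied by a fixed TCP window: window / RTT.
# Both inputs are illustrative assumptions, not measured values.
window_bytes=$((17 * 1024))   # 17 KByte window (iperf default above)
rtt_s=0.001                   # assumed 1 ms LAN round-trip time
awk -v w="$window_bytes" -v r="$rtt_s" \
    'BEGIN { printf "%.0f Mbit/s\n", w * 8 / r / 1000000 }'
# prints "139 Mbit/s"
```

If the ceiling really is the window, comparing `tcpbench -S` runs with a
larger socket buffer against the defaults would be a cheap way to test it.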
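
[Editor's aside on the MTU 9000 result: jumbo frames barely change the
wire-level overhead, so the jump from ~145 to ~230-250 Mbit/s cannot be
explained by framing efficiency alone; it points at per-packet processing
cost (interrupts/CPU) on the APU. The arithmetic below uses standard
Ethernet/IPv4/TCP header sizes, not anything measured in this thread.]

```shell
# Payload efficiency per frame: (MTU - IP/TCP headers) over
# (MTU + Ethernet framing). 38 B = 14 B header + 4 B FCS + 20 B
# preamble/inter-frame gap; 40 B = IPv4 + TCP headers.
awk 'BEGIN {
    for (mtu = 1500; mtu <= 9000; mtu += 7500) {
        wire = mtu + 38
        payload = mtu - 40
        printf "MTU %4d: %.1f%% payload efficiency\n", mtu, 100 * payload / wire
    }
}'
# prints:
# MTU 1500: 94.9% payload efficiency
# MTU 9000: 99.1% payload efficiency
```

A ~4-point efficiency gain cannot produce a ~65% throughput gain, which
is consistent with the tuning threads linked above blaming interrupt and
CPU limits on the APU rather than the wire.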