Thank you Jordan for looking into this! I ran the test with tcpbench for 2 min (pf+isakmpd+lacp) and the results were much the same, in fact slightly better:

Peak Mbps: 3588.227
Avg Mbps:  3533.040
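For reference, the runs were along these lines (a sketch, not the exact invocation; 10.0.0.1 is a placeholder for the firewall's address):

```shell
# On the node under test: start tcpbench in server mode
tcpbench -s

# On the client: run for 120 seconds (-t 120), reporting every
# 10 seconds (-r 10000, interval in milliseconds)
tcpbench -t 120 -r 10000 10.0.0.1
```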
I cannot run all four tests right away, as this is a production node and I currently have no alternative hardware to set up a test rig on, but I plan to do the full round of testing on different server hardware in about a month's time, once the hardware arrives.

Also worth noting: when I had a similar load in a real-life situation (some database witchcraft uses as much bandwidth as it can get), once the production node reached ~3.5Gbps there really was no bandwidth left to serve essential services like the relayd checks. This is what initially led me down this rabbit hole of benchmarking the bandwidth and trying to figure out the bottleneck. :)

Best regards,
Kalle

On Wed, May 6, 2020 at 10:41 AM Jordan Geoghegan <jor...@geoghegan.ca> wrote:
>
>
> On 2020-05-04 06:42, Kalle Kadakas wrote:
> > Greetings OpenBSD community,
> >
> > I am running into severe bandwidth limitations whilst passing traffic
> > through an OpenBSD firewall.
> > The NIC in use is an Intel 10Gb 2-port X520 adapter from which I would
> > hope to pass through at least 7Gbps+, yet the best results I have
> > gotten is only around 3.5Gbps.
> >
> > The results of bandwidth measurements (iperf for 30sec...
>
> As has been discussed on misc previously, iperf is not suitable for
> benchmarking networking throughput on OpenBSD. It ends up just
> benchmarking the gettimeofday syscall (something that is cheap on Linux,
> but relatively expensive on OpenBSD I'm told). For best results, use
> tcpbench for your OpenBSD networking benchmarks.
>
>