On 2023-08-11, Marko Cupać <marko.cu...@mimar.rs> wrote:
> Hi,
>
> I have star topology network where dozens of spokes communicate with
> other spokes through central hub over GRE tunnels protected with
> transport-mode ipsec.
>
> This worked great for years, but lately all the locations got bandwidth
> upgrade (spokes: 10Mbit -> 50Mbit, hub: 2x200Mbit -> 2x500Mbit), and I'm
> starting to experience problems.
>
> Spokes have APU4D4s, and my tests show they can push up to 30Mbit/s of
> ipsec bidirectionally. Hub has HPE DL360g9 with Xeon CPU E5-2623 v4 @
> 2.60GHz and bge NICs, and it seems it can push no more than 200Mbit/s
> of ipsec bidirectionally (I have no chance to test this thoroughly in a
> lab, but what I see in production indicate this strongly).

If possible, I suggest putting a fast client machine (laptop or
server) on a local network near the server and doing some tests that
way.
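As a sketch of how that test could look, using OpenBSD's tcpbench(1)
(addresses and durations below are placeholders): run a sink on the far
side of the tunnel, push traffic through it, then repeat against a
plain (non-IPsec) address to isolate the crypto cost.

    # On the machine behind the hub (placeholder address 10.0.0.1):
    tcpbench -s

    # On the test client behind a spoke, through the GRE/ipsec tunnel:
    tcpbench -t 30 10.0.0.1

    # Then repeat against the hub's plain LAN address to get a
    # baseline without ipsec in the path, and compare the two rates.
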

If you post your IPsec configuration, perhaps someone can suggest
whether the choice of ciphers etc could be improved. It can make quite a
difference.

> Are there any commands I can run which would indicate ipsec traffic is
> being throttled due to hardware being underspecced? top shows CPU is
> more than 50% idle. netstat shows ~10000 Ierrs / Ifail (no Oerrs /
> Ifail) on interfaces that deal with ipsec for two months worth of
> uptime.
>
> Would replacing Xeon box with AMD EPYC 7262 likely result in an
> improvement? Should I go for some NICs other than bge? What hardware do
> I need at Hub location to accommodate ~400Mbit/s of ipsec
> bidirectionally?
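On the "is it throttled" question: much of the IPsec work happens in
kernel context, so an aggregate "more than 50% idle" in top can hide
one saturated core. A few things worth looking at (standard OpenBSD
tools, no special setup assumed):

    # Kernel threads and per-CPU state lines; look for one CPU pegged
    # in interrupt/system time while the others sit idle:
    top -S

    # ESP protocol counters, which break out drops and crypto errors
    # (your Ierrs may show up here rather than as link problems):
    netstat -s -p esp

    # Interrupt rates per device, to see how busy the bge NICs are:
    vmstat -i

    # Live per-interface traffic and error counters:
    systat ifstat
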

I doubt the NIC choice will be hugely important; in terms of overall
network traffic, pretty much anything should be able to cope with the
available bandwidth.

EPYC is certainly a bunch faster than the 2016 Xeon, but the 'Jaguar'
AMDs in the APUs are going to be the slowest point overall.

You also don't mention which OpenBSD versions you're using. That could
make quite a difference.

-- 
Please keep replies on the mailing list.
