Hi,

I have a star topology network in which dozens of spokes communicate
with each other through a central hub, over GRE tunnels protected with
transport-mode IPsec.
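
In case it helps to illustrate the setup, one hub-spoke pair looks
roughly like this (OpenBSD; all addresses and interface numbers below
are made up for the example):

    # /etc/hostname.gre0 on the hub (192.0.2.1), one gre(4) per spoke
    tunnel 192.0.2.1 203.0.113.10
    inet 10.255.0.1 255.255.255.252
    up

    # /etc/ipsec.conf on the hub, protecting the GRE traffic
    ike esp transport proto gre from 192.0.2.1 to 203.0.113.10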

This worked great for years, but recently all the locations got a
bandwidth upgrade (spokes: 10Mbit/s -> 50Mbit/s, hub: 2x200Mbit/s ->
2x500Mbit/s), and I'm starting to experience problems.

Spokes are APU4D4s, and my tests show they can push up to 30Mbit/s of
IPsec traffic bidirectionally. The hub is an HPE DL360 Gen9 with a Xeon
E5-2623 v4 @ 2.60GHz and bge(4) NICs, and it seems it can push no more
than 200Mbit/s of IPsec bidirectionally (I have no way to test this
thoroughly in a lab, but what I see in production strongly indicates
it).
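
If a maintenance window ever allows a quick measurement, my plan would
be something along these lines with tcpbench(1), run through the
tunnel (10.255.0.1 standing in for the hub's tunnel address):

    # on the hub, start a listener
    tcpbench -s

    # on a spoke, push traffic through the GRE/IPsec tunnel for 30s
    tcpbench -t 30 10.255.0.1

    # a second client running in the reverse direction at the same
    # time would approximate the bidirectional figure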

Are there any commands I can run which would indicate that IPsec
traffic is being throttled because the hardware is underspecced? top
shows the CPU is more than 50% idle. netstat shows ~10000 Ierrs / Ifail
(no Oerrs / Ofail) on the interfaces that carry IPsec traffic, over two
months' worth of uptime.
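
The figures above come from top(1) and netstat -i; if other counters
would help, these are the ones I can easily collect and report back:

    # per-interface packet and failure counters
    netstat -i

    # ESP-level statistics (drops, crypto failures)
    netstat -s -p esp

    # interrupt rate per device, to see whether one NIC is saturated
    # even while the CPU looks mostly idle overall
    vmstat -i

    # network livelock counter; a steadily climbing value would
    # suggest the box cannot keep up with network interrupt load
    sysctl kern.netlivelocks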

Would replacing the Xeon box with an AMD EPYC 7262 likely result in an
improvement? Should I go for NICs other than bge(4)? What hardware do I
need at the hub location to accommodate ~400Mbit/s of IPsec
bidirectionally?

Thank you in advance,


-- 
Before enlightenment - chop wood, draw water.
After  enlightenment - chop wood, draw water.

Marko Cupać
https://www.mimar.rs/
