On Mon, Aug 14, 2023 at 02:07:12AM +0000, Jason Tubnor wrote:
>> Hi,


> Not sure how this can happen. Have you destroyed and recreated the interface
> in between? Can you easily reproduce this?

No, I didn't; it just seemed to drop. It happened twice yesterday, but I have
not been able to get it to error again, even under continuous load.

> I have added a bit more info to the error message, it now also prints the
> iface id and the errno. It would be useful if you can reproduce it with those.

Thanks. Patched across all 3 machines.

Below are metrics/results from today's testing:

Spoke A:
--------
CPU0 states:  1.0% user,  0.0% nice, 27.5% sys,  5.9% spin,  2.9% intr, 62.7% idle
CPU1 states:  0.0% user,  0.0% nice, 26.5% sys,  4.1% spin,  0.0% intr, 69.4% idle
  PID USERNAME PRI NICE  SIZE   RES STATE     WAIT      TIME    CPU COMMAND
45123 root       2    0 1780K 2864K sleep/0   kqread    8:39 20.75% iperf3
 |
 |
\|/
Hub:
----
CPU0 states:  0.0% user,  0.0% nice, 37.4% sys, 16.2% spin,  1.0% intr, 45.5% idle
CPU1 states:  0.0% user,  0.0% nice, 40.2% sys, 10.8% spin,  0.0% intr, 49.0% idle
 |
 |
\|/
Spoke B:
--------
CPU0 states:  0.0% user,  0.0% nice, 23.0% sys,  1.0% spin,  6.0% intr, 70.0% idle
CPU1 states:  0.0% user,  0.0% nice, 30.4% sys,  2.0% spin,  0.0% intr, 67.6% idle
  PID USERNAME PRI NICE  SIZE   RES STATE     WAIT      TIME    CPU COMMAND
32297 _iperf3    2    0 1776K 2688K sleep/0   kqread    4:24 10.55% iperf3

Performance of sending Spoke A through to Spoke B via Hub (iperf3 -t 3600):

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-3600.00 sec   117 GBytes   280 Mbits/sec   85             sender
[  5]   0.00-3600.01 sec   117 GBytes   280 Mbits/sec                  receiver
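
For reference, the runs above were driven with a plain iperf3 client/server
pair along these lines (hostnames/addresses are placeholders; only the -t
values are from the results quoted above):

```sh
# On Spoke B (receiver): run iperf3 as a server
iperf3 -s

# On Spoke A (sender): push traffic to Spoke B's tunnel address,
# routed via the hub, for one hour
iperf3 -c <spoke-b-tunnel-addr> -t 3600
```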

Performance of sending Spoke A to Hub (iperf3 -t 300):

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-300.00 sec  37.9 GBytes  1.08 Gbits/sec   17             sender
[  5]   0.00-300.00 sec  37.9 GBytes  1.08 Gbits/sec                  receiver

Hub load while under iperf3 test with more throughput above:

CPU0 states:  0.0% user,  0.0% nice, 47.0% sys,  6.0% spin,  1.0% intr, 46.0% idle
CPU1 states:  0.0% user,  0.0% nice, 46.6% sys,  4.9% spin,  0.0% intr, 48.5% idle
Memory: Real: 23M/273M act/tot Free: 176M Cache: 89M Swap: 22M/256M

  PID USERNAME PRI NICE  SIZE   RES STATE     WAIT      TIME    CPU COMMAND
 1205 _iperf3    2    0 1016K 1736K sleep/1   kqread    0:05 11.43% iperf3

----

Is there a way to improve the forwarding speed at the hub? Using the same IKEv2
configuration with a VXLAN/VEB/vport setup instead, I can get ~2.5x the
spoke-to-spoke performance. Spoke-to-hub performance is on par.
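
For comparison, the VXLAN/VEB/vport arrangement I mean is roughly the
following on the hub (all addresses and the vnetid here are illustrative, not
my actual config): one vxlan(4) interface per spoke, bridged together with a
local vport(4) in a veb(4):

```sh
# Hub side: one VXLAN tunnel per spoke (outer addresses over the IKEv2 SAs)
ifconfig vxlan0 tunnel 192.0.2.1 192.0.2.2 vnetid 100 up
ifconfig vxlan1 tunnel 192.0.2.1 192.0.2.3 vnetid 100 up

# Local L3 attachment point for the hub itself
ifconfig vport0 inet 10.0.100.1/24 up

# Bridge the tunnels and the vport so spokes can reach each other
ifconfig veb0 add vxlan0 add vxlan1 add vport0 up
```

With that layout, spoke-to-spoke frames are switched in the veb rather than
routed, which may account for part of the throughput difference I'm seeing.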

Cheers
