Hi Samuli,

Yes, that was the document that I read before going down this path.  I
did a bunch of testing and found the optimum MTU range in my setup is
47500 to 52500, and 50000 is probably as close to the peak as necessary.

When I have two nodes, nothing between them, and no routing to
anywhere else, the extra bandwidth from the large MTU gives me roughly
102% of wire-speed throughput when I transfer incompressible data
(derived from /dev/urandom).  I can do this very reproducibly and
predictably.  For the compressibility tests I used two other data
sets: highly compressible data (derived from /dev/zero) and a tarball
(not gzipped, just tar) of the Linux kernel source code.  With the
file created from the uncompressed kernel source, where lzo can do
some work, I regularly get 140-150% of wire speed, and with the
highly compressible data I get just over 400%.  Meaning, I can
transfer all-nulls over a 1 Gbps link as if it were a 4 Gbps link, and
text as if it were a 1.5 Gbps link.  This is good stuff for the
bandwidth-constrained!
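
For what it's worth, the test files were generated along these lines
(the sizes and file/directory names here are just placeholders, not
the exact ones I used):

  # incompressible data
  dd if=/dev/urandom of=random.bin bs=1M count=2048
  # highly compressible data
  dd if=/dev/zero of=zeros.bin bs=1M count=2048
  # moderately compressible data: plain tar of a kernel source tree
  tar -cf linux-src.tar linux-4.4/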

So really the only problem is getting that data back out of OVPN and
onto a physical hardware interface to reach another host.  OVPN is
absolutely awesome otherwise.

Things I have already tried in order to affect performance (gathered
into a sketch after this list):
- Toggle comp-lzo on/off.  No effect.
- Change the encryption algorithm, including changing it to "none".
  No effect.
- Change the sndbuf/rcvbuf socket buffers to 425984 (roughly double
  the defaults).  No effect.
- Change tun-mtu.  Any value too far from my existing 50000 causes
  performance degradation.
- Change txqueuelen on all interfaces to 1000000.  Slight effect,
  maybe 5-10% gains (possibly trimmed 1-2 s off a ~20 s transfer).
- Turn off checksum offloading on all interfaces: ethtool -K {dev}
  rx off tx off.  No effect.
- No iptables running at all on anything.
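
To make that concrete, this is roughly what those knobs look like in
one place (eth0 is a placeholder for whatever NIC is actually in use):

  # OpenVPN config fragments, with the values as tested above
  tun-mtu 50000
  comp-lzo            # toggled on and off
  cipher none         # also tried real ciphers
  sndbuf 425984
  rcvbuf 425984

  # per-interface tweaks
  ip link set dev eth0 txqueuelen 1000000
  ethtool -K eth0 rx off tx off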

Again, keep in mind that full wire speed is achieved with no VPN at
all, and also with IPsec.

So, to reason this through a little:
- No vpn = fast.
- IPSec = fast.
- OVPN->OVPN = even faster
- OVPN->OVPN->routing/hardware = slow, down to about 200Mbps from 1+Gbps.

I'm wondering whether the bottleneck is in the tun/tap driver's
ability to copy data out to the hardware device, or into the buffers
to be processed for routing.

BTW, the host OS on all of these is Ubuntu Server 16.04.3, kernel
4.4.0-104-generic, and the tun driver is built into the kernel, not a
module.  I have not experimented with other kernel versions.
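
Would something like the following be the right way to narrow that
down?  (tun0/eth0 stand in for my actual interfaces, and I have not
run these during a transfer yet.)

  # watch where CPU time goes during a transfer
  perf top
  # look for drops/overruns on the tun device and the physical NIC
  ip -s link show tun0
  ip -s link show eth0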

Thanks for any thoughts,
Tom


On 01/05/2018 01:08 AM, Samuli Seppänen wrote:
> This does not answer your question, either, but there are more details
> on OpenVPN performance optimization here:
>
> <https://community.openvpn.net/openvpn/wiki/Gigabit_Networks_Linux>
>
> Increasing the MTU helps as it reduces the number of user<->kernel-space
> switches.
>



