Hi Arne,

You are correct, I didn't do a very good job of explaining the code in my blog post. I usually keep those posts short with more screen captures, since I figure not many people would take the time to read through a long writeup there. I also didn't add many comments, but I did try to copy the existing style of the code so that it stays consistent throughout, even if it isn't perfect yet. I'm still testing the change in my own home setup to see whether I run into any bad edge cases along the way.
I can certainly explain the different parts of the code. I am modifying the core read and write paths for both tun and tcp, so it is a big change to the code base.

The change matters to me because of my particular setup and requirements. My WiFi LAN clients all assume a 1500 byte MTU on their side, and my router's WAN link enforces a 1500 byte MTU on the internet side. In the middle of my core network sits a VPN box, and almost every VPN product operates in UDP mode with a sub-1500 MTU at that point in the pipeline. That is not a good design in general: I don't want to waste cycles fragmenting and/or compressing the data into smaller UDP packets.

With the code change I am presenting, I can specify a true 1500 byte MTU on the VPN interface and issue exactly matching 1500 byte read calls against the TUN interface itself. The code base had to be modified to allow this, because it was adding overhead onto the payload_size used in the read call, which I didn't want since I operate on exact multiples of 1500 bytes. With this change, my network pipeline carries a true 1500 byte MTU end to end: from the client side, across the VPN link, out to the internet side, and on to the server side.

In addition, I added the ability to batch multiple 1500 byte read calls (specifically 6 x 1500 bytes into 9000 bytes) into a single encrypt-and-sign call and a single TCP write call. The encryption method then operates only once on a much larger payload, and the Linux kernel can transfer the data efficiently with ordering and delivery guaranteed, as fast as possible.
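To make the batching idea concrete, here is a minimal sketch of what I mean by bulking the TUN reads. It is Python rather than the actual C patch, and everything except the packet sizes (1500 bytes per read, 6 packets per batch) is illustrative: the file-descriptor setup, the function names, and the XOR "cipher" are stand-ins, not code from the patch.

```python
import os

MTU = 1500                     # exact per-packet read size, matching the interface MTU
BULK_COUNT = 6                 # batch up to 6 packets per encrypt + write
BULK_SIZE = MTU * BULK_COUNT   # 9000 bytes total per batch

def bulk_read(fd, mtu=MTU, max_packets=BULK_COUNT):
    """Drain up to max_packets exact-MTU reads from fd into one buffer.

    A bounded for loop (not an unbounded while loop) caps the batch size,
    mirroring the max-limited loop mentioned in the patch discussion.
    """
    chunks = []
    for _ in range(max_packets):
        data = os.read(fd, mtu)    # one read call per 1500-byte packet
        if not data:
            break
        chunks.append(data)
    return b"".join(chunks)

def encrypt_once(payload):
    """Stand-in for a single sign+encrypt pass over the whole batch."""
    return bytes(b ^ 0x5A for b in payload)  # illustrative only, not real crypto

# Simulate the tun device with a pipe carrying six 1500-byte packets.
r, w = os.pipe()
for i in range(BULK_COUNT):
    os.write(w, bytes([i]) * MTU)
os.close(w)

batch = bulk_read(r)             # six exact-size reads...
cipher = encrypt_once(batch)     # ...one encrypt call over all 9000 bytes
os.close(r)
print(len(batch), len(cipher))   # a single TCP write would then carry it all
```

The point of the sketch is the shape of the loop: many small exact-MTU reads feed one large buffer, and the expensive per-call work (encryption, the TCP write syscall) happens once per batch instead of once per packet.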
The code base also had to be modified to allow these larger SSL sign-and-encrypt and TCP read/write operations, because it assumes everywhere that you are operating on only one TUN read's worth of data at a time. This is exactly why I prefer TCP for tunneling encrypted network data: my approach can set a full-sized 1500 byte MTU, perform an exactly matching 1500 byte read call to get the full amount of data from the interface, bulk the data together to encrypt it efficiently, and then use TCP to transfer it all at once without any need for fragmentation or compression.

As far as I am aware, no other VPN product on the market offers this kind of functionality; most use a smaller MTU along with the packet size limitations of UDP. I believe this could be a distinguishing feature for OpenVPN, and it would automatically solve some of the issues people run into when inserting a VPN appliance into the middle of their network setups. I've been running this change on my own setup to at least make sure it works, and it seems to be running nicely so far: I haven't experienced any fragmentation or performance issues, and any traffic coming off the clients' LAN side is now fully handled through the VPN side and onto the WAN and server side.

If this is something you are not interested in, I can understand that. I can stop posting here, and at most submit a pull request in case anyone is interested in this work in the future. It would be nice to contribute to a good quality open source project that I have used for many years, and to something that may help other community members with the small-MTU-plus-UDP problem, which does exist in practice and really hampers connections in some network designs.
I also don't mind explaining the code in detail if you want; I just need to take the time to write out what each part is doing and why. As you can see, I am trying to achieve a very specific and exact design goal that the code base didn't originally allow for, so I had to make some modifications to accomplish it.

Thanks,
Jon C

On Fri, Aug 8, 2025 at 5:53 AM Arne Schwabe <a...@rfc2549.org> wrote:
> Am 07.08.25 um 20:29 schrieb Jon Chiappetta via Openvpn-devel:
> > Thanks to Gert's help on this, I was able to finally configure and
> > compile and run and test the bulk mode changes against the latest git
> > source code to ensure everything still works correctly.
> >
> > I also fixed up some other issues like properly freeing the extra buffer
> > allocations and removing the unneeded batched data prefixes and
> > converting a remaining while loop to a max limited for loop and properly
> > resetting the outgoing tun buffer pointer at the end of the write method
> > when finished.
>
> It would still good to explain what you are trying to achieve here and
> what the idea behind the patch is to be able to review and understand
> your patch.
>
> The patch itself basically has no comments at all, so it is very hard to
> decipher for me from the patch what it is trying to do. Eg there is a
> variable flag_ciph that fiddles with encryption of packets.
>
> You are talking and describing this bulk mode as if it was obvious but
> it is not. The description on your blog says:
>
> > [...] read 8192 bytes off of the client’s TCP sockets directly and
> > proxy them in one write call over TCP directly to the VPN server
> > without needing a tunnel interface with a small sized MTU which
> > bottlenecks reads+writes to <1500 bytes per function call.
>
> It also not helping as you talking about TCP write/reads, where I can
> see some improvement by cutting down the number of reads/writes. But the
> second part then talks about not using a tunnel with a small sized MTU.
> But if you use a larger sized TUN interface with a larger MTU, then you
> already have larger reads/writes to the TCP socket.
>
> Also your speedtest showing 562 is meaningless without having any
> comparison without your patch.
>
> Arne
_______________________________________________ Openvpn-devel mailing list Openvpn-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/openvpn-devel