Adding freebsd-net in case they can provide some feedback or tips about how to debug this.
On Tue, Jun 18, 2019 at 04:03:00PM +0200, Christian M wrote:
> I've noticed very slow networking speed between VMs with FreeBSD on the
> same host (XCP-ng 7.6.0) for more recent FreeBSD versions.

Sadly, inter-VM throughput has always been a problem for FreeBSD/Xen VMs.
I'm not a network expert, so take my comments below with a pinch of salt.

> I've made some tests that show me that something happened from 10.4-RELEASE
> to 11.0-RELEASE that had a huge impact on network performance, and
> 12.0-RELEASE is even slower.

I don't think there have been any major changes; the main one would be the
rewrite to add multiqueue support to netfront, but that change should have
left the code more or less as it was, apart from adding multiqueue.

> My test setup:
>
> Host: XCP-ng 7.6.0, managed with XenOrchestra. Open source.
>
> Network: internal private network on the host (not connected to a PIF).
> Each VM has only one VIF connected to this network.
>
> VMs:
> 2 x 12.0-RELEASE
> 2 x 11.0-RELEASE
> 2 x 10.4-RELEASE
>
> All clean, identical installs from XenOrchestra; only iperf installed on
> each VM for testing. (xe-guest-utilities makes no difference in my tests;
> I've tried with and without.)

I think xe-guest-utilities is just needed in order to report suspend/resume
capability to XCP, but there isn't anything especially helpful in there.

> iperf -s on the first server listed below, and iperf -c <ip> -r on the
> second, to test speed back and forth:
>
> 12.0 <-> 12.0: 50Mbit/s as client and server
> 12.0 <-> 11.0: 800Mbit/s (11.0 as client), and 140Mbit/s (11.0 as server)
> 12.0 <-> 10.4: 2.76Gbit/s (10.4 as client), and 1.25Gbit/s (10.4 as server)
> 11.0 <-> 11.0: 219Mbit/s as client, 99Mbit/s as server
> 10.4 <-> 10.4: 11.2Gbit/s as client, 10.9Gbit/s as server

Do you see the same issues with external connections? Have you tested
throughput between two FreeBSD 12.0 VMs running on different hosts?
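For reference, the cross-host comparison suggested here would look something like the following (the server address 172.31.16.200 is a placeholder, not from the thread):

```shell
# On a FreeBSD 12.0 VM running on host A: start the iperf server.
iperf -s

# On a FreeBSD 12.0 VM running on host B: run the same bidirectional
# test used for the intra-host numbers, so results are comparable.
iperf -c 172.31.16.200 -r
```

If the cross-host numbers are sane, that would point at the intra-host (netback/netfront loopback) path rather than the FreeBSD network stack in general.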
> As a side note, not sure if related, but I've noticed that I can't run
> iperf with the -r flag on 10.4-RELEASE. I get this error message:
>
> iperf -c 172.31.16.122 -r
> ------------------------------------------------------------
> Server listening on TCP port 5001
> TCP window size: 64.0 KByte (default)
> ------------------------------------------------------------
> write failed: Broken pipe
> ------------------------------------------------------------
> Client connecting to 172.31.16.122, TCP port 5001
> TCP window size: 32.5 KByte (default)
> ------------------------------------------------------------
> [  5] local 172.31.16.121 port 19231 connected with 172.31.16.122 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  5]  0.0- 0.0 sec  0.00 Bytes   0.00 bits/sec

Hm, OK, that's weird; I don't think it's related to Xen, however. Does the
same happen on a bare-metal install of FreeBSD, or when running on a
different hypervisor?

> I can run iperf -s fine, and iperf -c <ip> from the other 10.4 VM, though:
>
> iperf -c 172.31.16.122
> ------------------------------------------------------------
> Client connecting to 172.31.16.122, TCP port 5001
> TCP window size: 32.5 KByte (default)
> ------------------------------------------------------------
> [  3] local 172.31.16.121 port 22055 connected with 172.31.16.122 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec  12.9 GBytes  11.1 Gbits/sec
>
> What have I tried to solve this?
>
> I've tried to disable checksum offloading for the 12.0-RELEASE VIFs via
> XCP-ng. Disabled basically everything, without any difference in iperf
> results: other-config (MRW): ethtool-sg: off; ethtool-tso: off;
> ethtool-ufo: off; ethtool-gso: off; ethtool-rx: off; ethtool-tx: off
>
> Also tried disabling offloading in FreeBSD with ifconfig xn0 -txcsum
> -rxcsum -tso -lro, and no difference here either.
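One thing worth double-checking (my suggestion, not from the thread) is that the offload flags actually took effect on the guest side; ifconfig silently ignores flags the driver does not support:

```shell
# Inside the FreeBSD guest: disable offloads on the Xen netfront
# interface (xn0, as in the thread).
ifconfig xn0 -txcsum -rxcsum -tso -lro

# Verify: the "options=" line should no longer list
# TXCSUM, RXCSUM, TSO4, or LRO.
ifconfig xn0 | grep options
```

If the flags reappear after a reboot or migration, they would need to be set persistently (e.g. via ifconfig_xn0 in /etc/rc.conf).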
Hm, disabling offloading would be my first suggestion, but you seem to have
already done that.

> Any ideas of how to proceed now to find a solution for this?

Maybe you can run wireshark/tcpdump or similar software to try to detect
whether there are errors on the transmitted packets? You could run the
sniffer on the host and attach it to the backend interfaces (vifX.X), or to
the bridge if you are using bridged networking. The 12.0 <-> 12.0 case
seems quite bad, so I would start with that one.

Roger.

_______________________________________________
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
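A rough sketch of the sniffing approach suggested above, run in dom0 on the XCP-ng host (the domain ID 5 and bridge name xapi0 are placeholders; the real names depend on the setup):

```shell
# On the XCP-ng host (dom0): capture on the backend interface of the
# 12.0 guest. vifX.Y means domain ID X, virtual interface Y; look up
# the domain ID first (e.g. with "xl list").
tcpdump -i vif5.0 -v -c 100

# Alternatively, attach to the bridge carrying the private network and
# watch the iperf traffic itself (port 5001 is iperf's default).
tcpdump -i xapi0 -v tcp port 5001
```

Retransmissions, bad checksums, or unexpectedly small segments in the capture would narrow down whether the slowdown is in netfront, netback, or the TCP stack.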