Many thanks, Luigi! We are measuring network performance in a VM (Hyper-V), 
using the netvsc virtual NIC and its own driver. The Linux VM uses a similar 
virtual device. The drivers on both Linux and FreeBSD have TSO/LRO support. 
With just one network queue, we found that throughput is higher on Linux 
(around 2.5 - 3 Gbps) than on FreeBSD (just around 1.6 Gbps) on a 10Gb NIC. 
With the INVARIANTS option disabled, FreeBSD can achieve 2 - 2.3 Gbps. We also 
observed a much higher interrupt rate on FreeBSD.
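For reference, the numbers above are from single-stream iperf runs. A sketch of 
the kind of test and checks we ran (interface names and flags are illustrative, 
not the exact invocation):

    # receiver VM
    iperf -s
    # sender VM: one TCP stream for 30 seconds
    iperf -c <receiver-ip> -t 30
    # watch per-device interrupt rates during the run
    vmstat -i
    # for the non-debug numbers, the kernel was rebuilt with:
    #   nooptions INVARIANTS
    #   nooptions INVARIANT_SUPPORT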

Thanks for all the suggestions. Do you think netmap could help in this case?

Wei


From: rizzo.un...@gmail.com [mailto:rizzo.un...@gmail.com] On Behalf Of Luigi 
Rizzo

> Hi,
>
> I am working on network driver performance for Hyper-V. I noticed that the 
> network interrupt rate on FreeBSD is significantly higher than on Linux in 
> the same Hyper-V environment. The iperf test also shows that FreeBSD 
> performance is not as good as Linux. Linux has NAPI built in, which can avoid 
> a lot of interrupts on a heavily loaded system. I am wondering if FreeBSD 
> also supports NAPI in its network stack?
>
> Also, any thoughts on network performance in general?

I suppose you are referring to network performance in a VM, since the factors 
that impact performance there are different from those on bare metal.
The behaviour of course depends a lot on the NIC and backend that you are 
using, so if you could be more specific (e1000? virtio?), that would help.

Please talk to me (even privately if you prefer), because we have done a lot of 
work on enhancing performance in a VM; it covers qemu, xen and bhyve, and is 
surely applicable to Hyper-V as well. And while the use of netmap/VALE gives up 
to a 5-10x performance boost, there is another factor of 2-5 that can be gained 
even without netmap. Details at info.iet.unipi.it/~luigi/research.html

On the specific NAPI question:
we do not have NAPI, but in some NIC drivers the interrupt service routine will 
spin until it runs out of work, which helps reduce load.
We often rely on interrupt moderation on the NIC to reduce interrupt rates and 
give work to the ISR in batches. Unfortunately, moderation is often not 
emulated in hypervisors (e.g. we pushed e1000 interrupt moderation into qemu a 
couple of years ago).
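Where moderation is available (bare metal, or a hypervisor that emulates it), 
it is usually controlled through driver tunables. As a sketch for em(4) (the 
exact knob name varies by driver and FreeBSD version, so treat this as an 
assumption to verify against your driver's man page):

    # /boot/loader.conf: cap em(4) at roughly 8000 interrupts/s
    hw.em.max_interrupt_rate="8000"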

An alternative mechanism (supported in some of our network drivers, and trivial 
to add to others) is "device polling", which I introduced some 15 years ago and 
which finds a new meaning in a VM world because it removes device interrupts 
and polls the NIC on timer interrupts instead.
This circumvents the lack of interrupt moderation and gives surprisingly good 
results. The caveat is that you need a reasonably high HZ value to avoid 
excessive latency, and the default HZ=1000 is sometimes turned down to 100 in a 
VM. You should probably override that.
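As a concrete sketch (assuming a driver with polling support such as em(4); the 
interface name is illustrative, see polling(4) for the full set of knobs):

    # kernel config: compile in polling support and keep HZ high
    options DEVICE_POLLING
    options HZ=1000

    # enable polling per interface at runtime
    ifconfig em0 polling

    # optional: tune the per-tick work budget
    sysctl kern.polling.burst_max=150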

Depending on the performance tests you run, there might be other things that 
cause performance differences, such as support for TSO/LRO offloading in the 
backend (usually virtio, or whatever your backend is), which lets the guest VM 
ship large 64 KB segments through the software switch.
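For example, whether the guest interface advertises and uses these offloads can 
be checked and toggled with ifconfig (hn0 here stands in for the Hyper-V netvsc 
interface; adjust to your setup):

    # show the interface's offload capabilities and enabled options
    ifconfig hn0
    # enable TSO and LRO if the driver supports them
    ifconfig hn0 tso lro
    # disable them to compare, when isolating the backend's role
    ifconfig hn0 -tso -lro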

cheers
luigi

