On 05/28/12 12:12, Luigi Rizzo wrote:
> I am doing some experiments with implementing a software bridge
> between virtual machines, using netmap as the communication API.
>
> I have a first prototype up and running and it is quite fast (10 Mpps
> with 60-byte frames, 4 Mpps with 1500-byte frames, compared to the
> ~500-800 Kpps @ 60 bytes that you get with the tap interface used by
> Open vSwitch or native Linux bridging).

That is awesome!
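For anyone who hasn't looked at netmap: the speed comes from batching many packets per syscall and forwarding by swapping buffer indices instead of copying payloads. Here is a much-simplified illustrative model of that forwarding loop; the `slot`/`ring` structs and `forward_batch` are invented for the example and are not the actual netmap API:

```c
#include <assert.h>
#include <stdint.h>

#define RING_SIZE 8              /* slots per ring */
#define NBUFS     (2 * RING_SIZE)
#define BUF_SIZE  2048

/* One packet slot: an index into a shared buffer pool plus a length.
 * netmap uses the same idea, so "forwarding" a packet is just
 * swapping buffer indices, never copying payload bytes. */
struct slot {
    uint32_t buf_idx;
    uint16_t len;
};

struct ring {
    struct slot slot[RING_SIZE];
    uint32_t head;               /* next slot the consumer reads  */
    uint32_t tail;               /* next slot the producer writes */
};

static char pool[NBUFS][BUF_SIZE];   /* shared packet buffer pool */

/* Move every pending packet from rx to tx by swapping buffer
 * indices. The whole batch is handled in one pass, so a single
 * (hypothetical) kernel sync per call amortizes over all packets. */
static int forward_batch(struct ring *rx, struct ring *tx)
{
    int forwarded = 0;
    while (rx->head != rx->tail) {
        struct slot *rs = &rx->slot[rx->head % RING_SIZE];
        struct slot *ts = &tx->slot[tx->tail % RING_SIZE];
        uint32_t tmp = ts->buf_idx;      /* zero-copy: swap indices */
        ts->buf_idx = rs->buf_idx;
        ts->len     = rs->len;
        rs->buf_idx = tmp;
        rx->head++;
        tx->tail++;
        forwarded++;
    }
    return forwarded;
}
```

The per-batch cost is a constant number of pointer and index operations, which is why throughput scales so much better than one syscall plus one copy per packet.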

>    - and of course, using PCI passthrough you get more or less hw speed
>      (constrained by the OS), but you need support from an external switch
>      or the NIC itself to do forwarding between different ports.
>    anything else?

In terms of PCI passthrough / SR-IOV, there are the emerging/competing
EVB and VEPA standards, which allow VM<->VM traffic to go out on the
wire to a "real" switch and then back to the correct VM.

> * any high-performance virtual switching solution around?
>    As mentioned, I have measured native Linux bridging and in-kernel OVS,
>    and the numbers are above (not surprising; the tap involves a syscall
>    per packet if I am not mistaken, and internally you need a data copy).

You should probably compare to ESXi.  I've seen ~1 Mpps going to or from
1..N VMs, in or out a port on a 10GbE interface, with ESX4 and newer.
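The gap between those numbers and netmap's is roughly what back-of-the-envelope math predicts: a fixed per-syscall cost divided by the batch size, plus a per-packet cost that shrinks when the copy goes away. The overhead figures below are illustrative guesses, not measurements:

```c
#include <assert.h>

/* Throughput ceiling imposed by a fixed per-syscall cost when each
 * syscall carries 'batch' packets. All costs in nanoseconds and
 * purely illustrative. */
static double pps_ceiling(double syscall_ns, double per_pkt_ns, int batch)
{
    double cost_per_pkt = syscall_ns / batch + per_pkt_ns;
    return 1e9 / cost_per_pkt;   /* packets per second */
}

/* Tap-style path, one syscall per packet plus a copy:
 *   pps_ceiling(1500.0, 500.0, 1)   -> 500000, i.e. ~0.5 Mpps,
 *   the same ballpark as the tap/bridging numbers quoted above.
 *
 * Batched, zero-copy path, 512 packets per syscall and a much
 * smaller per-packet cost:
 *   pps_ceiling(1500.0, 80.0, 512)  -> roughly 12 Mpps, the
 *   ballpark netmap reaches with small frames. */
```

The point is that batching alone amortizes the syscall, but you also need to eliminate the per-packet copy to get into the 10 Mpps range.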

Drew
_______________________________________________
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
