Hi everyone:
        We are interested in testing the performance of the memnic driver
posted at http://dpdk.org/browse/memnic/refs/.  We want to compare its
performance against other techniques for transferring packets between the
guest and the kernel, predominantly for VM-to-VM transfers.

We have downloaded the memnic components and have it running in a guest VM.

The question we hope this group might be able to help with is: what would be
the best way to process the packets in the kernel to get a VM-to-VM transfer?

A few options might be possible:


1.       Common shared buffer between the two VMs, with some utility/code to
switch the TX and RX rings between them (see the rough sketch after this list
of options).

VM1 application --- memnic  ---  common shared memory buffer on the host --- 
memnic  ---  VM2 application

2.       Special-purpose kernel switching module

VM1 application --- memnic  ---  shared memory VM1  --- Kernel switching module 
 --- shared memory VM2  --- memnic  ---  VM2 application

3.       Existing kernel switching module

VM1 application --- memnic  ---  shared memory VM1  --- existing kernel
switching module (e.g. OVS / Linux bridge / veth pair)  --- shared memory VM2
 ---  memnic  ---  VM2 application
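
For reference, below is a very rough sketch of what the user-space forwarder
in option 1 might look like: a host process that maps each VM's shared memory
region and copies frames from one VM's TX ring into the other VM's RX ring.
This is only an illustration under some assumptions -- the vm_pkt/vm_ring
structures and the /dev/shm file names are hypothetical placeholders, not the
actual MEMNIC shared-memory layout, which would have to come from the memnic
sources.

/* Hypothetical sketch of a host-side VM-to-VM forwarder (option 1).
 * NOTE: vm_pkt/vm_ring are illustrative placeholders, NOT the real
 * MEMNIC shared-memory format; the /dev/shm paths are assumed. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define RING_SLOTS 256
#define MAX_FRAME  2048

struct vm_pkt {                 /* one packet slot in shared memory */
    volatile uint32_t used;     /* set by producer, cleared by consumer */
    uint32_t len;
    uint8_t  data[MAX_FRAME];
};

struct vm_ring {                /* one direction of a guest's traffic */
    struct vm_pkt slot[RING_SLOTS];
};

struct vm_shm {                 /* shared region exposed to one guest */
    struct vm_ring tx;          /* guest -> host */
    struct vm_ring rx;          /* host  -> guest */
};

static struct vm_shm *map_vm(const char *path)
{
    int fd = open(path, O_RDWR);
    if (fd < 0) { perror(path); return NULL; }
    void *p = mmap(NULL, sizeof(struct vm_shm),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    return (p == MAP_FAILED) ? NULL : p;
}

/* Copy any frames waiting in src's TX ring into dst's RX ring. */
static void forward(struct vm_shm *src, struct vm_shm *dst)
{
    for (unsigned i = 0; i < RING_SLOTS; i++) {
        struct vm_pkt *in  = &src->tx.slot[i];
        struct vm_pkt *out = &dst->rx.slot[i];
        if (in->used && !out->used) {
            out->len = in->len;
            memcpy(out->data, in->data, in->len);
            __sync_synchronize();      /* publish data before the flag */
            out->used = 1;
            in->used  = 0;
        }
    }
}

int main(void)
{
    /* Assumed file names: whatever backs each guest's memnic device. */
    struct vm_shm *vm1 = map_vm("/dev/shm/vm1_memnic");
    struct vm_shm *vm2 = map_vm("/dev/shm/vm2_memnic");
    if (!vm1 || !vm2)
        return 1;
    for (;;) {                  /* busy-poll both directions */
        forward(vm1, vm2);
        forward(vm2, vm1);
    }
    return 0;
}

The same polling loop is roughly what a special-purpose module in option 2
would do inside the kernel instead of in a user-space process.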

Can anyone recommend which approach might be best or easiest?  We would like
to avoid writing much (or any) kernel code, so if there is already any open
source code or a test utility that provides one of these options, or that
would be a good starting point, a pointer would be much appreciated.

Thanks in advance

John Joyce
