I ran some tests with the OpenStack Havana release, using the ML2 plugin and the OVS agent.
Configuration on the compute node:
# ovs-vsctl -V
ovs-vsctl (Open vSwitch) 2.0.0
# uname -r
3.2.0-41-generic

I started two VMs on the same compute node, each with an interface on the
same network segment.
By default, a veth pair is used between the Linux bridge (qbr) and the
OVS bridge (br-int) for each VM interface.
I ran a simple netperf TCP test between the two VMs and got a throughput of 2 Gb/s.
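For reference, the test was along these lines (the 10.0.0.4 address of the
server VM is only an example):

  server VM$ netserver
  client VM$ netperf -H 10.0.0.4 -t TCP_STREAM -l 30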

I replayed this test after replacing the veth interfaces with internal OVS ports.
The throughput increased to 13 Gb/s.
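Roughly, the change on the compute node looks like this (the qvoXXX/qbrXXX
names are only placeholders for the per-VM interfaces; deleting one end of
the veth pair removes both ends):

  # ovs-vsctl del-port br-int qvoXXX
  # ip link delete qvoXXX
  # ovs-vsctl add-port br-int qvoXXX -- set Interface qvoXXX type=internal
  # ip link set qvoXXX up
  # brctl addif qbrXXX qvoXXX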

A patch [1] has already been proposed to use an internal OVS port instead of a veth pair.

[1] https://review.openstack.org/#/c/46911/

Édouard.

On Tue, Dec 10, 2013 at 10:15 PM, Justin Pettit <jpet...@nicira.com> wrote:
> On Dec 10, 2013, at 12:17 PM, Wang, Baoyuan <baoyuan.w...@tekcomms.com> wrote:
>
>> Thank you for your response.  I could not find much information about the OVS 
>> patch port via a Google search; most of the results talk about how to configure 
>> it.  Do you have any information related to the design/implementation other 
>> than reading the code?
>
> There's not a lot to describe on the implementation.  Before 1.10, if you 
> created two bridges in OVS, two datapaths would be created in the kernel.  
> The patch port would create a port that you could send traffic to in one 
> datapath and it would pop into the other datapath for processing.  The 
> implementation was very simple--it would just turn the send on one end into a 
> receive on the other.
>
> In 1.10, we went to a single datapath model where regardless of how many 
> bridges were created, they would share a single datapath in the kernel.  With 
> this model, we were able to optimize patch ports by having ovs-vswitchd 
> figure out what would happen in both bridges, and then push down a single 
> flow into the datapath.
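For reference, a patch port pair between two bridges is configured roughly
as follows (br0/br1 and the patch port names are only examples):

  # ovs-vsctl add-port br0 patch0 -- set Interface patch0 type=patch options:peer=patch1
  # ovs-vsctl add-port br1 patch1 -- set Interface patch1 type=patch options:peer=patch0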
>
>> I do have some version of OVS code with me (v1.9.3).
>
> As I mentioned before, the patch port optimization was introduced in 1.10.
>
>> It seems to me that OVS still has to work on multiple flow tables with patch 
>> ports.  It might save one loop compared with a veth pair; that is, the patch 
>> port directly uses the peer to work on the peer's flow table instead of going 
>> through the main processing loop.  Please correct me, because I am not familiar 
>> with the detailed OVS design/implementation.  My code research has been a spot 
>> check; for example, I only checked files like vport-patch.c and vport.c.  For 
>> the telecom industry, that extra processing on every compute node for every 
>> packet will add up quickly.
>
> The optimization saves an extra lookup in the kernel datapath and an extra 
> trip to userspace to figure out what happens in the second bridge.
>
> --Justin
>
>
_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
