On Dec 10, 2013, at 12:17 PM, Wang, Baoyuan <baoyuan.w...@tekcomms.com> wrote:
> Thank you for your response. I could not find much information about OVS
> patch ports through a Google search. Most of what I found is about how to
> configure them. Do you have any information related to the design or
> implementation, other than reading the code?

There's not a lot to describe on the implementation side. Before 1.10, if you
created two bridges in OVS, two datapaths would be created in the kernel. The
patch port created a port in one datapath that you could send traffic to, and
the traffic would pop into the other datapath for processing. The
implementation was very simple: it just turned a send on one end into a
receive on the other.

In 1.10, we moved to a single-datapath model: regardless of how many bridges
are created, they share a single datapath in the kernel. With this model, we
were able to optimize patch ports by having ovs-vswitchd figure out what
would happen in both bridges and then push down a single flow into the
datapath.

> I do have some version of the OVS code with me (v1.9.3).

As I mentioned before, the patch port optimization was introduced in 1.10.

> It seems to me that OVS still has to work on multiple flow tables with
> patch ports. It might save one loop compared with a veth pair; that is, the
> patch port directly uses the peer to work on the peer's flow table instead
> of going through the main processing loop. Please correct me, because I am
> not familiar with the details of the OVS design/implementation. My code
> research has been a spot check; for example, I only checked files like
> vport-patch.c and vport.c. For the telecom industry, that extra processing
> on every compute node for every packet adds up quickly.

The optimization saves an extra lookup in the kernel datapath and an extra
trip to userspace to figure out what happens in the second bridge.

--Justin
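
For concreteness, here is a minimal sketch of the two-bridge patch-port setup
discussed above, using the standard ovs-vsctl and ovs-dpctl tools. The bridge
and port names (br0, br1, patch0, patch1) are arbitrary examples, not taken
from the thread:

  # Two OVS bridges connected by a pair of patch ports.
  ovs-vsctl add-br br0
  ovs-vsctl add-br br1
  ovs-vsctl add-port br0 patch0 -- set interface patch0 type=patch options:peer=patch1
  ovs-vsctl add-port br1 patch1 -- set interface patch1 type=patch options:peer=patch0

  # Inspect the kernel side. On 1.9 and earlier this lists one datapath per
  # bridge; on 1.10 and later both bridges share a single datapath, and
  # traffic crossing the patch port is handled by a single datapath flow.
  ovs-dpctl show
  ovs-dpctl dump-flows

Comparing the dump-flows output on 1.9.x and 1.10+ is a quick way to see the
optimization described above: the patch-port hop disappears from the kernel
flow once both bridges are merged into one datapath.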