> On Feb 7, 2016, at 2:12 AM, Dragos Ilie <dragos.i...@bth.se> wrote:
>
> The explanation I've seen is that the OVS patch interface is optimized for
> Open vSwitch. I would like to understand what is being optimized.
>
> I've seen a reply on the OVS mailing list that says OVS patch ports are
> implemented entirely inside OVS userspace. I don't understand how this is
> done without a performance penalty. I thought that as soon as a VM sends a
> packet to its vNIC, the packet crosses from user space to kernel space
> over the TAP interface. Eventually the packet reaches the OVS bridge
> (br-int, for example). If at that point the packet must be sent to the next
> OVS bridge over a patch port, does that mean it crosses back to user space?
> That would incur a performance hit.
You are correct that that would incur a performance hit, but that's not how it's implemented.

> I am hypothesizing that perhaps the patch port is just a configuration
> construct to tell the OVS kernel module that the ports on two OVS bridges
> are connected. Then, somehow, the kernel module is able to forward the
> packets between the two bridges more efficiently than over a veth pair. It
> would be nice if somebody could confirm whether this is the correct
> explanation or if there is a better one.

That's basically correct. Regardless of how many bridges you configure in OVS, the kernel module only ever instantiates one. If two bridges are connected by a patch port, then when userspace is processing an incoming packet and determines that the packet crosses a patch port, userspace also calculates what happens in the next bridge and pushes down a kernel flow that directly connects the two ports. This yields significantly better performance than a veth pair.

You can see this yourself by setting up a couple of bridges connected with patch ports, running some traffic, and then running "ovs-dpctl dump-flows".

--Justin

_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
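(For anyone wanting to reproduce the experiment above, here is a minimal sketch. The bridge and port names br0/br1 and patch0/patch1 are illustrative, and it assumes Open vSwitch is installed with the kernel datapath loaded and that you have root privileges.)

```shell
# Create two bridges and connect them with a patch-port pair.
# Each patch interface names the other as its peer.
ovs-vsctl add-br br0
ovs-vsctl add-br br1
ovs-vsctl add-port br0 patch0 -- \
    set interface patch0 type=patch options:peer=patch1
ovs-vsctl add-port br1 patch1 -- \
    set interface patch1 type=patch options:peer=patch0

# After running some traffic through the bridges, dump the kernel
# (datapath) flows. The flows go straight from the ingress port to the
# egress port -- the patch ports never appear as an intermediate hop,
# because userspace collapsed both bridges into a single datapath flow.
ovs-dpctl dump-flows
```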