On Tue, Dec 03, 2013 at 12:19:02PM +0100, Thomas Graf wrote:
> Based on the initial patch by Cong Wang posted a couple of months
> ago.
>
> This is the user space counterpart needed for the kernel patch
> '[PATCH net-next 3/8] openvswitch: Enable memory mapped Netlink i/o'
>
> Allows the kernel to construct Netlink messages on memory mapped
> buffers and thus avoids copying. The functionality is enabled on
> sockets used for unicast traffic.
>
> Further optimizations are possible by avoiding the copy into the
> ofpbuf after reading.
>
> Signed-off-by: Thomas Graf <tg...@redhat.com>
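
For reference, the per-socket ring setup this enables looks roughly like the example in the kernel's Documentation/networking/netlink_mmap.txt. The sketch below is only illustrative (map_rings is a made-up helper, not code from the patch) and uses that documentation's example geometry on a CONFIG_NETLINK_MMAP kernel; the block and frame sizes the patch actually picks may differ:

    #include <sys/mman.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <linux/netlink.h>

    #ifndef SOL_NETLINK
    #define SOL_NETLINK 270
    #endif

    /* Illustrative helper (not from the patch): map an RX and a TX ring onto
     * the Netlink socket 'fd', following the example in
     * Documentation/networking/netlink_mmap.txt.  The ring geometry below is
     * that example's, not necessarily what the patch uses. */
    static void *
    map_rings(int fd)
    {
        unsigned int block_size = 16 * getpagesize();  /* 64 kB with 4 kB pages */
        struct nl_mmap_req req = {
            .nm_block_size = block_size,
            .nm_block_nr   = 64,
            .nm_frame_size = 16384,
            .nm_frame_nr   = 64 * block_size / 16384,
        };
        size_t ring_size;
        void *rx_ring;

        /* Configure both rings with the same geometry. */
        if (setsockopt(fd, SOL_NETLINK, NETLINK_RX_RING, &req, sizeof req) < 0 ||
            setsockopt(fd, SOL_NETLINK, NETLINK_TX_RING, &req, sizeof req) < 0) {
            return NULL;
        }

        /* Each ring is nm_block_nr * nm_block_size = 4 MB, and the TX ring is
         * mapped right after the RX ring, so 8 MB is mapped per socket. */
        ring_size = (size_t) req.nm_block_nr * req.nm_block_size;
        rx_ring = mmap(NULL, 2 * ring_size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        return rx_ring == MAP_FAILED ? NULL : rx_ring;
    }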
If I'm doing the calculations correctly, this mmaps 8 MB per ring-based Netlink socket on a system with 4 kB pages (rough arithmetic sketched below). OVS currently creates one Netlink socket for each datapath port, so with 1000 ports (a moderate number; we sometimes test with more) that is 8 GB of address space. On a 32-bit architecture that is impossible. On a 64-bit architecture it is possible, but it may pin an actual 8 GB of RAM: OVS often runs with mlockall(), since it is something of a soft real-time system (users don't want their packet delivery delayed while data is paged back in).

Do you have any thoughts about this issue?
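
To make the arithmetic explicit (taking the 8 MB mapped per socket above as an assumption):

    #include <stdio.h>

    int main(void)
    {
        /* Assumed mapping per socket: a 4 MB RX ring plus a 4 MB TX ring. */
        const unsigned long long per_socket = 8ULL << 20;   /* 8 MB */
        const unsigned long long n_ports = 1000;            /* one socket per port */
        const unsigned long long total = per_socket * n_ports;

        printf("mapped per socket: %llu MB\n", per_socket >> 20);
        printf("mapped for 1000 ports: %llu MB (~%.1f GiB)\n",
               total >> 20, total / (double) (1ULL << 30));
        /* ~7.8 GiB of ring mappings cannot fit in the at most 4 GB of address
         * space available to a 32-bit process. */
        return 0;
    }

And with mlockall(MCL_CURRENT | MCL_FUTURE) in effect, future mappings such as these rings are locked (and therefore populated) rather than merely reserving address space.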