or maybe fewer frames per ring. The uniform 8 MB per ring on every port
could prove too rigid a setting.
Ideally, the ring size should be a function of need, i.e. the amount of
netlink traffic on the port using that socket.

Kais


On Wed, Dec 4, 2013 at 9:20 AM, Thomas Graf <tg...@redhat.com> wrote:

> On 12/04/2013 05:33 PM, Ben Pfaff wrote:
>
>> If I'm doing the calculations correctly, this mmaps 8 MB per ring-based
>> Netlink socket on a system with 4 kB pages.  OVS currently creates one
>> Netlink socket for each datapath port.  With 1000 ports (a moderate
>> number; we sometimes test with more), that is 8 GB of address space.  On
>> a 32-bit architecture that is impossible.  On a 64-bit architecture it
>> is possible but it may reserve an actual 8 GB of RAM: OVS often runs
>> with mlockall() since it is something of a soft real-time system (users
>> don't want their packet delivery delayed to page data back in).
>>
>> Do you have any thoughts about this issue?
>>
>
> That's certainly a problem. I had the impression that the changes that
> allow consolidating multiple bridges into a single DP would minimize the
> number of DPs used.
>
> How about we limit the number of mmaped sockets to a configurable
> maximum that defaults to 16 or 32?
>
> _______________________________________________
> dev mailing list
> dev@openvswitch.org
> http://openvswitch.org/mailman/listinfo/dev
>
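Thomas's cap-and-fall-back idea could look something like the sketch below (purely hypothetical; the class names and the default of 16 are illustrative stand-ins, not actual OVS code):

```python
# Hypothetical sketch of the proposal above: cap the number of
# mmaped Netlink sockets at a configurable maximum and fall back
# to regular (copy-based) sockets for the remaining ports.

MAX_MMAP_SOCKETS = 16  # configurable default suggested in the thread

class PlainSocket:
    mmaped = False

class MmapSocket:
    mmaped = True

def make_port_socket(num_existing_mmaped):
    """Use a ring-based socket only while under the configured cap."""
    if num_existing_mmaped < MAX_MMAP_SOCKETS:
        return MmapSocket()
    return PlainSocket()

sockets = []
for _ in range(1000):
    n_mmaped = sum(1 for s in sockets if s.mmaped)
    sockets.append(make_port_socket(n_mmaped))

# Only the first 16 sockets get rings; the other 984 stay plain,
# keeping the total mapped area at 16 * 8 MB = 128 MB.
assert sum(1 for s in sockets if s.mmaped) == MAX_MMAP_SOCKETS
```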