PR raised for comment upstream:

https://github.com/openvswitch/ovs/pull/350

** Project changed: charm-ovn-chassis => openvswitch (Ubuntu)

** Changed in: openvswitch (Ubuntu)
       Status: New => Triaged

** Changed in: openvswitch (Ubuntu)
   Importance: Undecided => Medium

** Description changed:

  When OVS is deployed inside a LXD container the amount of locked
  memory is limited by the configured system limits for the container -
  until recently this was very small (64K) but was bumped to 16M and then
  64M by recent changes in systemd.
  
  OVS will attempt to lock current and future memory allocations on
  startup - if this fails then subsequent memory allocations will not be
  locked, but it's not a fatal error - the daemons can still run, but
  there is a potential performance impact when memory contention occurs.
  
- @64K this happened reliably the the ovs-vswitchd daemon would run
+ @64K this happened reliably and the ovs-vswitchd daemon would run
  without locked memory.
  
  At 16M and 64M this is less clear cut - the initial mlockall call
  succeeds so the daemon runs with memory locking enabled for future
  malloc and similar calls - if one of those fails then the daemon will
  abort.  On a modern server with many cores the amount of locked memory
  can quite easily exceed the now higher limits inside a container.
  
  Outside of a container, running as the root user, the daemon has
  unlimited access to locked memory (up to the physical memory limit of
  the server).
  
  Rather than completely disabling mlock when running in containers
  (which would be one approach to avoid this issue) it could be better
  to fall back to unlocked memory usage if the limit is hit.
  
  mmap and related calls fail and set errno to EAGAIN when the limit is
  hit.

** Changed in: openvswitch (Ubuntu)
     Assignee: (unassigned) => James Page (james-page)

** Description changed:

  When OVS is deployed inside a LXD container the amount of locked
  memory is limited by the configured system limits for the container -
  until recently this was very small (64K) but was bumped to 16M and then
  64M by recent changes in systemd.
  
  OVS will attempt to lock current and future memory allocations on
  startup - if this fails then subsequent memory allocations will not be
  locked, but it's not a fatal error - the daemons can still run, but
  there is a potential performance impact when memory contention occurs.
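
  The startup behaviour described above amounts to something like the
  minimal sketch below. This is illustrative only, not the actual OVS
  source; ovs-vswitchd requests this behaviour when started with its
  --mlockall option:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Lock all pages mapped now (MCL_CURRENT) and ask the kernel
         * to lock every future mapping as well (MCL_FUTURE). */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0) {
            /* Non-fatal: with a 64K RLIMIT_MEMLOCK this call fails
             * immediately and the daemon just runs unlocked. */
            fprintf(stderr, "mlockall failed: %s\n", strerror(errno));
        }
        /* ... daemon carries on either way ... */
        return 0;
    }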
  
  @64K this happened reliably and the ovs-vswitchd daemon would run
  without locked memory.
  
  At 16M and 64M this is less clear cut - the initial mlockall call
  succeeds so the daemon runs with memory locking enabled for future
  malloc and similar calls - if one of those fails then the daemon will
  abort.  On a modern server with many cores the amount of locked memory
  can quite easily exceed the now higher limits inside a container.
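
  The limit in play here is RLIMIT_MEMLOCK. A quick way to see the
  budget a container actually grants, shown purely for illustration:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* RLIMIT_MEMLOCK is the budget that mlockall(MCL_FUTURE)
         * must stay within; inside a LXD container it is typically
         * 16M or 64M rather than unlimited. */
        struct rlimit rl;
        if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0) {
            printf("RLIMIT_MEMLOCK soft=%llu hard=%llu\n",
                   (unsigned long long) rl.rlim_cur,
                   (unsigned long long) rl.rlim_max);
        }
        return 0;
    }

  (ulimit -l in a shell reports the same soft limit, in KiB.)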
  
  Outside of a container, running as the root user, the daemon has
  unlimited access to locked memory (up to the physical memory limit of
  the server).
  
  Rather than completely disabling mlock when running in containers
  (which would be one approach to avoid this issue) it could be better
  to fall back to unlocked memory usage if the limit is hit.
  
  mmap and related calls fail and set errno to EAGAIN when the limit is
  hit - one possible shape for that fallback is sketched below.
+ 
+ Related bug 1906280.
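
  One possible shape for the fallback - an assumption about the
  approach, not necessarily what the upstream PR implements - is to
  catch EAGAIN on an anonymous mapping, drop the locking, and retry
  unlocked (xmap_fallback is a hypothetical name):

    #include <errno.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* With mlockall(MCL_FUTURE) in force, a new anonymous mapping
     * fails with EAGAIN once RLIMIT_MEMLOCK is exhausted.  Instead
     * of treating that as fatal, give up locking and retry. */
    static void *xmap_fallback(size_t size)
    {
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED && errno == EAGAIN) {
            fprintf(stderr, "memlock limit hit, "
                    "falling back to unlocked memory\n");
            /* munlockall() also unlocks existing pages; a gentler
             * variant could re-issue mlockall(MCL_CURRENT) to keep
             * what is already locked. */
            munlockall();
            p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        }
        return p == MAP_FAILED ? NULL : p;
    }

    int main(void)
    {
        mlockall(MCL_CURRENT | MCL_FUTURE);  /* as at daemon startup */
        void *buf = xmap_fallback(1 << 20);  /* 1 MiB, locked if possible */
        return buf ? 0 : 1;
    }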

https://bugs.launchpad.net/bugs/1920892

Title:
  locked memory allocation failure should fallback to unlocked memory

