This bug is awaiting verification that the
linux-azure-6.11/6.11.0-1012.12~24.04.1 kernel in -proposed solves the
problem. Please test the kernel and update this bug with the results.
If the problem is solved, change the tag
'verification-needed-noble-linux-azure-6.11' to
'verification-done-noble-linux-azure-6.11'. If the problem still
exists, change the tag 'verification-needed-noble-linux-azure-6.11' to
'verification-failed-noble-linux-azure-6.11'.


If verification is not done within 5 working days from today, this
fix will be dropped from the source code, and this bug will be closed.


See https://wiki.ubuntu.com/Testing/EnableProposed for documentation
on how to enable and use -proposed. Thank you!


** Tags added: kernel-spammed-noble-linux-azure-6.11-v2 
verification-needed-noble-linux-azure-6.11

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-nvidia-6.11 in Ubuntu.
https://bugs.launchpad.net/bugs/2089306

Title:
  vfio_pci soft lockup on VM start while using PCIe passthrough

Status in linux package in Ubuntu:
  Invalid
Status in linux-nvidia package in Ubuntu:
  Invalid
Status in linux-nvidia-6.11 package in Ubuntu:
  Invalid
Status in linux source package in Noble:
  Fix Released
Status in linux-nvidia source package in Noble:
  Fix Released
Status in linux-nvidia-6.11 source package in Noble:
  Fix Committed
Status in linux source package in Oracular:
  Fix Released

Bug description:
  When starting a VM with a passthrough PCIe device, the vfio_pci driver
  will block while its fault handler pre-faults the entire mapped area.
  For PCIe devices with large BAR regions this takes a very long time to
  complete, and thus causes soft lockup warnings on the host. The
  process can take hours when multiple passthrough PCIe devices with
  large BAR regions are involved.

  This issue was introduced in kernel version 6.8.0-48-generic, with the
  addition of patches "vfio/pci: Use unmap_mapping_range()" and
  "vfio/pci: Insert full vma on mmap'd MMIO fault".

  The patch "vfio/pci: Use unmap_mapping_range()" rewrote the way VFIO
  tracks mapped regions to use the "vmf_insert_pfn" function instead of
  tracking them itself and using "io_remap_pfn_range". The
  implementation using "vmf_insert_pfn" is significantly slower.
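
  As a rough illustration (a simplified sketch, not the actual
  vfio-pci code: "old_style_mmap", "new_style_fault" and the use of
  vm_private_data to carry the BAR's base PFN are placeholders), the
  two approaches look something like this:

  #include <linux/mm.h>

  /*
   * Old approach: build PTEs for the whole BAR with a single call;
   * vfio-pci also kept the vma on a list so it could zap it later,
   * which is omitted here.
   */
  static int old_style_mmap(struct vm_area_struct *vma,
                            unsigned long base_pfn)
  {
          unsigned long size = vma->vm_end - vma->vm_start;

          return io_remap_pfn_range(vma, vma->vm_start, base_pfn, size,
                                    vma->vm_page_prot);
  }

  /*
   * New approach: no vma tracking; each fault inserts one page via
   * vmf_insert_pfn(), and teardown uses unmap_mapping_range() on the
   * backing address_space instead of walking a vma list.
   */
  static vm_fault_t new_style_fault(struct vm_fault *vmf)
  {
          struct vm_area_struct *vma = vmf->vma;
          /* assume the BAR's base PFN was stashed here at mmap time */
          unsigned long base_pfn = (unsigned long)vma->vm_private_data;

          return vmf_insert_pfn(vma, vmf->address, base_pfn + vmf->pgoff);
  }

  Setting up one PTE per vmf_insert_pfn() call carries per-call locking
  and bookkeeping overhead, so populating a large BAR this way is far
  more expensive than one io_remap_pfn_range() over the whole region.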

  The patch "vfio/pci: Insert full vma on mmap'd MMIO fault" introduced
  this pre-faulting behavior, causing soft lockup warnings on the host
  while the VM launches.
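
  With the pre-faulting change, the first fault walks the entire vma
  instead of inserting a single page. A simplified sketch (again not
  the actual patch; "prefault_whole_vma_fault" and the vm_private_data
  usage are placeholders carried over from the sketch above):

  #include <linux/mm.h>

  /* Pre-fault the whole vma on the first fault (simplified sketch). */
  static vm_fault_t prefault_whole_vma_fault(struct vm_fault *vmf)
  {
          struct vm_area_struct *vma = vmf->vma;
          unsigned long base_pfn = (unsigned long)vma->vm_private_data;
          unsigned long vaddr, pfn = base_pfn + vma->vm_pgoff;
          vm_fault_t ret = VM_FAULT_NOPAGE;

          for (vaddr = vma->vm_start; vaddr < vma->vm_end;
               vaddr += PAGE_SIZE, pfn++) {
                  /* a 64 GiB BAR at 4 KiB pages is ~16.8 million calls */
                  ret = vmf_insert_pfn(vma, vaddr, pfn);
                  if (ret != VM_FAULT_NOPAGE)
                          break;
          }
          return ret;
  }

  Spending that long inside a single fault, without returning to the
  scheduler, is what trips the host's soft lockup watchdog while the
  VM is launching.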

  Without "vfio/pci: Insert full vma on mmap'd MMIO fault", a guest OS
  experiences significantly longer boot times as faults are generated
  while configuring the passthrough PCIe devices, but the host does not
  see soft lockup warnings.

  Both of these performance issues are resolved upstream by patchset
  [1], but backporting it to 6.8 would be complex, as it makes
  significant changes to core parts of the kernel.

  The "vfio/pci: Use unmap_mapping_range()" patch was introduced as part
  of patchset [2], and is intended to resolve a WARN_ON splat introduced
  by the upstream patch ba168b52bf8e ("mm: use rwsem assertion macros
  for mmap_lock"). However, this mmap_lock patch is not present in
  noble:linux, and hence noble:linux was never impacted by the WARN_ON
  issue.

  Thus, we can safely revert the following patches to resolve this
  VFIO slowdown:
  - "vfio/pci: Insert full vma on mmap'd MMIO fault"
  - "vfio/pci: Use unmap_mapping_range()"

  [1] https://patchwork.kernel.org/project/linux-mm/list/?series=883517
  [2] https://lore.kernel.org/all/20240530045236.1005864-3-alex.william...@redhat.com/

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2089306/+subscriptions


