Hi Alex,
Thanks for the patch, Alex. This would also require support in QEMU to expose
the physical address to the VM. Are you looking at that part as well?

Regards
Varun

-----Original Message-----
From: iommu-boun...@lists.linux-foundation.org 
[mailto:iommu-boun...@lists.linux-foundation.org] On Behalf Of Alex Williamson
Sent: Saturday, October 10, 2015 12:11 AM
To: alex.william...@redhat.com
Cc: a...@scylladb.com; a...@cloudius-systems.com; g...@scylladb.com; 
m...@redhat.com; bruce.richard...@intel.com; cor...@lwn.net; 
linux-ker...@vger.kernel.org; alexander.du...@gmail.com; 
g...@cloudius-systems.com; step...@networkplumber.org; 
vl...@cloudius-systems.com; iommu@lists.linux-foundation.org; 
h...@hansjkoch.de; gre...@linuxfoundation.org
Subject: [RFC PATCH 0/2] VFIO no-iommu

Recent patches for UIO have been attempting to add MSI/X support, which
unfortunately implies DMA support; users have been enabling DMA anyway, but it
was never intended for UIO.  VFIO, on the other hand, expects an IOMMU to
provide isolation of devices, but offers a much more complete device interface,
including full MSI/X support.  There's really no way to support userspace
drivers with DMA-capable devices without an IOMMU to protect the host, but we
can at least think about doing it in a way that properly taints the kernel and
avoids duplicating existing code that does have a supportable use case.

The diffstat is only so large because I moved vfio.c to vfio_core.c so I could 
more easily keep the module named vfio.ko while keeping the bulk of the 
no-iommu support in a separate file that can be optionally compiled.  We're 
really looking at a couple hundred lines of mostly stub code.  The 
VFIO_NOIOMMU_IOMMU backend could certainly be expanded to do page pinning and
virt_to_bus() translation, but I didn't want to complicate anything yet.
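
To give a feel for it, the stub backend boils down to a vfio_iommu_driver_ops
that offers no DMA map/unmap at all; a minimal sketch, where the function names
and the exact CHECK_EXTENSION handling are illustrative rather than literally
what the patch does:

#include <linux/err.h>
#include <linux/iommu.h>
#include <linux/module.h>
#include <linux/vfio.h>

/* Sketch only: a no-iommu backend is mostly no-ops, since it has no
 * page tables to program and deliberately offers no DMA map/unmap. */
static void *vfio_noiommu_open(unsigned long arg)
{
        if (arg != VFIO_NOIOMMU_IOMMU)
                return ERR_PTR(-EINVAL);
        return NULL;                            /* no per-container state */
}

static void vfio_noiommu_release(void *iommu_data)
{
}

static long vfio_noiommu_ioctl(void *iommu_data,
                               unsigned int cmd, unsigned long arg)
{
        if (cmd == VFIO_CHECK_EXTENSION)
                return arg == VFIO_NOIOMMU_IOMMU;
        return -ENOTTY;                         /* notably: no DMA_MAP/UNMAP */
}

static int vfio_noiommu_attach_group(void *iommu_data,
                                     struct iommu_group *group)
{
        return 0;                               /* nothing to program */
}

static void vfio_noiommu_detach_group(void *iommu_data,
                                      struct iommu_group *group)
{
}

static const struct vfio_iommu_driver_ops vfio_noiommu_ops = {
        .name           = "vfio-noiommu",
        .owner          = THIS_MODULE,
        .open           = vfio_noiommu_open,
        .release        = vfio_noiommu_release,
        .ioctl          = vfio_noiommu_ioctl,
        .attach_group   = vfio_noiommu_attach_group,
        .detach_group   = vfio_noiommu_detach_group,
};

/* registered at module init with vfio_register_iommu_driver(&vfio_noiommu_ops) */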

I've only compiled this and tested loading the module with the new no-iommu
mode enabled; I haven't actually tried to port a DPDK driver to it, though it
ought to be a pretty obvious mix of the existing UIO and VFIO versions (set the
IOMMU type, but avoid using it for mapping, and do bus translations however UIO
does them).  The core vfio device file is still /dev/vfio/vfio, but all the
groups become /dev/vfio-noiommu/$GROUP.
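
For reference, the userspace side would be the usual VFIO open/ioctl sequence,
just with the new IOMMU type and without any DMA mapping ioctls; a rough
sketch, where the group number and PCI address are made up and
VFIO_NOIOMMU_IOMMU comes from the patched uapi header:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

int main(void)
{
        /* group "26" and the PCI address are examples only */
        int container = open("/dev/vfio/vfio", O_RDWR);
        int group = open("/dev/vfio-noiommu/26", O_RDWR);
        int device;

        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
        /* VFIO_NOIOMMU_IOMMU is defined by the patched <linux/vfio.h> */
        ioctl(container, VFIO_SET_IOMMU, VFIO_NOIOMMU_IOMMU);

        device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:01:00.0");

        /* From here the existing VFIO device interface applies: region
         * info, mmap of BARs, VFIO_DEVICE_SET_IRQS for MSI/X.  Bus
         * addresses for DMA have to come from elsewhere (e.g.
         * /proc/self/pagemap), as with UIO, since no-iommu offers no
         * map/unmap. */
        return device < 0;
}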

It should be obvious, but I always feel obligated to state that this does not 
and will not ever enable device assignment to virtual machines on non-IOMMU 
capable platforms.

I'm curious what IOMMU folks think of this.  This hack is really only possible 
because we don't use iommu_ops for regular DMA, so we can hijack it fairly 
safely.  I believe that's intended to change though, so this may not be 
practical long term.  Thanks,

Alex

---

Alex Williamson (2):
      vfio: Move vfio.c to vfio_core.c
      vfio: Include no-iommu mode


 drivers/vfio/Kconfig        |   15 
 drivers/vfio/Makefile       |    4 
 drivers/vfio/vfio.c         | 1640 ------------------------------------------
 drivers/vfio/vfio_core.c    | 1680 +++++++++++++++++++++++++++++++++++++++++++
 drivers/vfio/vfio_noiommu.c |  185 +++++
 drivers/vfio/vfio_private.h |   31 +
 include/uapi/linux/vfio.h   |    2 
 7 files changed, 1917 insertions(+), 1640 deletions(-)
 delete mode 100644 drivers/vfio/vfio.c
 create mode 100644 drivers/vfio/vfio_core.c
 create mode 100644 drivers/vfio/vfio_noiommu.c
 create mode 100644 drivers/vfio/vfio_private.h
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu