> From: Jacob Pan
> Sent: Tuesday, June 4, 2019 6:09 AM
>
> On Mon, 3 Jun 2019 15:57:47 +0100
> Jean-Philippe Brucker wrote:
>
> > +/**
> > + * struct iommu_fault_page_request - Page Request data
> > + * @flags: encodes whether the corresponding fields are valid and
> > whether this
> > + *
> From: Jacob Pan [mailto:jacob.jun@linux.intel.com]
> Sent: Thursday, June 6, 2019 1:38 AM
>
> On Wed, 5 Jun 2019 08:51:45 +0000
> "Tian, Kevin" wrote:
>
> > > From: Jacob Pan
> > > Sent: Tuesday, June 4, 2019 6:09 AM
> > >
>
> From: Liu, Yi L
> Sent: Saturday, June 27, 2020 2:53 PM
>
> Hi Robin,
>
> > From: Robin Murphy
> > Sent: Saturday, June 27, 2020 12:05 AM
> >
> > On 2020-06-26 08:47, Jean-Philippe Brucker wrote:
> > > On Wed, Jun 24, 2020 at 01:55:15AM -0700, Liu Yi L wrote:
> > >> IOMMUs that support nestin
> From: Liu, Yi L
> Sent: Monday, June 29, 2020 8:23 PM
>
> Hi Stefan,
>
> > From: Stefan Hajnoczi
> > Sent: Monday, June 29, 2020 5:25 PM
> >
> > On Wed, Jun 24, 2020 at 01:55:15AM -0700, Liu Yi L wrote:
> > > +/*
> > > + * struct iommu_nesting_info - Information for nesting-capable IOMMU.
> >
> From: Jacob Pan
> Sent: Tuesday, June 30, 2020 7:05 AM
>
> On Fri, 26 Jun 2020 16:19:23 -0600
> Alex Williamson wrote:
>
> > On Tue, 23 Jun 2020 10:03:53 -0700
> > Jacob Pan wrote:
> >
> > > IOMMU UAPI is newly introduced to support communications between
> > > guest virtual IOMMU and host IO
> From: Lu Baolu
> Sent: Thursday, June 25, 2020 3:26 PM
>
> On 2020/6/23 23:43, Jacob Pan wrote:
> > DevTLB flush can be used for both DMA request with and without PASIDs.
> > The former uses PASID#0 (RID2PASID), latter uses non-zero PASID for SVA
> > usage.
> >
> > This patch adds a check for P
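For context, a minimal sketch of the distinction this snippet describes, assuming the VT-d queued-invalidation helpers of that era (signatures recalled from memory; the wrapper name is hypothetical, not from the posted patch):

        static void flush_dev_tlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
                                  u32 pasid, u16 qdep, u64 addr, unsigned int mask)
        {
                /* SVA traffic carries a non-zero PASID and needs the
                 * PASID-based device-TLB invalidation descriptor. */
                if (pasid)
                        qi_flush_dev_iotlb_pasid(iommu, sid, pfsid, pasid,
                                                 qdep, addr, mask);
                else
                        /* requests without PASID go through RID2PASID (PASID 0) */
                        qi_flush_dev_iotlb(iommu, sid, pfsid, qdep, addr, mask);
        }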
> From: Lu Baolu
> Sent: Sunday, June 28, 2020 8:34 AM
>
> A pasid might be bound to a page table from a VM guest via the iommu
> ops.sva_bind_gpasid. In this case, when a DMA page fault is detected
> on the physical IOMMU, we need to inject the page fault request into
> the guest. After the gues
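A hedged sketch of the reporting path this snippet refers to, using the iommu_report_device_fault() interface and the fault uAPI field names of that time (not the posted patch; details recalled from memory, assumes <linux/iommu.h> and the iommu fault uAPI headers):

        static int report_guest_page_fault(struct device *dev, u32 pasid,
                                           u64 addr, u32 grpid, u32 perm)
        {
                struct iommu_fault_event event = {
                        .fault = {
                                .type = IOMMU_FAULT_PAGE_REQ,
                                .prm = {
                                        .flags = IOMMU_FAULT_PAGE_REQUEST_PASID_VALID |
                                                 IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE,
                                        .pasid = pasid,
                                        .grpid = grpid,
                                        .perm  = perm,
                                        .addr  = addr,
                                },
                        },
                };

                /* the registered handler (e.g. VFIO) relays this to the guest */
                return iommu_report_device_fault(dev, &event);
        }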
> From: Lu Baolu
> Sent: Sunday, June 28, 2020 8:34 AM
>
> After a page request is handled, software must response the device which
> raised the page request with the handling result. This is done through
> the iommu ops.page_response if the request was reported to outside of
> vendor iommu drive
> From: Lu Baolu
> Sent: Monday, July 6, 2020 8:26 AM
>
> It is refactored in two ways:
>
> - Make it global so that it could be used in other files.
>
> - Make bus/devfn optional so that callers could ignore these two returned
> values when they only want to get the corresponding iommu pointer.
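A short sketch of the second point, assuming a hypothetical internal lookup helper; the real patch exports device_to_iommu() itself rather than wrapping it:

        struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devfn)
        {
                struct intel_iommu *iommu;
                u8 b, d;

                iommu = lookup_iommu_for_dev(dev, &b, &d);      /* hypothetical helper */
                if (!iommu)
                        return NULL;

                /* bus/devfn are optional: callers may pass NULL for either */
                if (bus)
                        *bus = b;
                if (devfn)
                        *devfn = d;
                return iommu;
        }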
> From: Lu Baolu
> Sent: Monday, July 6, 2020 8:26 AM
>
> There are several places in the code that need to get the pointers of
> svm and sdev according to a pasid and device. Add a helper to achieve
> this for code consolidation and readability.
>
> Signed-off-by: Lu Baolu
> ---
> drivers/iom
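A hedged sketch of what such a consolidation helper can look like (structure and field names follow the VT-d SVM code of that period, not necessarily the posted code):

        static int pasid_to_svm_sdev(struct device *dev, u32 pasid,
                                     struct intel_svm **rsvm,
                                     struct intel_svm_dev **rsdev)
        {
                struct intel_svm_dev *sdev = NULL;
                struct intel_svm *svm;

                if (pasid == INVALID_IOASID)
                        return -EINVAL;

                svm = ioasid_find(NULL, pasid, NULL);
                if (IS_ERR(svm))
                        return PTR_ERR(svm);

                if (svm) {
                        struct intel_svm_dev *d;

                        /* find the per-device binding on the svm's device list */
                        rcu_read_lock();
                        list_for_each_entry_rcu(d, &svm->devs, list) {
                                if (d->dev == dev) {
                                        sdev = d;
                                        break;
                                }
                        }
                        rcu_read_unlock();
                }

                *rsvm = svm;
                *rsdev = sdev;
                return 0;
        }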
> From: Lu Baolu
> Sent: Monday, July 6, 2020 8:26 AM
>
> A pasid might be bound to a page table from a VM guest via the iommu
> ops.sva_bind_gpasid. In this case, when a DMA page fault is detected
> on the physical IOMMU, we need to inject the page fault request into
> the guest. After the guest
> From: Tian, Kevin
> Sent: Monday, July 6, 2020 9:30 AM
>
> > From: Lu Baolu
> > Sent: Monday, July 6, 2020 8:26 AM
> >
> > A pasid might be bound to a page table from a VM guest via the iommu
> > ops.sva_bind_gpasid. In this case, when a DMA page fault is
> From: Lu Baolu
> Sent: Monday, July 6, 2020 8:26 AM
>
> After a page request is handled, software must response the device which
> raised the page request with the handling result. This is done through
'response' is a noun.
> the iommu ops.page_response if the request was reported to outside
> From: Liu, Yi L
> Sent: Thursday, July 9, 2020 8:32 AM
>
> Hi Alex,
>
> > Alex Williamson
> > Sent: Thursday, July 9, 2020 3:55 AM
> >
> > On Wed, 8 Jul 2020 08:16:16 +
> > "Liu, Yi L" wrote:
> >
> > > Hi Alex,
> > >
> > > > From: Liu, Yi L < yi.l@intel.com>
> > > > Sent: Friday, Jul
> From: Liu, Yi L
> Sent: Thursday, July 9, 2020 10:08 AM
>
> Hi Kevin,
>
> > From: Tian, Kevin
> > Sent: Thursday, July 9, 2020 9:57 AM
> >
> > > From: Liu, Yi L
> > > Sent: Thursday, July 9, 2020 8:32 AM
> > >
> > > H
> From: Lu Baolu
> Sent: Thursday, July 9, 2020 3:06 PM
>
> A pasid might be bound to a page table from a VM guest via the iommu
> ops.sva_bind_gpasid. In this case, when a DMA page fault is detected
> on the physical IOMMU, we need to inject the page fault request into
> the guest. After the gue
> From: Lu Baolu
> Sent: Thursday, July 9, 2020 3:06 PM
>
> After page requests are handled, software must respond to the device
> which raised the page request with the result. This is done through
> the iommu ops.page_response if the request was reported to outside of
> vendor iommu driver thro
> From: Lu Baolu
> Sent: Friday, July 10, 2020 1:37 PM
>
> Hi Kevin,
>
> On 2020/7/10 10:42, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Thursday, July 9, 2020 3:06 PM
> >>
> >> After page requests are handled, software must respond
> From: Lu Baolu
> Sent: Wednesday, July 15, 2020 9:00 AM
>
> Hi Christoph and Jacob,
>
> On 7/15/20 12:29 AM, Jacob Pan wrote:
> > On Tue, 14 Jul 2020 09:25:14 +0100
> > Christoph Hellwig wrote:
> >
> >> On Tue, Jul 14, 2020 at 01:57:03PM +0800, Lu Baolu wrote:
> >>> Replace iommu_aux_at(de)ta
> From: Alex Williamson
> Sent: Wednesday, July 29, 2020 3:20 AM
>
[...]
> > +
> > +For example, IOTLB invalidations should always succeed. There is no
> > +architectural way to report back to the vIOMMU if the UAPI data is
> > +incompatible. If that happens, in order to guarantee IOMMU iosolatio
> From: Alex Williamson
> Sent: Thursday, July 30, 2020 4:04 AM
>
> On Thu, 16 Jul 2020 09:07:46 +0800
> Lu Baolu wrote:
>
> > Hi Jacob,
> >
> > On 7/16/20 12:01 AM, Jacob Pan wrote:
> > > On Wed, 15 Jul 2020 08:47:36 +0800
> > > Lu Baolu wrote:
> > >
> > >> Hi Jacob,
> > >>
> > >> On 7/15/20
> From: Alex Williamson
> Sent: Thursday, July 30, 2020 4:25 AM
>
> On Tue, 14 Jul 2020 13:57:02 +0800
> Lu Baolu wrote:
>
> > The device driver needs an API to get its aux-domain. A typical usage
> > scenario is:
> >
> > unsigned long pasid;
> > struct iommu_domain *domain;
> >
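A hedged completion of the "typical usage" the quote starts to show, using the aux-domain calls that existed in that time frame; the getter the patch actually proposes is not visible in the snippet, so the flow below allocates and attaches a domain explicitly instead:

        static int example_use_aux_domain(struct device *dev)
        {
                struct iommu_domain *domain;
                int pasid, ret;

                domain = iommu_domain_alloc(dev->bus);
                if (!domain)
                        return -ENOMEM;

                ret = iommu_aux_attach_device(domain, dev);
                if (ret)
                        goto out_free;

                pasid = iommu_aux_get_pasid(domain, dev);
                /* program @pasid into the device so its DMA hits @domain ... */
                return 0;

        out_free:
                iommu_domain_free(domain);
                return ret;
        }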
> From: Alex Williamson
> Sent: Friday, July 31, 2020 4:17 AM
>
> On Wed, 29 Jul 2020 23:49:20 +0000
> "Tian, Kevin" wrote:
>
> > > From: Alex Williamson
> > > Sent: Thursday, July 30, 2020 4:25 AM
> > >
> > > On Tue, 14 Jul 202
> From: Tian, Kevin
> Sent: Friday, July 31, 2020 8:26 AM
>
> > From: Alex Williamson
> > Sent: Friday, July 31, 2020 4:17 AM
> >
> > On Wed, 29 Jul 2020 23:49:20 +
> > "Tian, Kevin" wrote:
> >
> > > > From: Alex Williamso
> From: Lu Baolu
> Sent: Wednesday, August 26, 2020 10:58 AM
>
> The VT-d spec requires (10.4.4 Global Command Register, GCMD_REG
> General
> Description) that:
>
> If multiple control fields in this register need to be modified, software
> must serialize the modifications through multiple writes
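A hedged illustration of the rule being quoted: two control fields enabled through two separate writes, with a status poll in between. Register names follow the VT-d driver; locking and timeouts are omitted for brevity:

        static void enable_translation_then_qi(struct intel_iommu *iommu)
        {
                u32 sts;

                /* first write: set TE on top of the already-enabled bits */
                iommu->gcmd |= DMA_GCMD_TE;
                writel(iommu->gcmd, iommu->reg + DMAR_GCMD_REG);
                do {
                        cpu_relax();
                        sts = readl(iommu->reg + DMAR_GSTS_REG);
                } while (!(sts & DMA_GSTS_TES));

                /* second, separate write: enable queued invalidation */
                iommu->gcmd |= DMA_GCMD_QIE;
                writel(iommu->gcmd, iommu->reg + DMAR_GCMD_REG);
                do {
                        cpu_relax();
                        sts = readl(iommu->reg + DMAR_GSTS_REG);
                } while (!(sts & DMA_GSTS_QIES));
        }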
> From: Lu Baolu
> Sent: Thursday, August 27, 2020 12:25 PM
>
> The VT-d spec requires (10.4.4 Global Command Register, GCMD_REG
> General
> Description) that:
>
> If multiple control fields in this register need to be modified, software
> must serialize the modifications through multiple writes
> From: Lu Baolu
> Sent: Friday, August 28, 2020 8:06 AM
>
> The VT-d spec requires (10.4.4 Global Command Register, GCMD_REG
> General
> Description) that:
>
> If multiple control fields in this register need to be modified, software
> must serialize the modifications through multiple writes to
> From: Lu Baolu
> Sent: Thursday, August 27, 2020 1:57 PM
>
> If there are multiple NUMA domains but the RHSA is missing in ACPI/DMAR
> table, we could default to the device NUMA domain as fall back. This also
> benefits the vIOMMU use case where only a single vIOMMU is exposed,
> hence
> no RHS
> From: Lu Baolu
> Sent: Friday, September 4, 2020 9:03 AM
>
> If there are multiple NUMA domains but the RHSA is missing in ACPI/DMAR
> table, we could default to the device NUMA domain as fall back. This could
> also benefit a vIOMMU use case where only single vIOMMU is exposed,
> hence
> no RHS
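Roughly, the fallback amounts to something like the sketch below (illustrative only; the helper name is hypothetical and the real patch resolves the node from the DRHD device scope):

        static int iommu_node_for_dev(struct intel_iommu *iommu, struct device *dev)
        {
                if (iommu->node != NUMA_NO_NODE)
                        return iommu->node;     /* node provided by the RHSA */

                return dev_to_node(dev);        /* fall back to the device's NUMA node */
        }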
> From: Jason Gunthorpe
> Sent: Tuesday, March 15, 2022 7:18 AM
>
> On Mon, Mar 14, 2022 at 04:50:33PM -0600, Alex Williamson wrote:
>
> > > +/*
> > > + * The KVM_IOMMU type implies that the hypervisor will control the
> mappings
> > > + * rather than userspace
> > > + */
> > > +#define VFIO_KVM
> From: Jacob Pan
> Sent: Tuesday, March 15, 2022 1:07 PM
>
> Some modern accelerators such as Intel's Data Streaming Accelerator (DSA)
> require PASID in DMA requests to be operational. Specifically, the work
> submissions with ENQCMD on shared work queues require PASIDs. The use
> cases
> inclu
> From: Jacob Pan
> Sent: Tuesday, March 15, 2022 1:07 PM
>
> From: Lu Baolu
>
> An IOMMU domain represents an address space which can be attached by
> devices that perform DMA within a domain. However, for platforms with
> PASID capability the domain attachment needs be handled at device+PASID
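A hedged sketch of the device+PASID attach interface being proposed here (names follow the discussion; the eventually merged API may differ, assumes <linux/iommu.h> and <linux/ioasid.h>):

        int iommu_attach_device_pasid(struct iommu_domain *domain,
                                      struct device *dev, ioasid_t pasid);
        void iommu_detach_device_pasid(struct iommu_domain *domain,
                                       struct device *dev, ioasid_t pasid);

        /* typical usage for PASID-tagged in-kernel DMA */
        static int example_bind_dma_pasid(struct iommu_domain *dom,
                                          struct device *dev, ioasid_t pasid)
        {
                int ret = iommu_attach_device_pasid(dom, dev, pasid);
                if (ret)
                        return ret;
                /* ... program the PASID into the device, submit work with ENQCMDS ... */
                return 0;
        }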
> From: Jacob Pan
> Sent: Tuesday, March 15, 2022 1:07 PM
>
> On VT-d platforms with scalable mode enabled, devices issue DMA requests
> with PASID need to attach to the correct IOMMU domains.
> The attach operation involves the following:
> - programming the PASID into device's PASID table
> - t
> From: Jean-Philippe Brucker
> Sent: Tuesday, March 15, 2022 7:27 PM
>
> On Mon, Mar 14, 2022 at 10:07:06PM -0700, Jacob Pan wrote:
> > From: Lu Baolu
> >
> > An IOMMU domain represents an address space which can be attached by
> > devices that perform DMA within a domain. However, for platform
> From: Jacob Pan
> Sent: Tuesday, March 15, 2022 1:07 PM
> +static int intel_iommu_attach_dev_pasid(struct iommu_domain *domain,
> + struct device *dev, ioasid_t pasid)
> +{
> + struct dmar_domain *dmar_domain = to_dmar_domain(domain);
> + struct device
> From: Jason Gunthorpe
> Sent: Tuesday, March 15, 2022 10:33 PM
>
> On Mon, Mar 14, 2022 at 10:07:07PM -0700, Jacob Pan wrote:
> > + /*
> > +* Each domain could have multiple devices attached with shared or
> per
> > +* device PASIDs. At the domain level, we keep track of unique PASIDs
> From: Jacob Pan
> Sent: Tuesday, March 15, 2022 1:07 PM
>
> With the availability of a generic device-PASID-domain attachment API,
> there's no need to special case RID2PASID. Use the API to replace
> duplicated code.
>
> Signed-off-by: Jacob Pan
> ---
> drivers/iommu/intel/iommu.c | 18 ++-
> From: Jason Gunthorpe
> Sent: Tuesday, March 15, 2022 10:22 PM
>
> On Tue, Mar 15, 2022 at 11:16:41AM +, Robin Murphy wrote:
> > On 2022-03-15 05:07, Jacob Pan wrote:
> > > DMA mapping API is the de facto standard for in-kernel DMA. It operates
> > > on a per device/RID basis which is not P
> From: Jacob Pan
> Sent: Wednesday, March 16, 2022 5:24 AM
>
> Hi Jason,
>
> On Tue, 15 Mar 2022 14:05:07 -0300, Jason Gunthorpe
> wrote:
>
> > On Tue, Mar 15, 2022 at 09:31:35AM -0700, Jacob Pan wrote:
> >
> > > > IMHO it is a device mis-design of IDXD to require all DMA be PASID
> > > > tag
> From: Robin Murphy
> Sent: Tuesday, March 15, 2022 6:49 PM
>
> On 2022-03-14 19:44, Matthew Rosato wrote:
> > s390x will introduce an additional domain type that is used for
> > managing IOMMU owned by KVM. Define the type here and add an
> > interface for allocating a specified type vs the def
> From: Jason Gunthorpe
> Sent: Thursday, March 17, 2022 9:53 PM
>
> On Thu, Mar 17, 2022 at 05:47:36AM +, Tian, Kevin wrote:
> > > From: Robin Murphy
> > > Sent: Tuesday, March 15, 2022 6:49 PM
> > >
> > > On 2022-03-14 19:44, Matthew Rosato w
> From: Jacob Pan
> Sent: Thursday, March 17, 2022 5:02 AM
>
> Hi Kevin,
>
> On Wed, 16 Mar 2022 07:41:34 +, "Tian, Kevin"
> wrote:
>
> > > From: Jason Gunthorpe
> > > Sent: Tuesday, March 15, 2022 10:33 PM
> > >
>
> From: Jason Gunthorpe
> Sent: Thursday, March 17, 2022 8:04 AM
>
> On Wed, Mar 16, 2022 at 10:23:26PM +, Luck, Tony wrote:
>
> > Kernel users (ring0) can supply any PASID when they use
> > the ENQCMDS instruction. Is that what you mean when you
> > say "real applications"?
>
> I'm not tal
> From: Jacob Pan
> Sent: Tuesday, March 15, 2022 1:07 PM
>
> The current in-kernel supervisor PASID support is based on the SVM/SVA
> machinery in SVA lib. The binding between a kernel PASID and kernel
> mapping has many flaws. See discussions in the link below.
>
> This patch enables in-kernel
> From: Jacob Pan
> Sent: Tuesday, March 15, 2022 1:07 PM
>
> In-kernel DMA with PASID should use DMA API now, remove supervisor
> PASID
> SVA support. Remove special cases in bind mm and page request service.
>
> Signed-off-by: Jacob Pan
so you removed all the references to SVM_FLAG_SUPERVISO
> From: Jacob Pan
> Sent: Tuesday, March 15, 2022 1:07 PM
The coverletter is [0/8] but here you actually have the 9th patch...
>
> From: Dave Jiang
>
> The idxd driver always gated the pasid enabling under a single knob and
> this assumption is incorrect. The pasid used for kernel operation c
> From: Jason Gunthorpe
> Sent: Tuesday, March 15, 2022 10:55 PM
>
> The first level iommu_domain has the 'type1' map and unmap and pins
> the pages. This is the 1:1 map with the GPA and ends up pinning all
> guest memory because the point is you don't want to take a memory pin
> on your performa
> From: David Stevens
> Sent: Wednesday, March 16, 2022 1:07 PM
>
> From: David Stevens
>
> Fall back to domain selective flush if the target address is not aligned
> to the mask being used for invalidation. This is necessary because page
using domain selective flush is a bit conservative. Wha
> From: Jason Gunthorpe
> Sent: Friday, March 18, 2022 9:46 PM
>
> On Fri, Mar 18, 2022 at 07:01:19AM +, Tian, Kevin wrote:
> > > From: Jason Gunthorpe
> > > Sent: Tuesday, March 15, 2022 10:55 PM
> > >
> > > The first level iommu_domain
> From: Jason Gunthorpe
> Sent: Friday, March 18, 2022 10:13 PM
>
> On Fri, Mar 18, 2022 at 02:23:57AM +, Tian, Kevin wrote:
>
> > Yes, that is another major part work besides the iommufd work. And
> > it is not compatible with KVM features which rely on th
> From: Lu Baolu
> Sent: Sunday, March 20, 2022 2:40 PM
>
> Use this field to save the pasid/ssid bits that a device is able to
> support with its IOMMU hardware. It is a generic attribute of a device
> and lifting it into the per-device dev_iommu struct makes it possible
> to allocate a PASID fo
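In other words, something along these lines (a sketch: the field name is taken from the posted patch, the helper is invented for illustration):

        struct dev_iommu {
                /* ... existing fields ... */
                unsigned int pasid_bits;        /* 0 if the device cannot use PASIDs */
        };

        static inline ioasid_t dev_max_pasids(struct device *dev)
        {
                return dev->iommu && dev->iommu->pasid_bits ?
                       (1U << dev->iommu->pasid_bits) : 0;
        }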
> From: Lu Baolu
> Sent: Sunday, March 20, 2022 2:40 PM
>
> Add a new iommu domain type IOMMU_DOMAIN_SVA to represent an I/O
> page
> table which is shared from CPU host VA. Add a sva_cookie field in the
> iommu_domain structure to save the mm_struct which represent the CPU
> memory page table.
>
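A minimal sketch of the idea, assuming the field name from the posted patch (sva_cookie) and using a plain domain allocation as a stand-in for the SVA-type one:

        struct iommu_domain {
                /* ... existing fields ... */
                struct mm_struct *sva_cookie;   /* CPU page table shared with the device */
        };

        static struct iommu_domain *example_sva_domain_alloc(struct device *dev,
                                                             struct mm_struct *mm)
        {
                /* stand-in for an IOMMU_DOMAIN_SVA-type allocation */
                struct iommu_domain *domain = iommu_domain_alloc(dev->bus);

                if (!domain)
                        return NULL;

                mmgrab(mm);                     /* keep the mm alive while it is shared */
                domain->sva_cookie = mm;
                return domain;
        }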
> From: Lu Baolu
> Sent: Sunday, March 20, 2022 2:40 PM
>
> Attaching an IOMMU domain to a PASID of a device is a generic operation
> for modern IOMMU drivers which support PASID-granular DMA address
> translation. Currently visible usage scenarios include (but not limited):
>
> - SVA
> - kern
> From: Lu Baolu
> Sent: Sunday, March 20, 2022 2:40 PM
>
> Add support for SVA domain allocation and provide an SVA-specific
> iommu_domain_ops.
>
> Signed-off-by: Lu Baolu
> ---
> include/linux/intel-iommu.h | 1 +
> drivers/iommu/intel/iommu.c | 12
> drivers/iommu/intel/svm.c
> From: Lu Baolu
> Sent: Sunday, March 20, 2022 2:40 PM
> +struct iommu_sva *
> +iommu_sva_bind_device(struct device *dev, struct mm_struct *mm, void
> *drvdata)
> +{
> + int ret = -EINVAL;
> + struct iommu_sva *handle;
> + struct iommu_domain *domain;
> +
> + handle = kzalloc(size
> From: Lu Baolu
> Sent: Sunday, March 20, 2022 2:40 PM
>
> The existing IOPF handling framework only handles the I/O page faults for
> SVA. Given that we are able to link iommu domain with each I/O page fault,
> we can now make the I/O page fault handling framework more general for
> more types
> From: Lu Baolu
> Sent: Monday, March 21, 2022 6:22 PM
> >> - if (features >= 0)
> >> + if (features >= 0) {
> >>info->pasid_supported = features | 1;
> >> + dev->iommu->pasid_bi
> From: Jean-Philippe Brucker
> Sent: Monday, March 21, 2022 7:36 PM
>
> On Sun, Mar 20, 2022 at 02:40:27PM +0800, Lu Baolu wrote:
> > diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> > index c0966fc9b686..4f90b71c6f6e 100644
> > --- a/drivers/iommu/iommu.c
> > +++ b/drivers/iommu/iom
> From: Jean-Philippe Brucker
> Sent: Monday, March 21, 2022 7:42 PM
>
> Hi Kevin,
>
> On Mon, Mar 21, 2022 at 08:09:36AM +, Tian, Kevin wrote:
> > > From: Lu Baolu
> > > Sent: Sunday, March 20, 2022 2:40 PM
> > >
> > > The existing IO
> From: Jason Gunthorpe
> Sent: Monday, March 21, 2022 10:07 PM
>
> On Sat, Mar 19, 2022 at 07:51:31AM +, Tian, Kevin wrote:
> > > From: Jason Gunthorpe
> > > Sent: Friday, March 18, 2022 10:13 PM
> > >
> > > On Fri, Mar 18, 2022 at 02:23:
> From: Jean-Philippe Brucker
> Sent: Tuesday, March 22, 2022 6:06 PM
>
> On Tue, Mar 22, 2022 at 01:00:08AM +, Tian, Kevin wrote:
> > > From: Jean-Philippe Brucker
> > > Sent: Monday, March 21, 2022 7:42 PM
> > >
> > > Hi Kevin,
> >
> From: Jason Gunthorpe
> Sent: Wednesday, March 23, 2022 12:15 AM
>
> On Tue, Mar 22, 2022 at 09:29:23AM -0600, Alex Williamson wrote:
>
> > I'm still picking my way through the series, but the later compat
> > interface doesn't mention this difference as an outstanding issue.
> > Doesn't this
> From: Jason Wang
> Sent: Thursday, March 24, 2022 10:28 AM
>
> On Thu, Mar 24, 2022 at 10:12 AM Tian, Kevin wrote:
> >
> > > From: Jason Gunthorpe
> > > Sent: Wednesday, March 23, 2022 12:15 AM
> > >
> > > On Tue, Mar 22, 2022 at 09:2
> From: Jason Gunthorpe
> Sent: Thursday, March 24, 2022 2:16 AM
>
> On Tue, Mar 22, 2022 at 04:15:44PM -0600, Alex Williamson wrote:
>
> > > +int iopt_access_pages(struct io_pagetable *iopt, unsigned long iova,
> > > + unsigned long length, struct page **out_pages, bool write)
> >
> From: Jason Wang
> Sent: Thursday, March 24, 2022 10:57 AM
>
> On Thu, Mar 24, 2022 at 10:42 AM Tian, Kevin wrote:
> >
> > > From: Jason Wang
> > > Sent: Thursday, March 24, 2022 10:28 AM
> > >
> > > On Thu, Mar 24, 2022 at 10:12
> From: Jason Wang
> Sent: Thursday, March 24, 2022 11:51 AM
>
> > >
> >
> > In the end vfio type1 will be replaced by iommufd compat layer. With
> > that goal in mind iommufd has to inherit type1 behaviors.
>
> So the compatibility should be provided by the compat layer instead of
> the core io
> From: Jason Gunthorpe
> Sent: Thursday, March 24, 2022 4:34 AM
>
> On Wed, Mar 23, 2022 at 02:04:46PM -0600, Alex Williamson wrote:
> > On Wed, 23 Mar 2022 16:34:39 -0300
> > Jason Gunthorpe wrote:
> >
> > > On Wed, Mar 23, 2022 at 01:10:38PM -0600, Alex Williamson wrote:
> > > > On Fri, 18 Ma
> From: Jason Gunthorpe
> Sent: Thursday, March 24, 2022 6:55 AM
>
> On Wed, Mar 23, 2022 at 05:34:18PM -0300, Jason Gunthorpe wrote:
>
> > Stated another way, any platform that wires dev_is_dma_coherent() to
> > true, like all x86 does, must support IOMMU_CACHE and report
> > IOMMU_CAP_CACHE_CO
> From: Jason Gunthorpe
> Sent: Thursday, March 24, 2022 9:46 PM
>
> On Thu, Mar 24, 2022 at 07:25:03AM +, Tian, Kevin wrote:
>
> > Based on that here is a quick tweak of the force-snoop part (not compiled).
>
> I liked your previous idea better, that IOMMU_CA
> From: Jason Gunthorpe
> Sent: Friday, March 25, 2022 7:12 AM
>
> On Thu, Mar 24, 2022 at 04:04:03PM -0600, Alex Williamson wrote:
> > That's essentially what I'm trying to reconcile, we're racing both
> > to round out the compatibility interface to fully support QEMU, while
> > also updating QE
> From: David Stevens
> Sent: Tuesday, March 22, 2022 2:36 PM
>
> From: David Stevens
>
> Calculate the appropriate mask for non-size-aligned page selective
> invalidation. Since psi uses the mask value to mask out the lower order
> bits of the target address, properly flushing the iotlb require
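For illustration, the arithmetic being discussed (a sketch, not the posted fix): a PSI covers a naturally aligned block of 2^order pages, since the hardware masks out the low address bits, so an unaligned range must grow the order until the block containing the start also covers the end. For example, flushing 2 pages at pfn 3 with order 1 would only cover pfns 2-3; growing the order to 3 covers pfns 0-7, which includes pfns 3-4.

        /* assumes <linux/log2.h> and <linux/align.h> */
        static unsigned int psi_order(unsigned long pfn, unsigned long npages)
        {
                unsigned int order = order_base_2(npages);

                /* grow until the aligned block starting at or below pfn
                 * reaches past the last page of the range */
                while (ALIGN_DOWN(pfn, 1UL << order) + (1UL << order)
                       < pfn + npages)
                        order++;

                return order;
        }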
> From: David Stevens
> Sent: Friday, March 25, 2022 3:43 PM
> On Fri, Mar 25, 2022 at 4:15 PM Zhang, Tina wrote:
> >
> >
> >
> > > -Original Message-
> > > From: iommu On Behalf Of
> > > Tian, Kevin
> > > Sent: Fr
> From: Tian, Kevin
> Sent: Friday, March 25, 2022 10:16 AM
>
> > From: Jason Gunthorpe
> > Sent: Thursday, March 24, 2022 9:46 PM
> >
> > On Thu, Mar 24, 2022 at 07:25:03AM +, Tian, Kevin wrote:
> >
> > > Based on that here is a qui
> From: Lu Baolu
> Sent: Tuesday, March 29, 2022 1:38 PM
>
> Some of the interfaces in the IOMMU core require that only a single
> kernel device driver controls the device in the IOMMU group. The
> existing method is to check the device count in the IOMMU group in
> the interfaces. This is unreli
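A hedged sketch of the direction this series takes (the function name follows the posted interface; the merged form may differ): instead of counting devices in the group at call time, a driver claims DMA ownership of the whole group up front and conflicting users are rejected.

        static int example_take_group(struct iommu_group *group, void *owner_cookie)
        {
                int ret;

                ret = iommu_group_claim_dma_owner(group, owner_cookie);
                if (ret)
                        return ret;     /* another driver already owns DMA for this group */

                /* ... safe to use user-controlled DMA for every device in the group ... */
                return 0;
        }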
> From: Jason Gunthorpe
> Sent: Tuesday, March 29, 2022 7:43 PM
>
> On Tue, Mar 29, 2022 at 08:42:13AM +, Tian, Kevin wrote:
>
> > btw I'm not sure whether this is what SVA requires. IIRC the problem with
> > SVA is because PASID TLP prefix is not counted
> From: Lu Baolu
> Sent: Wednesday, March 30, 2022 1:00 PM
> >
> > btw I'm not sure whether this is what SVA requires. IIRC the problem with
> > SVA is because PASID TLP prefix is not counted in PCI packet routing thus
> > a DMA target address with PASID might be treated as P2P if the address
> >
> From: Lu Baolu
> Sent: Tuesday, March 29, 2022 1:38 PM
>
> Use this field to save the pasid/ssid bits that a device is able to
> support with its IOMMU hardware. It is a generic attribute of a device
> and lifting it into the per-device dev_iommu struct makes it possible
> to allocate a PASID f
> From: Jason Gunthorpe
> Sent: Wednesday, March 30, 2022 7:58 PM
>
> On Wed, Mar 30, 2022 at 06:50:11AM +, Tian, Kevin wrote:
>
> > One thing that I'm not very sure is about DMA alias. Even when physically
> > there is only a single device within the grou
+Alex
> From: Tian, Kevin
> Sent: Wednesday, March 30, 2022 10:13 PM
>
> > From: Jason Gunthorpe
> > Sent: Wednesday, March 30, 2022 7:58 PM
> >
> > On Wed, Mar 30, 2022 at 06:50:11AM +, Tian, Kevin wrote:
> >
> > > One thing that I'm n
> From: David Gibson
> Sent: Thursday, March 31, 2022 12:36 PM
> > +
> > +/**
> > + * struct iommu_ioas_iova_ranges - ioctl(IOMMU_IOAS_IOVA_RANGES)
> > + * @size: sizeof(struct iommu_ioas_iova_ranges)
> > + * @ioas_id: IOAS ID to read ranges from
> > + * @out_num_iovas: Output total number of rang
> From: Jason Gunthorpe
> Sent: Wednesday, March 30, 2022 10:30 PM
>
> On Wed, Mar 30, 2022 at 02:12:57PM +, Tian, Kevin wrote:
> > > From: Jason Gunthorpe
> > > Sent: Wednesday, March 30, 2022 7:58 PM
> > >
> > > On Wed, Mar 30, 2022 at 06:50:
> From: Jason Gunthorpe
> Sent: Thursday, March 31, 2022 3:02 AM
>
> On Tue, Mar 29, 2022 at 01:37:52PM +0800, Lu Baolu wrote:
> > @@ -95,6 +101,7 @@ struct iommu_domain {
> > void *handler_token;
> > struct iommu_domain_geometry geometry;
> > struct iommu_dma_cookie *iova_cookie;
> >
> From: Jason Gunthorpe
> Sent: Wednesday, April 6, 2022 6:58 AM
>
> On Tue, Apr 05, 2022 at 01:50:36PM -0600, Alex Williamson wrote:
> > >
> > > +static bool intel_iommu_enforce_cache_coherency(struct
> iommu_domain *domain)
> > > +{
> > > + struct dmar_domain *dmar_domain = to_dmar_domain(domai
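The quoted hunk is cut off above; a hedged sketch of what such an enforce_cache_coherency() callback looks like (illustrative only, not necessarily the posted code): refuse if the IOMMU cannot force snooping for this domain, otherwise latch the requirement so later attaches are checked against it.

        static bool example_enforce_cache_coherency(struct iommu_domain *domain)
        {
                struct dmar_domain *dmar_domain = to_dmar_domain(domain);

                if (!dmar_domain->iommu_snooping)       /* hardware cannot block no-snoop */
                        return false;

                dmar_domain->enforce_no_snoop = true;   /* remembered for future attaches */
                return true;
        }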
> From: Tian, Kevin
> Sent: Wednesday, April 6, 2022 7:32 AM
>
> > From: Jason Gunthorpe
> > Sent: Wednesday, April 6, 2022 6:58 AM
> >
> > On Tue, Apr 05, 2022 at 01:50:36PM -0600, Alex Williamson wrote:
> > > >
> > > > +static bool in
> From: Jason Gunthorpe
> Sent: Sunday, April 3, 2022 7:32 AM
>
> On Sat, Apr 02, 2022 at 08:43:16AM +, Tian, Kevin wrote:
>
> > > This assumes any domain is interchangeable with any device, which is
> > > not the iommu model. We need a domain op to check
> From: Jason Gunthorpe
> Sent: Wednesday, April 6, 2022 9:24 AM
>
> On Wed, Apr 06, 2022 at 01:00:13AM +, Tian, Kevin wrote:
>
> > > Because domains wrap more than just the IOPTE format, they have
> > > additional data related to the IOMMU HW block its
er arches are certainly welcome to implement
> enforce_cache_coherency(), it is not clear there is any benefit in doing
> so.
>
> After this series there are only two calls left to iommu_capable() with a
> bus argument which should help Robin's work here.
>
> T
> From: Jason Gunthorpe
> Sent: Wednesday, April 6, 2022 3:29 AM
>
> On Tue, Apr 05, 2022 at 01:10:44PM -0600, Alex Williamson wrote:
> > On Tue, 5 Apr 2022 13:16:01 -0300
> > Jason Gunthorpe wrote:
> >
> > > dev_is_dma_coherent() is the control to determine if IOMMU_CACHE can
> be
> > > suppor
> From: Jason Gunthorpe
> Sent: Wednesday, April 6, 2022 12:16 AM
>
> This new mechanism will replace using IOMMU_CAP_CACHE_COHERENCY
> and
> IOMMU_CACHE to control the no-snoop blocking behavior of the IOMMU.
>
> Currently only Intel and AMD IOMMUs are known to support this
> feature. They both
> From: Lu Baolu
> Sent: Wednesday, April 6, 2022 6:02 PM
>
> Hi Kevin,
>
> On 2022/4/2 15:12, Tian, Kevin wrote:
> >>>> Add a flag to the group that positively indicates the group can never
> >>>> have more than one member, even after hot plug. e
> From: Lu Baolu
> Sent: Wednesday, April 6, 2022 7:03 PM
>
> On 2022/4/6 18:44, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Wednesday, April 6, 2022 6:02 PM
> >>
> >> Hi Kevin,
> >>
> >> On 2022/4/2 15:12, Tian, Kevin wro
> From: Robin Murphy
> Sent: Wednesday, April 6, 2022 8:32 PM
>
> On 2022-04-06 06:58, Tian, Kevin wrote:
> >> From: Jason Gunthorpe
> >> Sent: Wednesday, April 6, 2022 9:24 AM
> >>
> >> On Wed, Apr 06, 2022 at 01:00:13AM +, Tian, Kevin wrote:
> From: Jason Gunthorpe
> Sent: Thursday, April 7, 2022 1:17 AM
>
> On Wed, Apr 06, 2022 at 06:10:31PM +0200, Christoph Hellwig wrote:
> > On Wed, Apr 06, 2022 at 01:06:23PM -0300, Jason Gunthorpe wrote:
> > > On Wed, Apr 06, 2022 at 05:50:56PM +0200, Christoph Hellwig wrote:
> > > > On Wed, Apr
> From: Christoph Hellwig
> Sent: Thursday, April 7, 2022 2:26 PM
>
> All drivers that implement get_resv_regions just use
> generic_put_resv_regions to implement the put side. Remove the
> indirections and document the allocations constraints.
>
Looks no document after removal:
> void iommu_
> From: Christoph Hellwig
> Sent: Thursday, April 7, 2022 2:26 PM
>
> Fold the arm_smmu_dev_has_feature arm_smmu_dev_feature_enabled
> into
> the main methods.
>
> Signed-off-by: Christoph Hellwig
> ---
> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 57 ++---
> 1 file changed,
> From: Jason Gunthorpe
> Sent: Thursday, April 7, 2022 11:24 PM
>
> This new mechanism will replace using IOMMU_CAP_CACHE_COHERENCY
> and
> IOMMU_CACHE to control the no-snoop blocking behavior of the IOMMU.
>
> Currently only Intel and AMD IOMMUs are known to support this
> feature. They both
> From: Jason Gunthorpe
> Sent: Thursday, April 7, 2022 11:24 PM
>
> IOMMU_CACHE means "normal DMA to this iommu_domain's IOVA should
> be cache
> coherent" and is used by the DMA API. The definition allows for special
> non-coherent DMA to exist - ie processing of the no-snoop flag in PCIe
> TLP
(CC Jason Wang)
> From: Jason Gunthorpe
> Sent: Thursday, April 7, 2022 11:24 PM
>
> While the comment was correct that this flag was intended to convey the
> block no-snoop support in the IOMMU, it has become widely implemented
> and
> used to mean the IOMMU supports IOMMU_CACHE as a map flag. O
> From: Jason Gunthorpe
> Sent: Thursday, April 7, 2022 11:24 PM
>
> IOMMU_CACHE means that normal DMAs do not require any additional
> coherency
> mechanism and is the basic uAPI that VFIO exposes to userspace. For
> instance VFIO applications like DPDK will not work if additional coherency
> op
> From: Tian, Kevin
> Sent: Friday, April 8, 2022 4:06 PM
>
> > From: Jason Gunthorpe
> > Sent: Thursday, April 7, 2022 11:24 PM
> >
> > This new mechanism will replace using IOMMU_CAP_CACHE_COHERENCY
> > and
> > IOMMU_CACHE to contro