> From: Jason Gunthorpe
> Sent: Friday, April 8, 2022 3:08 AM
> On Thu, Apr 07, 2022 at 07:02:03PM +0100, Robin Murphy wrote:
> > On 2022-04-07 18:43, Jason Gunthorpe wrote:
> > > On Thu, Apr 07, 2022 at 06:03:37PM +0100, Robin Murphy wrote:
> > > > At a glance, this all looks about the right shape
> From: Robin Murphy
> Sent: Friday, April 8, 2022 6:12 PM
>
> On 2022-04-08 10:08, Tian, Kevin wrote:
> >> From: Jason Gunthorpe
> >> Sent: Friday, April 8, 2022 3:08 AM
> >> On Thu, Apr 07, 2022 at 07:02:03PM +0100, Robin Murphy wrote:
> >
> From: Robin Murphy
> Sent: Saturday, April 9, 2022 1:45 AM
>
> >
> > So, I suppose VFIO would want to attach/detach on every vfio_device
> > individually and it would iterate over the group instead of doing a
> > list_first_entry() like above. This would not be hard to do in VFIO.
>
> It feel
> From: Lu Baolu
> Sent: Sunday, April 10, 2022 6:25 PM
>
> Some features require that a single device must be immutably isolated,
> even when hot plug is supported.
This reads confusingly, as hotplug cannot be allowed in a singleton group.
What you actually meant is presumably:
"Enabling certai
> From: Lu Baolu
> Sent: Sunday, April 10, 2022 6:25 PM
>
> Use below data structures for SVA implementation in the IOMMU core:
>
> - struct iommu_sva_ioas
> Represent the I/O address space shared with an application CPU address
> space. This structure has a 1:1 relationship with an mm_struct
> From: Lu Baolu
> Sent: Sunday, April 10, 2022 6:25 PM
> +struct iommu_sva *
> +iommu_sva_bind_device(struct device *dev, struct mm_struct *mm, void
> *drvdata)
> +{
> + int ret = -EINVAL;
> + struct iommu_sva *handle;
> + struct iommu_domain *domain;
> + struct iommu_sva_ioas *io
> From: Lu Baolu
> Sent: Tuesday, April 12, 2022 1:09 PM
> On 2022/4/12 11:15, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Sunday, April 10, 2022 6:25 PM
> >
> >>
> >> This adds a flag in the iommu_group struct to indicate an immuta
> From: Lu Baolu
> Sent: Sunday, April 10, 2022 6:25 PM
> @@ -898,6 +941,20 @@ int iommu_group_add_device(struct iommu_group
> *group, struct device *dev)
> list_add_tail(&device->list, &group->devices);
> if (group->domain && !iommu_is_attach_deferred(dev))
> ret = __io
> From: Lu Baolu
> Sent: Saturday, April 9, 2022 8:51 PM
>
> On 2022/4/8 16:16, Tian, Kevin wrote:
> >> From: Jason Gunthorpe
> >> Sent: Thursday, April 7, 2022 11:24 PM
> >>
> >> IOMMU_CACHE means "normal DMA to this iommu_domain's IOV
> From: Jason Gunthorpe
> Sent: Tuesday, April 12, 2022 9:21 PM
>
> On Tue, Apr 12, 2022 at 09:13:27PM +0800, Lu Baolu wrote:
>
> > > > > btw as discussed in last version it is not necessarily to recalculate
> > > > > snoop control globally with this new approach. Will follow up to
> > > > > cle
> From: Lu Baolu
> Sent: Tuesday, April 12, 2022 9:03 PM
>
> On 2022/4/12 15:37, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Tuesday, April 12, 2022 1:09 PM
> >> On 2022/4/12 11:15, Tian, Kevin wrote:
> >>>> From:
> From: Lu Baolu
> Sent: Tuesday, April 12, 2022 8:53 PM
>
> >
> >> + if (!handle) {
> >> + ret = -ENOMEM;
> >> + goto out_put_ioas;
> >> + }
> >> +
> >> + /* The reference to ioas will be kept until domain free. */
> >> + domain = iommu_sva_alloc_domain(dev, ioas);
> >
> >
> From: Lu Baolu
> Sent: Wednesday, April 13, 2022 7:58 PM
> On 2022/4/13 7:36, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Tuesday, April 12, 2022 8:53 PM
> >>
> >>>
> >>>> +if (!handle) {
> >>>
> From: Jacob Pan
> Sent: Saturday, April 16, 2022 5:00 AM
>
> Hi zhangfei@foxmail.com,
>
> On Fri, 15 Apr 2022 19:52:03 +0800, "zhangfei@foxmail.com"
> wrote:
>
> > >>> A PASID might be still used even though it is freed on mm exit.
> > >>>
> > >>> process A:
> > >>> sva_bind(
> From: Lu Baolu
> Sent: Saturday, April 16, 2022 8:31 PM
>
> This field makes the requests snoop processor caches irrespective of other
> attributes in the request or other fields in paging structure entries
> used to translate the request. The latest VT-d specification states that
> this field i
> From: Lu Baolu
> Sent: Saturday, April 16, 2022 8:31 PM
>
> The VT-d driver explicitly drains the pending page requests when a CPU
> page table (represented by a mm struct) is unbound from a PASID according
> to the procedures defined in the VT-d spec. Hence, there's no need to
> report the sto
> From: Lu Baolu
> Sent: Saturday, April 16, 2022 8:31 PM
>
> PRQ overflow may cause I/O throughput congestion, resulting in unnecessary
> degradation of IO performance. Appropriately increasing the length of PRQ
> can greatly reduce the occurrence of PRQ overflow. The count of maximum
> page req
> From: Lu Baolu
> Sent: Thursday, April 21, 2022 7:36 PM
>
> The latest VT-d specification states that the PGSNP field in the pasid
> table entry should be treated as Reserved(0) for implementations not
> supporting Snoop Control (SC=0 in the Extended Capability Register).
> This adds a check bef
> From: Lu Baolu
> Sent: Thursday, April 21, 2022 7:36 PM
>
> This field makes the requests snoop processor caches irrespective of
> other attributes in the request or other fields in paging structure
> entries used to translate the request.
I think you want to first point out the fact that SVA w
> From: Lu Baolu
> Sent: Thursday, April 21, 2022 7:36 PM
>
> The page fault handling framework in the IOMMU core explicitly states
> that it doesn't handle PCI PASID Stop Marker and the IOMMU drivers must
> discard them before reporting faults. This handles Stop Marker messages
> in prq_event_th
> From: Lu Baolu
> Sent: Thursday, April 21, 2022 7:36 PM
>
> PRQ overflow may cause I/O throughput congestion, resulting in unnecessary
> degradation of I/O performance. Appropriately increasing the length of PRQ
> can greatly reduce the occurrence of PRQ overflow. The count of maximum
> page req
> From: Lu Baolu
> Sent: Friday, April 22, 2022 9:04 PM
>
> On 2022/4/22 10:47, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Thursday, April 21, 2022 7:36 PM
> >>
> >> The latest VT-d specification states that the PGSNP field in the pasid
> From: Lu Baolu
> Sent: Sunday, April 24, 2022 12:38 PM
>
> On 2022/4/24 11:37, Tian, Kevin wrote:
> >>> This should be rebased on top of Jason's enforce coherency series
> >>> instead of blindly setting it. No matter whether it's legacy mode
> &
> From: Jason Gunthorpe
> Sent: Thursday, April 28, 2022 11:11 PM
>
>
> > 3) "dynamic DMA windows" (DDW). The IBM IOMMU hardware allows for
> 2 IOVA
> > windows, which aren't contiguous with each other. The base addresses
> > of each of these are fixed, but the size of each window, the pagesiz
> From: Joao Martins
> Sent: Friday, April 29, 2022 5:09 AM
>
> Presented herewith is a series that extends IOMMUFD to have IOMMU
> hardware support for dirty bit in the IOPTEs.
>
> Today, AMD Milan (which has been out for a year now) supports it while ARM
> SMMUv3.2+ alongside VT-D rev3.x are expec
> From: Joao Martins
> Sent: Friday, April 29, 2022 5:09 AM
>
> Add to iommu domain operations a set of callbacks to
> perform dirty tracking, particularly to start and stop
> tracking and finally to test and clear the dirty data.
to be consistent with other context, s/test/read/
>
> Drivers ar
> From: Joao Martins
> Sent: Friday, April 29, 2022 5:09 AM
>
> +static int __set_dirty_tracking_range_locked(struct iommu_domain
> *domain,
I suppose anything using iommu_domain as the first argument should
be put in the iommu layer. Here it's more reasonable to use iopt
as the first argument or
> From: Joao Martins
> Sent: Friday, April 29, 2022 5:09 AM
[...]
> +
> +static int iommu_read_and_clear_dirty(struct iommu_domain *domain,
> + struct iommufd_dirty_data *bitmap)
At a glance, this function and all the previous helpers don't rely on any
iommufd object
> From: Joao Martins
> Sent: Friday, April 29, 2022 5:09 AM
>
> Similar to .read_and_clear_dirty() use the page table
> walker helper functions and set DBM|RDONLY bit, thus
> switching the IOPTE to writeable-clean.
this should not be one-off if the operation needs to be
applied to IOPTE. Say a m
> From: Joao Martins
> Sent: Friday, April 29, 2022 5:10 AM
>
> IOMMU advertises Access/Dirty bits if the extended capability
> DMAR register reports it (ECAP, mnemonic ECAP.SSADS). The first
> stage table, though, has no bit for advertising, unless referenced via
first-stage is compatible with C
> From: Jason Gunthorpe
> Sent: Friday, April 29, 2022 8:39 PM
>
> > >> * There's no capabilities API in IOMMUFD, and in this RFC each vendor
> tracks
> > >
> > > there was discussion adding device capability uAPI somewhere.
> > >
> > ack let me know if there was snippets to the conversation as I
> From: Jason Gunthorpe
> Sent: Tuesday, May 3, 2022 2:53 AM
>
> On Mon, May 02, 2022 at 12:11:07PM -0600, Alex Williamson wrote:
> > On Fri, 29 Apr 2022 05:45:20 +0000
> > "Tian, Kevin" wrote:
> > > > From: Joao Martins
> > > > 3) U
> From: Lu Baolu
> Sent: Thursday, May 5, 2022 9:07 AM
>
> In the attach_dev callback of the default domain ops, if the domain has
> been set force_snooping, but the iommu hardware of the device does not
> support SC(Snoop Control) capability, the callback should block it and
> return a correspon
> From: Lu Baolu
> Sent: Thursday, May 5, 2022 9:07 AM
>
> As domain->force_snooping only impacts the devices attached with the
> domain, there's no need to check against all IOMMU units. At the same
> time, for a brand new domain (hasn't been attached to any device), the
> force_snooping field c
> From: Lu Baolu
> Sent: Thursday, May 5, 2022 9:07 AM
>
> The IOMMU force snooping capability is not required to be consistent
> among all the IOMMUs anymore. Remove force snooping capability check
> in the IOMMU hot-add path and domain_update_iommu_snooping()
> becomes
> dead code now.
>
> S
> From: Lu Baolu
> Sent: Thursday, May 5, 2022 9:07 AM
>
> As enforce_cache_coherency has been introduced into the
> iommu_domain_ops,
> the kernel component which owns the iommu domain is able to opt-in its
> requirement for force snooping support. The iommu driver has no need to
> hard code the
> From: Jason Gunthorpe
> Sent: Thursday, May 5, 2022 3:09 AM
>
> Once the group enters 'owned' mode it can never be assigned back to the
> default_domain or to a NULL domain. It must always be actively assigned to
worth pointing out that a NULL domain is not always translated to DMA
blocked on
> From: Joao Martins
> Sent: Thursday, May 5, 2022 6:07 PM
>
> On 5/5/22 08:42, Tian, Kevin wrote:
> >> From: Jason Gunthorpe
> >> Sent: Tuesday, May 3, 2022 2:53 AM
> >>
> >> On Mon, May 02, 2022 at 12:11:07PM -0600, Alex Williamson wrote:
>
> From: Jason Gunthorpe
> Sent: Thursday, May 5, 2022 11:33 PM
> > > /*
> > > - * If the group has been claimed already, do not re-attach the default
> > > - * domain.
> > > + * New drivers should support default domains and so the
> > > detach_dev() op
> > > + * will never be called. Otherwi
up_is_core_domain().
>
> iommu_attach_device() should trigger a WARN_ON if the group is attached
> as
> the caller is using the function wrong.
>
> Suggested-by: "Tian, Kevin"
> Signed-off-by: Jason Gunthorpe
Reviewed-by: Kevin Tian
> ---
> drivers/i
> From: Joao Martins
> Sent: Thursday, May 5, 2022 7:51 PM
>
> On 5/5/22 12:03, Tian, Kevin wrote:
> >> From: Joao Martins
> >> Sent: Thursday, May 5, 2022 6:07 PM
> >>
> >> On 5/5/22 08:42, Tian, Kevin wrote:
> >>>> F
> From: Jason Gunthorpe
> Sent: Thursday, May 5, 2022 9:55 PM
>
> On Thu, May 05, 2022 at 11:03:18AM +0000, Tian, Kevin wrote:
>
> > iiuc the purpose of 'write-protection' here is to capture in-flight dirty pages
> > in the said race window until unmap
> From: Jason Gunthorpe
> Sent: Thursday, May 5, 2022 10:08 PM
>
> On Thu, May 05, 2022 at 07:40:37AM +0000, Tian, Kevin wrote:
>
> > In concept this is an iommu property instead of a domain property.
>
> Not really, domains shouldn't be changing behaviors once
> From: Lu Baolu
> Sent: Friday, May 6, 2022 1:27 PM
> +
> +/*
> + * Set the page snoop control for a pasid entry which has been set up.
> + */
> +void intel_pasid_setup_page_snoop_control(struct intel_iommu *iommu,
> + struct device *dev, u32 pasid)
> +{
> +
> From: Baolu Lu
> Sent: Friday, May 6, 2022 1:57 PM
>
> On 2022/5/6 03:46, Steve Wahl wrote:
> > Increase DMAR_UNITS_SUPPORTED to support 64 sockets with 10 DMAR
> units
> > each, for a total of 640.
> >
> > If the available hardware exceeds DMAR_UNITS_SUPPORTED (previously
> set
> > to MAX_IO_A
> From: Roedel, Joerg
> Sent: Friday, May 6, 2022 3:11 PM
>
> On Fri, May 06, 2022 at 06:49:43AM +0000, Tian, Kevin wrote:
> > another nit: dmar is intel specific thus CONFIG_X86 is always true.
>
> There are Itanium systems which have DMAR units. Is that no longer
>
> From: David Woodhouse
> Sent: Friday, May 6, 2022 3:17 PM
>
> On Fri, 2022-05-06 at 06:49 +0000, Tian, Kevin wrote:
> > > From: Baolu Lu
> > >
> > > > --- a/include/linux/dmar.h
> > > > +++ b/include/linux/dmar.h
>
> From: David Gibson
> Sent: Friday, May 6, 2022 1:25 PM
>
> >
> > When the iommu_domain is created I want to have a
> > iommu-driver-specific struct, so PPC can customize its iommu_domain
> > however it likes.
>
> This requires that the client be aware of the host side IOMMU model.
> That's tru
> From: Lu Baolu
> Sent: Sunday, May 8, 2022 8:35 PM
>
> As domain->force_snooping only impacts the devices attached with the
> domain, there's no need to check against all IOMMU units. On the other
> hand, force_snooping could be set on a domain no matter whether it has
> been attached or not, a
> From: Jason Gunthorpe
> Sent: Tuesday, May 10, 2022 12:19 AM
>
> Once the group enters 'owned' mode it can never be assigned back to the
> default_domain or to a NULL domain. It must always be actively assigned to
> a current domain. If the caller hasn't provided a domain then the core
> must p
> From: Steve Wahl
> Sent: Friday, May 6, 2022 11:26 PM
>
> On Fri, May 06, 2022 at 08:12:11AM +0000, Tian, Kevin wrote:
> > > From: David Woodhouse
> > > Sent: Friday, May 6, 2022 3:17 PM
> > >
> > > On Fri, 2022-05-06 at 06:49 +0
> From: Jason Gunthorpe
> Sent: Friday, May 6, 2022 7:46 PM
>
> On Fri, May 06, 2022 at 03:51:40AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe
> > > Sent: Thursday, May 5, 2022 10:08 PM
> > >
> > > On Thu, May 05, 2022 at 07:40:37AM +0
> From: Jason Gunthorpe
> Sent: Saturday, May 7, 2022 1:55 AM
>
> On Fri, May 06, 2022 at 05:44:11PM +0100, Robin Murphy wrote:
> >
> > So if it *is* a domain then I can call NULL->attach_dev() and...? ;)
>
> You can call iommu_group_set_domain(group, NULL) and it will work.
>
> As I said, it mu
> From: Jason Gunthorpe
> Sent: Tuesday, May 10, 2022 9:47 PM
>
> On Tue, May 10, 2022 at 01:38:26AM +0000, Tian, Kevin wrote:
>
> > > However, it costs nothing to have dirty tracking as long as all iommus
> > > support it in the system - which seems to be the
> From: Joao Martins
> Sent: Tuesday, May 10, 2022 7:51 PM
>
> On 5/10/22 02:38, Tian, Kevin wrote:
> >> From: Jason Gunthorpe
> >> Sent: Friday, May 6, 2022 7:46 PM
> >>
> >> On Fri, May 06, 2022 at 03:51:40AM +0000, Tian, Kevin wrote:
> >&
> From: Jason Gunthorpe
> Sent: Monday, May 9, 2022 10:01 PM
>
> On Mon, May 09, 2022 at 04:01:52PM +1000, David Gibson wrote:
>
> > Which is why I'm suggesting that the base address be an optional
> > request. DPDK *will* care about the size of the range, so it just
> > requests that and gets t
> From: Jason Gunthorpe
> Sent: Wednesday, May 11, 2022 3:00 AM
>
> On Tue, May 10, 2022 at 05:12:04PM +1000, David Gibson wrote:
> > Ok... here's a revised version of my proposal which I think addresses
> > your concerns and simplifies things.
> >
> > - No new operations, but IOAS_MAP gets some n
> From: Steve Wahl
> Sent: Wednesday, May 11, 2022 3:07 AM
>
> On Tue, May 10, 2022 at 01:16:26AM +0000, Tian, Kevin wrote:
> > > From: Steve Wahl
> > > Sent: Friday, May 6, 2022 11:26 PM
> > >
> > > On Fri, May 06, 2022 at 08:12:11AM +0000, Ti
> From: Baolu Lu
> Sent: Wednesday, May 11, 2022 10:32 AM
>
> On 2022/5/10 22:02, Jason Gunthorpe wrote:
> > On Tue, May 10, 2022 at 02:17:29PM +0800, Lu Baolu wrote:
> >
> >> This adds a pair of common domain ops for this purpose and adds
> helpers
> >> to attach/detach a domain to/from a {devic
> From: Jason Gunthorpe
> Sent: Thursday, May 12, 2022 12:32 AM
>
> On Wed, May 11, 2022 at 03:15:22AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe
> > > Sent: Wednesday, May 11, 2022 3:00 AM
> > >
> > > On Tue, May 10, 2022 at 05:12:04PM
> From: Baolu Lu
> Sent: Thursday, May 12, 2022 11:03 AM
>
> On 2022/5/11 22:53, Jason Gunthorpe wrote:
> >>> Also, given the current arrangement it might make sense to have a
> >>> struct iommu_domain_sva given that no driver is wrapping this in
> >>> something else.
> >> Fair enough. How abou
> From: Jason Gunthorpe
> Sent: Wednesday, May 11, 2022 12:55 AM
>
> This control causes the ARM SMMU drivers to choose a stage 2
> implementation for the IO pagetable (vs the stage 1 usual default),
> however this choice has no visible impact to the VFIO user. Further qemu
> never implemented thi
> From: Baolu Lu
> Sent: Thursday, May 12, 2022 1:17 PM
>
> On 2022/5/12 13:01, Tian, Kevin wrote:
> >> From: Baolu Lu
> >> Sent: Thursday, May 12, 2022 11:03 AM
> >>
> >> On 2022/5/11 22:53, Jason Gunthorpe wrote:
> >>>>>
> From: Jason Gunthorpe
> Sent: Tuesday, May 10, 2022 2:33 AM
>
> On Wed, May 04, 2022 at 01:57:05PM +0200, Joerg Roedel wrote:
> > On Wed, May 04, 2022 at 08:51:35AM -0300, Jason Gunthorpe wrote:
> > > Nicolin and Eric have been testing with this series on ARM for a long
> > > time now, it is not
> From: Jason Gunthorpe
> Sent: Friday, May 20, 2022 1:04 AM
>
> Since asserting dma ownership now causes the group to have its DMA
> blocked
> the iommu layer requires a working iommu. This means the dma_owner APIs
> cannot be used on the fake groups that VFIO creates. Test for this and
> avoid
> From: Steve Wahl
> Sent: Thursday, May 19, 2022 3:58 AM
>
> On Fri, May 13, 2022 at 10:09:46AM +0800, Baolu Lu wrote:
> > On 2022/5/13 07:12, Steve Wahl wrote:
> > > On Thu, May 12, 2022 at 10:13:09AM -0500, Steve Wahl wrote:
> > > > To support up to 64 sockets with 10 DMAR units each (640), mak
> From: Jacob Pan
> Sent: Thursday, May 19, 2022 2:21 AM
>
> IOMMU group maintains a PASID array which stores the associated IOMMU
> domains. This patch introduces a helper function to do domain to PASID
> look up. It will be used by TLB flush and device-PASID attach verification.
>
> Signed-off
> From: Jacob Pan
> Sent: Thursday, May 19, 2022 2:21 AM
>
> DMA mapping API is the de facto standard for in-kernel DMA. It operates
> on a per device/RID basis which is not PASID-aware.
>
> For some modern devices, such as the Intel Data Streaming Accelerator, a PASID is
> required for certain work submis
> From: Tian, Kevin
> Sent: Monday, May 23, 2022 3:55 PM
>
> > From: Jacob Pan
> > +ioasid_t iommu_get_pasid_from_domain(struct device *dev, struct
> > iommu_domain *domain)
> > +{
> > + struct iommu_domain *tdomain;
> > + struct io
> From: Lu Baolu
> Sent: Thursday, May 19, 2022 3:21 PM
>
> Use this field to keep the number of supported PASIDs that an IOMMU
> hardware is able to support. This is a generic attribute of an IOMMU
> and lifting it into the per-IOMMU device structure makes it possible
> to allocate a PASID for d
> From: Lu Baolu
> Sent: Thursday, May 19, 2022 3:21 PM
>
> The current kernel DMA with PASID support is based on the SVA with a flag
> SVM_FLAG_SUPERVISOR_MODE. The IOMMU driver binds the kernel
> memory address
> space to a PASID of the device. The device driver programs the device with
> kerne
> From: Lu Baolu
> Sent: Thursday, May 19, 2022 3:21 PM
>
> The iommu_sva_domain represents a hardware pagetable that the IOMMU
> hardware could use for SVA translation. This adds some infrastructure
> to support SVA domain in the iommu common layer. It includes:
>
> - Add a new struct iommu_sva
> From: Baolu Lu
> Sent: Monday, May 23, 2022 3:13 PM
> > @@ -254,6 +259,7 @@ struct iommu_ops {
> > int (*def_domain_type)(struct device *dev);
> >
> > const struct iommu_domain_ops *default_domain_ops;
> > + const struct iommu_domain_ops *sva_domain_ops;
>
> Per Joerg's comment in ant
> From: Lu Baolu
> Sent: Thursday, May 19, 2022 3:21 PM
>
> The existing iommu SVA interfaces are implemented by calling the SVA
> specific iommu ops provided by the IOMMU drivers. There's no need for
> any SVA specific ops in iommu_ops vector anymore as we can achieve
> this through the generic
> From: Lu Baolu
> Sent: Thursday, May 19, 2022 3:21 PM
>
> These ops have been replaced with the dev_attach/detach_pasid domain
> ops. There's no need for them anymore. Remove them to avoid dead
> code.
>
> Signed-off-by: Lu Baolu
> Reviewed-by: Jean-Philippe Brucker
Reviewed-by: Kevin
> From: Jason Gunthorpe
> Sent: Tuesday, May 24, 2022 9:39 PM
>
> On Tue, May 24, 2022 at 09:39:52AM +0000, Tian, Kevin wrote:
> > > From: Lu Baolu
> > > Sent: Thursday, May 19, 2022 3:21 PM
> > >
> > > The iommu_sva_domain represents a hardware
> From: Jean-Philippe Brucker
> Sent: Tuesday, May 24, 2022 6:58 PM
>
> On Tue, May 24, 2022 at 10:22:28AM +0000, Tian, Kevin wrote:
> > > From: Lu Baolu
> > > Sent: Thursday, May 19, 2022 3:21 PM
> > >
> > > The existing iommu SVA interfaces a
> From: Jason Gunthorpe
> Sent: Thursday, May 26, 2022 2:27 AM
>
> On Wed, May 25, 2022 at 02:26:37PM +0800, Baolu Lu wrote:
> > On 2022/5/25 09:31, Nobuhiro Iwamatsu wrote:
> > > +static const struct iommu_ops visconti_atu_ops = {
> > > + .domain_alloc = visconti_atu_domain_alloc,
> > > + .probe_
> From: Jason Gunthorpe
> Sent: Monday, May 30, 2022 8:23 PM
>
> On Tue, May 24, 2022 at 08:17:27AM -0700, Jacob Pan wrote:
> > Hi Jason,
> >
> > On Tue, 24 May 2022 10:50:34 -0300, Jason Gunthorpe
> wrote:
> >
> > > On Wed, May 18, 2022 at 11:21:15AM -0700, Jacob Pan wrote:
> > > > DMA requests
> From: Jacob Pan
> Sent: Wednesday, June 1, 2022 1:30 AM
> > >
> > > In both cases the pasid is stored in the attach data instead of the
> > > domain.
> > >
> So during IOTLB flush for the domain, do we loop through the attach data?
Yes and it's required.
>
> > > DMA API pasid is no special fr
> From: Jacob Pan
> Sent: Wednesday, June 1, 2022 4:44 AM
>
> Hi Jason,
>
> On Tue, 31 May 2022 16:05:50 -0300, Jason Gunthorpe
> wrote:
>
> > On Tue, May 31, 2022 at 10:29:55AM -0700, Jacob Pan wrote:
> >
> > > The reason why I store PASID at IOMMU domain is for IOTLB flush within
> > > the d
> From: Jason Gunthorpe
> Sent: Wednesday, June 1, 2022 7:11 AM
>
> On Tue, May 31, 2022 at 10:22:32PM +0100, Robin Murphy wrote:
>
> > There are only 3 instances where we'll free a table while the domain is
> > live. The first is the one legitimate race condition, where two map requests
> > tar
> From: Lu Baolu
> Sent: Friday, May 27, 2022 2:30 PM
>
> The per-device device_domain_info data could be retrieved from the
> device itself. There's no need to search a global list.
>
> Signed-off-by: Lu Baolu
Reviewed-by: Kevin Tian
> ---
> drivers/iommu/intel/iommu.h | 2 --
> drivers/i
> From: Lu Baolu
> Sent: Friday, May 27, 2022 2:30 PM
>
> Use pci_get_domain_bus_and_slot() instead of searching the global list
> to retrieve the pci device pointer. This removes device_domain_list
> global list as there are no consumers anymore.
>
> Signed-off-by: Lu Baolu
Reviewed-by: Kevin
> From: Lu Baolu
> Sent: Friday, May 27, 2022 2:30 PM
>
> The IOMMU root table is allocated and freed in the IOMMU initialization
> code in static boot or hot-plug paths. There's no need for a spinlock.
s/hot-plug/hot-remove/
>
> Signed-off-by: Lu Baolu
Reviewed-by: Kevin Tian
> ---
> dri
> From: Lu Baolu
> Sent: Friday, May 27, 2022 2:30 PM
>
> The iommu->lock is used to protect the per-IOMMU domain ID resource.
> Move the spinlock acquisition/release into the helpers where domain
> IDs are allocated and freed. The device_domain_lock is irrelevant to
> domain ID resources, remove
> From: Lu Baolu
> Sent: Friday, May 27, 2022 2:30 PM
>
> The iommu->lock is used to protect the per-IOMMU pasid directory table
> and pasid table. Move the spinlock acquisition/release into the helpers
> to make the code self-contained.
>
> Signed-off-by: Lu Baolu
Reviewed-by: Kevin Tian , wi
> From: Lu Baolu
> Sent: Friday, May 27, 2022 2:30 PM
>
> When the IOMMU domain is about to be freed, it should not be set on any
> device. Instead of silently dealing with some bug cases, it's better to
> trigger a warning to report and fix any potential bugs as early as possible.
>
> static vo
> From: Baolu Lu
> Sent: Wednesday, June 1, 2022 5:37 PM
>
> On 2022/6/1 09:43, Tian, Kevin wrote:
> >> From: Jacob Pan
> >> Sent: Wednesday, June 1, 2022 1:30 AM
> >>>> In both cases the pasid is stored in the attach data instead of the
> >&g
> From: Baolu Lu
> Sent: Wednesday, June 1, 2022 7:02 PM
>
> On 2022/6/1 17:28, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Friday, May 27, 2022 2:30 PM
> >>
> >> When the IOMMU domain is about to be freed, it should not be set on
> any
> From: Jean-Philippe Brucker
> Sent: Wednesday, May 25, 2022 3:30 PM
>
> On Wed, May 25, 2022 at 02:04:49AM +0000, Tian, Kevin wrote:
> > > From: Jean-Philippe Brucker
> > > Sent: Tuesday, May 24, 2022 6:58 PM
> > >
> > > On Tue, May
> From: Nicolin Chen
> Sent: Monday, June 6, 2022 2:19 PM
>
> Cases like VFIO wish to attach a device to an existing domain that was
> not allocated specifically from the device. This raises a condition
> where the IOMMU driver can fail the domain attach because the domain and
> device are incompa
> From: Nicolin Chen
> Sent: Monday, June 6, 2022 2:19 PM
>
> From: Jason Gunthorpe
>
> The KVM mechanism for controlling wbinvd is only triggered during
> kvm_vfio_group_add(), meaning it is a one-shot test done once the devices
> are setup.
It's not one-shot. kvm_vfio_update_coherency() is ca
> From: Nicolin Chen
> Sent: Monday, June 6, 2022 2:19 PM
>
> All devices in emulated_iommu_groups have pinned_page_dirty_scope
> set, so the update_dirty_scope in the first list_for_each_entry
> is always false. Clean it up, and move the "if update_dirty_scope"
> part from the detach_group_done r
> From: Jason Gunthorpe
> Sent: Wednesday, June 8, 2022 7:17 PM
>
> On Wed, Jun 08, 2022 at 08:28:03AM +0000, Tian, Kevin wrote:
> > > From: Nicolin Chen
> > > Sent: Monday, June 6, 2022 2:19 PM
> > >
> > > From: Jason Gunthorpe
> > &g
> From: Jason Gunthorpe
> Sent: Friday, June 10, 2022 7:53 AM
>
> On Thu, Jun 09, 2022 at 05:25:42PM +, Raj, Ashok wrote:
> >
> > On Tue, Jun 07, 2022 at 09:49:32AM +0800, Lu Baolu wrote:
> > > Use this field to keep the number of supported PASIDs that an IOMMU
> > > hardware is able to suppo
> From: Baolu Lu
> Sent: Friday, June 10, 2022 2:47 PM
>
> On 2022/6/10 03:01, Raj, Ashok wrote:
> > On Tue, Jun 07, 2022 at 09:49:33AM +0800, Lu Baolu wrote:
> >> @@ -218,6 +219,30 @@ static void dev_iommu_free(struct device *dev)
> >>kfree(param);
> >> }
> >>
> >> +static u32 dev_iommu_ge
> From: Lu Baolu
> Sent: Tuesday, June 14, 2022 11:44 AM
>
> This allows the upper layers to set a domain to a PASID of a device
> if the PASID feature is supported by the IOMMU hardware. The typical
> use cases are, for example, kernel DMA with PASID and hardware
> assisted mediated device drive
> From: Baolu Lu
> Sent: Tuesday, June 14, 2022 12:48 PM
>
> On 2022/6/14 12:02, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Tuesday, June 14, 2022 11:44 AM
> >>
> >> This allows the upper layers to set a domain to a PASID of a device
>