> From: Jason Gunthorpe
> Sent: Wednesday, July 10, 2024 1:37 AM
>
> On Mon, Jul 01, 2024 at 01:55:12PM +0800, Baolu Lu wrote:
> > On 2024/6/29 5:17, Jason Gunthorpe wrote:
> > > On Sun, Jun 16, 2024 at 02:11:52PM +0800, Lu Baolu wrote:
> > > > +static int iommufd_fault_iopf_enable(struct iommufd
> From: Nicolin Chen
> Sent: Friday, July 5, 2024 7:19 AM
>
> On Thu, Jul 04, 2024 at 03:32:32PM +0800, Baolu Lu wrote:
> > On 2024/7/4 14:37, Tian, Kevin wrote:
> > > > From: Nicolin Chen
> > > > Sent: Thursday, July 4, 2024 1:36 PM
> > >
> From: Nicolin Chen
> Sent: Thursday, July 4, 2024 1:36 PM
>
> On Thu, Jul 04, 2024 at 10:59:45AM +0800, Baolu Lu wrote:
> > > On Tue, Jul 02, 2024 at 02:34:40PM +0800, Lu Baolu wrote:
> > >
> > > +enum iommu_fault_type {
> > > + IOMMU_FAULT_TYPE_HWPT_IOPF,
> > > + IOMMU_FAULT_TYPE_VIOMM
> From: Lu Baolu
> Sent: Sunday, June 16, 2024 2:12 PM
>
> This series implements the functionality of delivering IO page faults to
> user space through the IOMMUFD framework. One feasible use case is the
> nested translation. Nested translation is a hardware feature that
> supports two-stage tra
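As a rough userspace sketch of what this series enables (assuming the fault-object allocation uAPI as it was ultimately shaped, IOMMU_FAULT_QUEUE_ALLOC / struct iommu_fault_alloc; earlier revisions used different names, so treat this as illustrative only):

	#include <errno.h>
	#include <sys/ioctl.h>
	#include <linux/iommufd.h>

	/* Allocate a fault object on the iommufd (/dev/iommu) file descriptor. */
	static int alloc_fault_queue(int iommufd, __u32 *fault_id, int *fault_fd)
	{
		struct iommu_fault_alloc cmd = { .size = sizeof(cmd) };

		if (ioctl(iommufd, IOMMU_FAULT_QUEUE_ALLOC, &cmd))
			return -errno;
		*fault_id = cmd.out_fault_id;	/* bound to a HWPT at alloc time */
		*fault_fd = cmd.out_fault_fd;	/* read faults / write responses */
		return 0;
	}

The fault fd then acts as a queue: read() delivers struct iommu_hwpt_pgfault records and write() takes struct iommu_hwpt_page_response replies.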
> From: Lu Baolu
> Sent: Sunday, June 16, 2024 2:12 PM
>
> Add a new IOMMU capability flag, IOMMU_CAP_USER_IOASID_TABLE, which
> indicates if the IOMMU driver supports user-managed PASID tables. In the
> iopf deliver path, if no attach handle found for the iopf PASID, roll
> back to RID domain wh
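A hedged sketch of the fallback being described here (not the patch text itself; it reuses the attach-handle lookup from this series and assumes the existing device_iommu_capable() and IOMMU_NO_PASID kernel APIs):

	/* Prefer the handle attached for the faulting PASID. */
	handle = iommu_attach_handle_get(dev->iommu_group, fault->prm.pasid, 0);
	if (IS_ERR(handle) &&
	    device_iommu_capable(dev, IOMMU_CAP_USER_IOASID_TABLE))
		/*
		 * No handle for this PASID and the PASID table is owned by
		 * userspace: deliver the fault on the RID (no-PASID) domain.
		 */
		handle = iommu_attach_handle_get(dev->iommu_group,
						 IOMMU_NO_PASID, 0);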
> From: Baolu Lu
> Sent: Sunday, June 9, 2024 3:23 PM
>
> On 6/7/24 5:30 PM, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Monday, May 27, 2024 12:05 PM
> >>
> >> Add iopf-capable hw page table attach/detach/replace helpers. The
> poi
> From: Jason Gunthorpe
> Sent: Wednesday, June 12, 2024 9:24 PM
>
> On Fri, Jun 07, 2024 at 09:17:28AM +, Tian, Kevin wrote:
> > > From: Lu Baolu
> > > Sent: Monday, May 27, 2024 12:05 PM
> > >
> > > +static ssize_t iommufd_fault_
> From: Baolu Lu
> Sent: Thursday, June 6, 2024 2:28 PM
>
> On 6/5/24 4:28 PM, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Monday, May 27, 2024 12:05 PM
> >>
> >> +
> >> +/**
> >> + * struct iommu_hwpt_page_
> From: Baolu Lu
> Sent: Thursday, June 6, 2024 2:07 PM
>
> On 6/5/24 4:15 PM, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Monday, May 27, 2024 12:05 PM
> >>
> >> - list_for_each_entry(handle, &mm->iommu_mm->sva_handles,
> From: Lu Baolu
> Sent: Monday, May 27, 2024 12:05 PM
>
> When allocating a user iommufd_hw_pagetable, the user space is allowed to
> associate a fault object with the hw_pagetable by specifying the fault
> object ID in the page table allocation data and setting the
> IOMMU_HWPT_FAULT_ID_VALID f
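For reference, a hedged example of the userspace side of this association (field names follow the proposed struct iommu_hwpt_alloc extension and may differ between revisions; dev_id, parent_hwpt_id and fault_id are placeholders from earlier iommufd calls):

	struct iommu_hwpt_alloc alloc = {
		.size = sizeof(alloc),
		.flags = IOMMU_HWPT_FAULT_ID_VALID,
		.dev_id = dev_id,
		.pt_id = parent_hwpt_id,	/* the nesting parent HWPT */
		.fault_id = fault_id,		/* from the fault object */
		/* .data_type/.data_len/.data_uptr carry the vendor stage-1 data */
	};

	if (ioctl(iommufd, IOMMU_HWPT_ALLOC, &alloc))
		return -errno;
	hwpt_id = alloc.out_hwpt_id;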
> From: Lu Baolu
> Sent: Monday, May 27, 2024 12:05 PM
>
> Add iopf-capable hw page table attach/detach/replace helpers. The pointer
> to iommufd_device is stored in the domain attachment handle, so that it
> can be echo'ed back in the iopf_group.
this message needs an update. now the device poi
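To illustrate roughly where this ended up (a sketch, not this patch's code; to_fault_idev() is a hypothetical helper name): iommufd wraps the core attach handle and recovers its private device pointer with container_of() on the fault path:

	struct iommufd_attach_handle {
		struct iommu_attach_handle handle;
		struct iommufd_device *idev;
	};

	/* group->attach_handle leads back to the iommufd device on the fault path. */
	static struct iommufd_device *to_fault_idev(struct iopf_group *group)
	{
		return container_of(group->attach_handle,
				    struct iommufd_attach_handle, handle)->idev;
	}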
> From: Lu Baolu
> Sent: Monday, May 27, 2024 12:05 PM
>
> +static ssize_t iommufd_fault_fops_read(struct file *filep, char __user *buf,
> +size_t count, loff_t *ppos)
> +{
> + size_t fault_size = sizeof(struct iommu_hwpt_pgfault);
> + struct iommufd_fa
> From: Lu Baolu
> Sent: Monday, May 27, 2024 12:05 PM
>
> +
> +/**
> + * struct iommu_hwpt_page_response - IOMMU page fault response
> + * @size: sizeof(struct iommu_hwpt_page_response)
> + * @flags: Must be set to 0
> + * @dev_id: device ID of target device for the response
> + * @pasid: Proces
> From: Lu Baolu
> Sent: Monday, May 27, 2024 12:05 PM
>
> @@ -249,6 +249,12 @@ enum iommu_cap {
>*/
> IOMMU_CAP_DEFERRED_FLUSH,
> IOMMU_CAP_DIRTY_TRACKING, /* IOMMU supports dirty tracking */
> + /*
> + * IOMMU driver supports user-managed IOASID table. There
> From: Lu Baolu
> Sent: Monday, May 27, 2024 12:05 PM
>
> @@ -69,11 +68,16 @@ static struct iommu_mm_data *iommu_alloc_mm_data(struct mm_struct *mm, struct de
> */
> struct iommu_sva *iommu_sva_bind_device(struct device *dev, struct mm_struct *mm)
> {
> + struct iommu_group *group =
> From: Lu Baolu
> Sent: Monday, May 27, 2024 12:05 PM
>
> @@ -99,7 +99,9 @@ struct iommu_sva *iommu_sva_bind_device(struct device *dev, struct mm_struct *mm
>
> /* Search for an existing domain. */
> list_for_each_entry(domain, &mm->iommu_mm->sva_domains, next) {
> - r
> From: Jason Gunthorpe
> Sent: Friday, May 24, 2024 10:25 PM
>
> On Mon, May 20, 2024 at 03:39:54AM +, Tian, Kevin wrote:
> > > From: Baolu Lu
> > > Sent: Monday, May 20, 2024 10:19 AM
> > >
> > > On 5/15/24 4:50 PM, Tian, Kevin wrote:
> From: Jason Gunthorpe
> Sent: Friday, May 24, 2024 10:15 PM
>
> On Mon, May 20, 2024 at 04:59:18AM +, Tian, Kevin wrote:
> > > From: Baolu Lu
> > > Sent: Monday, May 20, 2024 11:33 AM
> > >
> > > On 5/20/24 11:24 AM, Tian, Kevin wrote:
>
> From: Baolu Lu
> Sent: Monday, May 20, 2024 11:33 AM
>
> On 5/20/24 11:24 AM, Tian, Kevin wrote:
> >> From: Baolu Lu
> >> Sent: Sunday, May 19, 2024 10:38 PM
> >>
> >> On 2024/5/15 15:43, Tian, Kevin wrote:
> >>>>
> From: Baolu Lu
> Sent: Monday, May 20, 2024 10:19 AM
>
> On 5/15/24 4:50 PM, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Tuesday, April 30, 2024 10:57 PM
> >>
> >> @@ -308,6 +314,19 @@ int iommufd_hwpt_alloc(struct iommufd_
> From: Baolu Lu
> Sent: Monday, May 20, 2024 10:10 AM
>
> On 5/15/24 4:43 PM, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Tuesday, April 30, 2024 10:57 PM
> >> +
> >> +int iommufd_fault_domain_replace_dev(struct iommufd_device *i
> From: Baolu Lu
> Sent: Monday, May 20, 2024 9:34 AM
>
> On 5/15/24 4:37 PM, Tian, Kevin wrote:
> >> +
> >> + iopf_group_response(group, response.code);
> > PCIe spec states that a response failure disables the PRI interface. For SR-IOV
> From: Baolu Lu
> Sent: Monday, May 20, 2024 8:41 AM
>
> On 5/15/24 3:57 PM, Tian, Kevin wrote:
> >> From: Baolu Lu
> >> Sent: Wednesday, May 8, 2024 6:05 PM
> >>
> >> On 2024/5/8 8:11, Jason Gunthorpe wrote:
> >>> On Tue, Apr 30, 2
> From: Baolu Lu
> Sent: Sunday, May 19, 2024 10:38 PM
>
> On 2024/5/15 15:43, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Tuesday, April 30, 2024 10:57 PM
> >>
> >> iommu_hwpt_pgfaults represent fault messages that the userspace can
>
> From: Baolu Lu
> Sent: Sunday, May 19, 2024 10:04 PM
>
> On 2024/5/15 15:31, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Tuesday, April 30, 2024 10:57 PM
> >>
> >> + handle = iommu_attach_handle_get(dev->iommu_group, pasid, 0);
> From: Baolu Lu
> Sent: Sunday, May 19, 2024 6:14 PM
>
> On 5/15/24 3:21 PM, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Tuesday, April 30, 2024 10:57 PM
> >>
> >> #else
> >> -static inline struct iommu_sva *
> >> +stati
> From: Lu Baolu
> Sent: Tuesday, April 30, 2024 10:57 PM
>
> @@ -227,7 +233,7 @@ iommufd_hwpt_nested_alloc(struct iommufd_ctx *ictx,
> refcount_inc(&parent->common.obj.users);
> hwpt_nested->parent = parent;
>
> - hwpt->domain = ops->domain_alloc_user(idev->dev, flags,
> +
> From: Lu Baolu
> Sent: Tuesday, April 30, 2024 10:57 PM
> +
> +int iommufd_fault_domain_replace_dev(struct iommufd_device *idev,
> + struct iommufd_hw_pagetable *hwpt,
> + struct iommufd_hw_pagetable *old)
> +{
> + struct iomm
> From: Lu Baolu
> Sent: Tuesday, April 30, 2024 10:57 PM
>
> @@ -131,6 +131,9 @@ struct iopf_group {
> struct iommu_attach_handle *attach_handle;
> /* The device's fault data parameter. */
> struct iommu_fault_param *fault_param;
> + /* Used by handler provider to hook the
> From: Baolu Lu
> Sent: Wednesday, May 8, 2024 6:05 PM
>
> On 2024/5/8 8:11, Jason Gunthorpe wrote:
> > On Tue, Apr 30, 2024 at 10:57:06PM +0800, Lu Baolu wrote:
> >> diff --git a/drivers/iommu/iommu-priv.h b/drivers/iommu/iommu-priv.h
> >> index ae65e0b85d69..1a0450a83bd0 100644
> >> --- a/driv
> From: Lu Baolu
> Sent: Tuesday, April 30, 2024 10:57 PM
>
> iommu_hwpt_pgfaults represent fault messages that the userspace can
> retrieve. Multiple iommu_hwpt_pgfaults might be put in an iopf group,
> with the IOMMU_PGFAULT_FLAGS_LAST_PAGE flag set only for the last
> iommu_hwpt_pgfault.
Do y
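A minimal userspace sketch of consuming one group under that convention (assuming the read()/write() fault-fd interface discussed in this series; the cookie/response field layout changed between revisions, so the pairing field here is an assumption):

	static int handle_fault_group(int fault_fd)
	{
		struct iommu_hwpt_pgfault pf;
		struct iommu_hwpt_page_response resp = {
			.code = IOMMUFD_PAGE_RESP_SUCCESS,
		};

		/*
		 * Records of one group arrive back to back; only the last one
		 * carries IOMMU_PGFAULT_FLAGS_LAST_PAGE.
		 */
		do {
			if (read(fault_fd, &pf, sizeof(pf)) != sizeof(pf))
				return -1;
			/* ... resolve pf.addr with pf.perm in the guest tables ... */
		} while (!(pf.flags & IOMMU_PGFAULT_FLAGS_LAST_PAGE));

		resp.cookie = pf.cookie;	/* assumed group-pairing field */
		return write(fault_fd, &resp, sizeof(resp)) == sizeof(resp) ? 0 : -1;
	}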
> From: Lu Baolu
> Sent: Tuesday, April 30, 2024 10:57 PM
>
> Previously, the domain that a page fault targets is stored in an
> iopf_group, which represents a minimal set of page faults. With the
> introduction of attachment handle, replace the domain with the handle
It's better to use 'attach
> From: Jason Gunthorpe
> Sent: Saturday, May 11, 2024 12:29 AM
>
> On Fri, May 10, 2024 at 10:30:10PM +0800, Baolu Lu wrote:
>
> > diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> > index 35ae9a6f73d3..09b4e671dcee 100644
> > --- a/include/linux/iommu.h
> > +++ b/include/linux/iommu
> From: Lu Baolu
> Sent: Tuesday, April 30, 2024 10:57 PM
>
> #else
> -static inline struct iommu_sva *
> +static inline struct iommu_attach_handle *
> iommu_sva_bind_device(struct device *dev, struct mm_struct *mm)
> {
> - return NULL;
> + return ERR_PTR(-ENODEV);
> }
>
this should
> From: Lu Baolu
> Sent: Tuesday, April 30, 2024 10:57 PM
>
> +/* Add an attach handle to the group's pasid array. */
> +static struct iommu_attach_handle *
> +iommu_attach_handle_set(struct iommu_domain *domain,
> + struct iommu_group *group, ioasid_t pasid)
> +{
> + stru
> From: Baolu Lu
> Sent: Sunday, April 28, 2024 6:22 PM
>
> On 2024/4/10 7:48, Jason Gunthorpe wrote:
> > On Tue, Apr 09, 2024 at 10:11:28AM +0800, Baolu Lu wrote:
> >> On 4/8/24 10:19 PM, Jason Gunthorpe wrote:
> >>> On Sat, Apr 06, 2024 at 02:09:34PM +0800, Baolu Lu wrote:
> On 4/3/24 7:59
> From: Jason Gunthorpe
> Sent: Wednesday, April 10, 2024 7:38 AM
>
> On Tue, Apr 09, 2024 at 09:53:26AM +0800, Baolu Lu wrote:
>
> >The current code base doesn't yet support PASID attach/detach/replace
> >uAPIs. Therefore, above code is safe and reasonable. However, we will
> >need
> From: Baolu Lu
> Sent: Wednesday, February 21, 2024 3:21 PM
>
> On 2024/2/21 14:49, Tian, Kevin wrote:
> >>>> +struct iopf_attach_cookie {
> >>>> +struct iommu_domain *domain;
> >>>> +struct device *dev;
> From: Baolu Lu
> Sent: Wednesday, February 21, 2024 1:53 PM
>
> On 2024/2/7 16:11, Tian, Kevin wrote:
> >> From: Lu Baolu
> >> Sent: Monday, January 22, 2024 3:39 PM
> >>
> >> There is a slight difference between iopf domains and non-iopf do
> From: Lu Baolu
> Sent: Monday, January 22, 2024 3:39 PM
>
> +
> +int iommufd_fault_iopf_handler(struct iopf_group *group)
> +{
> + struct iommufd_hw_pagetable *hwpt = group->cookie->domain->fault_data;
> + struct iommufd_fault *fault = hwpt->fault;
> +
why not directly use iommufd
> From: Lu Baolu
> Sent: Monday, January 22, 2024 3:39 PM
>
> There is a slight difference between iopf domains and non-iopf domains.
> In the latter, references to domains occur between attach and detach;
> While in the former, due to the existence of asynchronous iopf handling
> paths, referenc
> From: Jason Gunthorpe
> Sent: Thursday, November 2, 2023 8:48 PM
>
> On Thu, Oct 26, 2023 at 10:49:24AM +0800, Lu Baolu wrote:
> > Hi folks,
> >
> > This series implements the functionality of delivering IO page faults to
> > user space through the IOMMUFD framework for nested translation.
> Nes
> From: Jason Gunthorpe
> Sent: Monday, October 23, 2023 11:43 PM
>
> On Mon, Oct 23, 2023 at 09:33:23AM -0600, Alex Williamson wrote:
>
> > > Alex,
> > > Are you fine to leave the provisioning of the VF including the control
> > > of its transitional capability in the device hands as was sugges
> From: Jason Gunthorpe
> Sent: Thursday, January 19, 2023 2:01 AM
>
> These contexts are sleepable, so use the proper annotation. The GFP_ATOMIC was added mechanically in the prior patches.
>
> Signed-off-by: Jason Gunthorpe
Reviewed-by: Kevin Tian
> From: Jason Gunthorpe
> Sent: Thursday, January 19, 2023 2:01 AM
>
> Flow it down to alloc_pgtable_page() via pfn_to_dma_pte() and
> __domain_mapping().
>
> Signed-off-by: Jason Gunthorpe
Reviewed-by: Kevin Tian
> From: Jason Gunthorpe
> Sent: Thursday, January 19, 2023 2:01 AM
>
> This is eventually called by iommufd through intel_iommu_map_pages() and
> it should not be forced to atomic. Push the GFP_ATOMIC to all callers.
>
> Signed-off-by: Jason Gunthorpe
Reviewed-by: Kevin Tian
> From: Jason Gunthorpe
> Sent: Thursday, January 19, 2023 2:01 AM
>
> iommufd follows the same design as KVM and uses memory cgroups to limit
> the amount of kernel memory an iommufd file descriptor can pin down. The
> various internal data structures already use GFP_KERNEL_ACCOUNT.
>
> However,
> From: Jason Gunthorpe
> Sent: Thursday, January 19, 2023 2:01 AM
>
> Change the sg_alloc_table_from_pages() allocation that was hardwired to
> GFP_KERNEL to use the gfp parameter like the other allocations in this
> function.
>
> Auditing says this is never called from an atomic context, so it
> From: Jason Gunthorpe
> Sent: Thursday, January 19, 2023 2:01 AM
>
> Follow the pattern for iommu_map() and remove iommu_map_sg_atomic().
>
> This allows __iommu_dma_alloc_noncontiguous() to use a GFP_KERNEL
> allocation here, based on the provided gfp flags.
>
> Signed-off-by: Jason Gunthorp
> From: Jason Gunthorpe
> Sent: Thursday, January 19, 2023 2:01 AM
>
> There is only one call site and it can now just pass the GFP_ATOMIC to the
> normal iommu_map().
>
> Signed-off-by: Jason Gunthorpe
Reviewed-by: Kevin Tian
> From: Jason Gunthorpe
> Sent: Thursday, January 19, 2023 2:01 AM
>
> The internal mechanisms support this, but instead of exposing the gfp to
> the caller it wraps it into iommu_map() and iommu_map_atomic()
>
> Fix this instead of adding more variants for GFP_KERNEL_ACCOUNT.
>
> Signed-of
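The end result for callers, as a hedged example rather than a quote from the patch: the gfp is chosen at the call site, so accounted, sleeping allocations become possible:

	rc = iommu_map(domain, iova, paddr, PAGE_SIZE,
		       IOMMU_READ | IOMMU_WRITE, GFP_KERNEL_ACCOUNT);
	if (rc)
		return rc;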
> From: Jason Gunthorpe
> Sent: Tuesday, January 17, 2023 9:30 PM
>
> On Tue, Jan 17, 2023 at 03:35:08AM +, Tian, Kevin wrote:
> > > From: Jason Gunthorpe
> > > Sent: Saturday, January 7, 2023 12:43 AM
> > >
> > > @@ -2676,7 +2676,7 @@ st
> From: Jason Gunthorpe
> Sent: Saturday, January 7, 2023 12:43 AM
>
> @@ -2368,7 +2372,7 @@ static int iommu_domain_identity_map(struct dmar_domain *domain,
>
> return __domain_mapping(domain, first_vpfn,
> first_vpfn, last_vpfn - first_vpfn + 1,
> -
> From: Jason Gunthorpe
> Sent: Saturday, January 7, 2023 12:43 AM
>
> @@ -2676,7 +2676,7 @@ static int copy_context_table(struct intel_iommu *iommu,
> if (!old_ce)
> goto out;
>
> - new_ce = alloc_pgtable_page(iommu->node
> From: Nicolin Chen
> Sent: Wednesday, September 21, 2022 4:24 PM
>
> Following the new rules in include/linux/iommu.h kdocs, update all drivers
> ->attach_dev callback functions to return EINVAL in the failure paths that
> are related to domain incompatibility.
>
> Also, drop adjacent error pr
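A hedged, hypothetical driver fragment (the foo_* names are made up) showing the convention these conversions follow: only domain/device incompatibility returns -EINVAL, everything else keeps its specific errno:

	static int foo_attach_dev(struct iommu_domain *domain, struct device *dev)
	{
		struct foo_domain *fdom = to_foo_domain(domain);
		struct foo_iommu *fi = dev_iommu_priv_get(dev);

		/*
		 * Incompatible domain for this device: soft failure, the
		 * caller may retry with another domain.
		 */
		if (fdom->pgtbl_fmt != fi->pgtbl_fmt)
			return -EINVAL;

		/* Real errors (OOM, hardware failure, ...) keep their errno. */
		return foo_install_ctx(fdom, fi);
	}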
> From: Nicolin Chen
> Sent: Wednesday, September 21, 2022 4:23 PM
>
>
> +/**
> + * iommu_attach_device - Attach a device to an IOMMU domain
> + * @domain: IOMMU domain to attach
> + * @dev: Device that will be attached
> + *
> + * Returns 0 on success and error code on failure
> + *
> + * Note
> From: Jason Gunthorpe
> Sent: Wednesday, September 21, 2022 2:07 AM
>
> On Tue, Sep 20, 2022 at 06:38:18AM +, Tian, Kevin wrote:
>
> > Above lacks of a conversion in intel-iommu:
> >
> > intel_iommu_attach_device()
> > if
> From: Nicolin Chen
> Sent: Thursday, September 15, 2022 3:59 PM
>
> Following the new rules in include/linux/iommu.h kdocs, EINVAL now can be
> used to indicate that domain and device are incompatible by a caller that
> treats it as a soft failure and tries attaching to another domain.
>
> Eit
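And the hypothetical caller side of that contract, mirroring the VFIO pattern (domain_list and the labels here are placeholders):

	list_for_each_entry(d, &iommu->domain_list, next) {
		ret = iommu_attach_group(d->domain, group);
		if (!ret)
			goto out_reused;	/* compatible domain found */
		if (ret != -EINVAL)
			goto out_err;		/* hard failure, stop probing */
		/* -EINVAL: incompatible, keep trying the remaining domains */
	}
	/* No compatible domain: allocate a fresh one for this group. */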
> From: Nicolin Chen
> Sent: Thursday, September 15, 2022 3:54 PM
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index 1f2cd43cf9bc..51ef42b1bd4e 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -4158,19 +4158,15 @@ static int prepare_
> From: Nicolin Chen
> Sent: Thursday, September 15, 2022 3:54 PM
>
> +/**
> + * iommu_attach_device - Attach a device to an IOMMU domain
> + * @domain: IOMMU domain to attach
> + * @dev: Device that will be attached
> + *
> + * Returns 0 on success and error code on failure
> + *
> + * Note that
> From: Nicolin Chen
> Sent: Sunday, September 11, 2022 7:36 AM
>
> On Thu, Sep 08, 2022 at 09:08:38AM -0300, Jason Gunthorpe wrote:
> > On Thu, Sep 08, 2022 at 09:30:57AM +, Tian, Kevin wrote:
>
> > > There are also cases where common kAPIs are called in t
> From: Jason Gunthorpe
> Sent: Friday, September 9, 2022 8:08 PM
>
>
> > As discussed in a side thread a note might be added to exempt calling
> > kAPI outside of the iommu driver.
>
> Sadly, not really.. The driver is responsible to sanitize this if it is
> relevant. It is the main downside of
> From: Nicolin Chen
> Sent: Friday, September 9, 2022 11:17 AM
>
> On Thu, Sep 08, 2022 at 01:14:42PM -0300, Jason Gunthorpe wrote:
>
> > > I am wondering if this can be solved by better defining what the return
> > > codes mean and adjust the call-back functions to match the definition.
> > >
> From: Tian, Kevin
> Sent: Thursday, September 8, 2022 5:31 PM
> > This mixture of error codes is the basic reason why a new code was
> > used, because none of the existing codes are used with any
> > consistency.
>
> btw I saw the policy for -EBUSY is also
> From: Jason Gunthorpe
> Sent: Thursday, September 8, 2022 8:43 AM
>
> On Wed, Sep 07, 2022 at 08:41:13PM +0100, Robin Murphy wrote:
>
> > Again, not what I was suggesting. In fact the nature of iommu_attach_group() already rules out bogus devices getting this far, so all a driver current
> From: Robin Murphy
> Sent: Thursday, June 30, 2022 4:22 PM
>
> On 2022-06-29 20:47, Nicolin Chen wrote:
> > On Fri, Jun 24, 2022 at 03:19:43PM -0300, Jason Gunthorpe wrote:
> >> On Fri, Jun 24, 2022 at 06:35:49PM +0800, Yong Wu wrote:
> >>
> > It's not used in VFIO context. "return 0" just
> From: Nicolin Chen
> Sent: Friday, June 24, 2022 4:00 AM
>
> Un-inline the domain specific logic from the attach/detach_group ops into
> two paired functions vfio_iommu_alloc_attach_domain() and
> vfio_iommu_detach_destroy_domain() that strictly deal with creating and
> destroying struct vfio_d
> From: Yong Wu
> Sent: Friday, June 24, 2022 1:39 PM
>
> On Thu, 2022-06-23 at 19:44 -0700, Nicolin Chen wrote:
> > On Fri, Jun 24, 2022 at 09:35:49AM +0800, Baolu Lu wrote:
> > > External email: Use caution opening links or attachments
> > >
> > >
> > > On 2022/6/24 04:00, Nicolin Chen wrote:
>
> From: Robin Murphy
> Sent: Wednesday, June 22, 2022 3:55 PM
>
> On 2022-06-16 23:23, Nicolin Chen wrote:
> > On Thu, Jun 16, 2022 at 06:40:14AM +, Tian, Kevin wrote:
> >
> >>> The domain->ops validation was added, as a precaution, for mixed-
> dri
> From: Nicolin Chen
> Sent: Friday, June 17, 2022 6:41 AM
>
> > ...
> > > - if (resv_msi) {
> > > + if (resv_msi && !domain->msi_cookie) {
> > > ret = iommu_get_msi_cookie(domain->domain,
> > > resv_msi_base);
> > > if (ret && ret != -ENODEV)
> > >
> From: Nicolin Chen
> Sent: Thursday, June 16, 2022 8:03 AM
>
> Un-inline the domain specific logic from the attach/detach_group ops into
> two paired functions vfio_iommu_alloc_attach_domain() and
> vfio_iommu_detach_destroy_domain() that strictly deal with creating and
> destroying struct vfio_
> From: Nicolin Chen
> Sent: Thursday, June 16, 2022 8:03 AM
>
> All devices in emulated_iommu_groups have pinned_page_dirty_scope
> set, so the update_dirty_scope in the first list_for_each_entry
> is always false. Clean it up, and move the "if update_dirty_scope"
> part from the detach_group_don
> From: Nicolin Chen
> Sent: Thursday, June 16, 2022 8:03 AM
>
> The domain->ops validation was added, as a precaution, for mixed-driver
> systems. However, at this moment only one iommu driver is possible. So
> remove it.
It's true on a physical platform. But I'm not sure whether a virtual plat
> From: Nicolin Chen
> Sent: Thursday, June 16, 2022 8:03 AM
>
> From: Jason Gunthorpe
>
> The KVM mechanism for controlling wbinvd is based on OR of the coherency
> property of all devices attached to a guest, no matter those devices are
> attached to a single domain or multiple domains.
>
>
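Conceptually (a sketch, not the kvm-vfio code itself; domain_list and the enforce_cache_coherency flag are stand-ins for whatever the caller tracks), the state KVM needs is an OR across everything attached, independent of how devices map to domains:

	bool noncoherent_dma = false;

	list_for_each_entry(d, &iommu->domain_list, next)
		noncoherent_dma |= !d->enforce_cache_coherency;

	/*
	 * wbinvd must stay available to the guest if any attached device can
	 * do non-coherent DMA.
	 */
	if (noncoherent_dma)
		kvm_arch_register_noncoherent_dma(kvm);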
> From: Nicolin Chen
> Sent: Thursday, June 16, 2022 8:03 AM
>
> Cases like VFIO wish to attach a device to an existing domain that was
> not allocated specifically from the device. This raises a condition
> where the IOMMU driver can fail the domain attach because the domain and
> device are inc
> From: Nicolin Chen
> Sent: Wednesday, June 15, 2022 4:45 AM
>
> Hi Kevin,
>
> On Wed, Jun 08, 2022 at 11:48:27PM +, Tian, Kevin wrote:
> > > > > The KVM mechanism for controlling wbinvd is only triggered during
> > > > > kvm_vfio_group_ad
> From: Jason Gunthorpe
> Sent: Wednesday, June 8, 2022 7:17 PM
>
> On Wed, Jun 08, 2022 at 08:28:03AM +, Tian, Kevin wrote:
> > > From: Nicolin Chen
> > > Sent: Monday, June 6, 2022 2:19 PM
> > >
> > > From: Jason Gunthorpe
> From: Nicolin Chen
> Sent: Monday, June 6, 2022 2:19 PM
>
> All devices in emulated_iommu_groups have pinned_page_dirty_scope
> set, so the update_dirty_scope in the first list_for_each_entry
> is always false. Clean it up, and move the "if update_dirty_scope"
> part from the detach_group_done r
> From: Nicolin Chen
> Sent: Monday, June 6, 2022 2:19 PM
>
> From: Jason Gunthorpe
>
> The KVM mechanism for controlling wbinvd is only triggered during
> kvm_vfio_group_add(), meaning it is a one-shot test done once the devices
> are setup.
It's not one-shot. kvm_vfio_update_coherency() is ca
> From: Nicolin Chen
> Sent: Monday, June 6, 2022 2:19 PM
>
> Cases like VFIO wish to attach a device to an existing domain that was
> not allocated specifically from the device. This raises a condition
> where the IOMMU driver can fail the domain attach because the domain and
> device are incompa
> From: Jason Gunthorpe
> Sent: Thursday, April 7, 2022 1:17 AM
>
> On Wed, Apr 06, 2022 at 06:10:31PM +0200, Christoph Hellwig wrote:
> > On Wed, Apr 06, 2022 at 01:06:23PM -0300, Jason Gunthorpe wrote:
> > > On Wed, Apr 06, 2022 at 05:50:56PM +0200, Christoph Hellwig wrote:
> > > > On Wed, Apr
> From: Jason Gunthorpe
> Sent: Wednesday, April 6, 2022 12:16 AM
>
> This new mechanism will replace using IOMMU_CAP_CACHE_COHERENCY and IOMMU_CACHE to control the no-snoop blocking behavior of the IOMMU.
>
> Currently only Intel and AMD IOMMUs are known to support this
> feature. They both
> From: Jason Gunthorpe
> Sent: Wednesday, April 6, 2022 3:29 AM
>
> On Tue, Apr 05, 2022 at 01:10:44PM -0600, Alex Williamson wrote:
> > On Tue, 5 Apr 2022 13:16:01 -0300
> > Jason Gunthorpe wrote:
> >
> > > dev_is_dma_coherent() is the control to determine if IOMMU_CACHE can be suppor
er arches are certainly welcome to implement
> enforce_cache_coherency(), it is not clear there is any benefit in doing
> so.
>
> After this series there are only two calls left to iommu_capable() with a
> bus argument which should help Robin's work here.
>
> T
> From: Tian, Kevin
> Sent: Wednesday, April 6, 2022 7:32 AM
>
> > From: Jason Gunthorpe
> > Sent: Wednesday, April 6, 2022 6:58 AM
> >
> > On Tue, Apr 05, 2022 at 01:50:36PM -0600, Alex Williamson wrote:
> > > >
> > > > +static bool in
> From: Jason Gunthorpe
> Sent: Wednesday, April 6, 2022 6:58 AM
>
> On Tue, Apr 05, 2022 at 01:50:36PM -0600, Alex Williamson wrote:
> > >
> > > +static bool intel_iommu_enforce_cache_coherency(struct iommu_domain *domain)
> > > +{
> > > + struct dmar_domain *dmar_domain = to_dmar_domain(domai
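For context, a hedged reconstruction of how the rest of this callback reads (approximating what was eventually merged; the locking around the check-and-set is elided here):

	static bool intel_iommu_enforce_cache_coherency(struct iommu_domain *domain)
	{
		struct dmar_domain *dmar_domain = to_dmar_domain(domain);

		if (dmar_domain->force_snooping)
			return true;

		/* Every attached device must support Snoop Control. */
		if (!domain_support_force_snooping(dmar_domain))
			return false;

		domain_set_force_snooping(dmar_domain);
		dmar_domain->force_snooping = true;
		return true;
	}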
> From: Jean-Philippe Brucker
> Sent: Wednesday, October 13, 2021 8:11 PM
>
> Support identity domains, allowing to only enable IOMMU protection for a
> subset of endpoints (those assigned to userspace, for example). Users
> may enable identity domains at compile time
> (CONFIG_IOMMU_DEFAULT_PASS
> From: Jean-Philippe Brucker
> Sent: Monday, October 18, 2021 11:24 PM
>
> On Thu, Oct 14, 2021 at 03:00:38AM +, Tian, Kevin wrote:
> > > From: Jean-Philippe Brucker
> > > Sent: Wednesday, October 13, 2021 8:11 PM
> > >
> > > Support
> From: j...@8bytes.org
> Sent: Monday, October 18, 2021 7:38 PM
>
> On Thu, Oct 14, 2021 at 03:00:38AM +, Tian, Kevin wrote:
> > I saw a concept of deferred attach in iommu core. See iommu_is_attach_deferred(). Currently this is vendor specific and I haven
> From: Tian, Kevin
> Sent: Thursday, October 14, 2021 11:25 AM
>
> > From: Jean-Philippe Brucker
> > Sent: Wednesday, October 13, 2021 8:11 PM
> >
> > The VIRTIO_IOMMU_F_BYPASS_CONFIG feature adds a new flag to the ATTACH request, that creat
> From: Jean-Philippe Brucker
> Sent: Wednesday, October 13, 2021 8:11 PM
>
> The VIRTIO_IOMMU_F_BYPASS_CONFIG feature adds a new flag to the ATTACH request, that creates a bypass domain. Use it to enable identity
> domains.
>
> When VIRTIO_IOMMU_F_BYPASS_CONFIG is not supported by the devic
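A hedged sketch of the attach request the driver issues when bypass is wanted (struct and flag names per include/uapi/linux/virtio_iommu.h; domain_id and endpoint_id are placeholders):

	struct virtio_iommu_req_attach req = {
		.head.type = VIRTIO_IOMMU_T_ATTACH,
		.domain    = cpu_to_le32(domain_id),
		.endpoint  = cpu_to_le32(endpoint_id),
		/* Ask the device to treat this domain as bypass. */
		.flags     = cpu_to_le32(VIRTIO_IOMMU_ATTACH_F_BYPASS),
	};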
> From: Auger Eric
> Sent: Monday, March 15, 2021 3:52 PM
> To: Christoph Hellwig
>
> Hi Christoph,
>
> On 3/14/21 4:58 PM, Christoph Hellwi
> From: Jacob Pan
> Sent: Thursday, March 4, 2021 2:29 AM
>
> Hi Vivek,
>
> On Fri, 15 Jan 2021 17:43:39 +0530, Vivek Gautam wrote:
>
> > From: Jean-Philippe Brucker
> >
> > Add support for tlb invalidation ops that can send invalidation
> > requests to back-end virtio-iommu when stage-1 pa
> From: Alex Williamson
> Sent: Wednesday, January 20, 2021 8:51 AM
>
> On Wed, 20 Jan 2021 00:14:49 +
> "Kasireddy, Vivek" wrote:
>
> > Hi Alex,
> >
> > > -Original Message-
> > > From: Alex Williamson
> > > Sent: Tuesday, January 19, 2021 7:40 AM
> > > To: Kasireddy, Vivek
> > >
> From: Jean-Philippe Brucker
> Sent: Saturday, February 29, 2020 1:26 AM
>
> Platforms without device-tree do not currently have a method for
> describing the vIOMMU topology. Provide a topology description embedded
> into the virtio device.
>
> Use PCI FIXUP to probe the config space early, bec
> From: Jason Wang
> Sent: Thursday, January 16, 2020 8:42 PM
>
> Hi all:
>
> Based on the comments and discussion for mdev based hardware virtio
> offloading support[1]. A different approach to support vDPA device is
> proposed in this series.
Can you point to the actual link which triggered th
> From: Jason Wang
> Sent: Friday, January 17, 2020 11:03 AM
>
>
> On 2020/1/16 11:22 PM, Jason Gunthorpe wrote:
> > On Thu, Jan 16, 2020 at 08:42:29PM +0800, Jason Wang wrote:
> >> vDPA device is a device that uses a datapath which complies with the
> >> virtio specifications with vendor specifi
> From: Jason Wang
> Sent: Wednesday, September 25, 2019 8:45 PM
>
>
> On 2019/9/25 5:09 PM, Tian, Kevin wrote:
> >> From: Jason Wang [mailto:jasow...@redhat.com]
> >> Sent: Tuesday, September 24, 2019 9:54 PM
> >>
> >> This patch implements basi