Re: [RFC PATCH 0/5] iommu: APIs for paravirtual PASID allocation

2018-11-19 Thread Yi Sun
On 18-11-19 11:36:48, Konrad Rzeszutek Wilk wrote:
> On Mon, Nov 12, 2018 at 02:44:56PM +0800, Lu Baolu wrote:
> > This adds a unified API set for global PASIDs used by the IOMMU
> > and by device drivers which depend on the IOMMU. It works for
> > drivers running on bare metal, in fully virtualized environments,
> > and in para-virtualized environments.
> > 
> 
> Are there also some QEMU patches for this in RFC state?

Yes, we already have draft code and are working on refining it.
We will send the patches out for your review. Thanks!


Re: [RFC PATCH 10/11] vfio/iommu_type1: Optimize dirty bitmap population based on iommu HWDBM

2021-02-07 Thread Yi Sun
Hi,

On 21-01-28 23:17:41, Keqian Zhu wrote:

[...]

> +static void vfio_dma_dirty_log_start(struct vfio_iommu *iommu,
> +  struct vfio_dma *dma)
> +{
> + struct vfio_domain *d;
> +
> + list_for_each_entry(d, &iommu->domain_list, next) {
> + /* Go through all domain anyway even if we fail */
> + iommu_split_block(d->domain, dma->iova, dma->size);
> + }
> +}

This should be a switch to prepare for dirty log start. Per the Intel
VT-d spec, the SLADE bit is defined in the Scalable-Mode PASID Table
Entry; it enables the Accessed/Dirty flags in second-level paging
entries. So a generic iommu interface here is better: for the Intel
iommu it enables SLADE, and for ARM it splits blocks.
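
As a rough sketch of what I mean (the callback name and signature below
are only illustrative and are not part of this patch set), the iommu
layer could dispatch to a vendor hook and let each driver decide what
"start" means:

/*
 * Illustrative only: a vendor-neutral entry point.  The Intel driver
 * could set SLADE in the scalable-mode PASID table entry from its
 * callback, while the ARM SMMU driver could split block descriptors
 * into page descriptors.  'start_dirty_log' is a hypothetical new
 * member of iommu_ops.
 */
int iommu_start_dirty_log(struct iommu_domain *domain,
                          unsigned long iova, size_t size)
{
        const struct iommu_ops *ops = domain->ops;

        if (!ops || !ops->start_dirty_log)
                return -ENODEV;

        return ops->start_dirty_log(domain, iova, size);
}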

> +
> +static void vfio_dma_dirty_log_stop(struct vfio_iommu *iommu,
> + struct vfio_dma *dma)
> +{
> + struct vfio_domain *d;
> +
> + list_for_each_entry(d, &iommu->domain_list, next) {
> + /* Go through all domain anyway even if we fail */
> + iommu_merge_page(d->domain, dma->iova, dma->size,
> +  d->prot | dma->prot);
> + }
> +}

Same as the comment above: a generic interface is needed here.

> +
> +static void vfio_iommu_dirty_log_switch(struct vfio_iommu *iommu, bool start)
> +{
> + struct rb_node *n;
> +
> + /* Split and merge even if all iommu don't support HWDBM now */
> + for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
> + struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> +
> + if (!dma->iommu_mapped)
> + continue;
> +
> + /* Go through all dma range anyway even if we fail */
> + if (start)
> + vfio_dma_dirty_log_start(iommu, dma);
> + else
> + vfio_dma_dirty_log_stop(iommu, dma);
> + }
> +}
> +
>  static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
>   unsigned long arg)
>  {
> @@ -2812,8 +2900,10 @@ static int vfio_iommu_type1_dirty_pages(struct 
> vfio_iommu *iommu,
>   pgsize = 1 << __ffs(iommu->pgsize_bitmap);
>   if (!iommu->dirty_page_tracking) {
>   ret = vfio_dma_bitmap_alloc_all(iommu, pgsize);
> - if (!ret)
> + if (!ret) {
>   iommu->dirty_page_tracking = true;
> + vfio_iommu_dirty_log_switch(iommu, true);
> + }
>   }
>   mutex_unlock(&iommu->lock);
>   return ret;
> @@ -2822,6 +2912,7 @@ static int vfio_iommu_type1_dirty_pages(struct 
> vfio_iommu *iommu,
>   if (iommu->dirty_page_tracking) {
>   iommu->dirty_page_tracking = false;
>   vfio_dma_bitmap_free_all(iommu);
> + vfio_iommu_dirty_log_switch(iommu, false);
>   }
>   mutex_unlock(&iommu->lock);
>   return 0;
> -- 
> 2.19.1


Re: [RFC PATCH 10/11] vfio/iommu_type1: Optimize dirty bitmap population based on iommu HWDBM

2021-02-09 Thread Yi Sun
On 21-02-07 18:40:36, Keqian Zhu wrote:
> Hi Yi,
> 
> On 2021/2/7 17:56, Yi Sun wrote:
> > Hi,
> > 
> > On 21-01-28 23:17:41, Keqian Zhu wrote:
> > 
> > [...]
> > 
> >> +static void vfio_dma_dirty_log_start(struct vfio_iommu *iommu,
> >> +   struct vfio_dma *dma)
> >> +{
> >> +  struct vfio_domain *d;
> >> +
> >> +  list_for_each_entry(d, &iommu->domain_list, next) {
> >> +  /* Go through all domain anyway even if we fail */
> >> +  iommu_split_block(d->domain, dma->iova, dma->size);
> >> +  }
> >> +}
> > 
> > This should be a switch to prepare for dirty log start. Per the Intel
> > VT-d spec, the SLADE bit is defined in the Scalable-Mode PASID Table
> > Entry; it enables the Accessed/Dirty flags in second-level paging
> > entries. So a generic iommu interface here is better: for the Intel
> > iommu it enables SLADE, and for ARM it splits blocks.
> Indeed, a generic interface name is better.
> 
> The vendor iommu driver performs vendor-specific actions to start the dirty
> log, and the Intel iommu and ARM smmu may differ. Besides, we may add more
> actions in the ARM smmu driver in the future.
> 
> One question: though I am not familiar with the Intel iommu, I think it
> should also split block mappings besides enabling SLADE. Right?
> 
I am not familiar with the ARM smmu. :) So I want to clarify whether a
block in the smmu is a big page, e.g. a 2MB page? Intel VT-d manages
memory per page at 4KB/2MB/1GB granularity. There are two ways to
manage dirty pages:
1. Keep the default granularity and just set SLADE to enable dirty tracking.
2. Split big pages into 4KB pages to get finer granularity.

But the question about the second solution is whether it can benefit
user space, e.g. live migration. If my understanding of the smmu block
(i.e. the big page) is correct, have you collected performance data to
show that the split improves performance? Thanks!
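
Just to make the granularity argument concrete, here is a
back-of-the-envelope comparison (a standalone user-space snippet, not
kernel code):

#include <stdio.h>

int main(void)
{
        unsigned long block = 1UL << 30;        /* 1G block mapping */
        unsigned long page  = 1UL << 12;        /* 4K page mapping  */

        /* Dirty bytes reported to userspace for a single dirtied byte. */
        printf("1G granule reports %lu bytes, 4K granule reports %lu bytes (%lux more data to migrate)\n",
               block, page, block / page);
        return 0;
}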

> Thanks,
> Keqian


Re: [RFC PATCH 10/11] vfio/iommu_type1: Optimize dirty bitmap population based on iommu HWDBM

2021-02-09 Thread Yi Sun
On 21-02-09 11:16:08, Robin Murphy wrote:
> On 2021-02-07 09:56, Yi Sun wrote:
> >Hi,
> >
> >On 21-01-28 23:17:41, Keqian Zhu wrote:
> >
> >[...]
> >
> >>+static void vfio_dma_dirty_log_start(struct vfio_iommu *iommu,
> >>+struct vfio_dma *dma)
> >>+{
> >>+   struct vfio_domain *d;
> >>+
> >>+   list_for_each_entry(d, &iommu->domain_list, next) {
> >>+   /* Go through all domain anyway even if we fail */
> >>+   iommu_split_block(d->domain, dma->iova, dma->size);
> >>+   }
> >>+}
> >
> >This should be a switch to prepare for dirty log start. Per the Intel
> >VT-d spec, the SLADE bit is defined in the Scalable-Mode PASID Table
> >Entry; it enables the Accessed/Dirty flags in second-level paging
> >entries. So a generic iommu interface here is better: for the Intel
> >iommu it enables SLADE, and for ARM it splits blocks.
> 
> From a quick look, VT-D's SLADE and SMMU's HTTU appear to be the
> exact same thing. This step isn't about enabling or disabling that
> feature itself (the proposal for SMMU is to simply leave HTTU
> enabled all the time), it's about controlling the granularity at
> which the dirty status can be detected/reported at all, since that's
> tied to the pagetable structure.
> 
> However, if an IOMMU were to come along with some other way of
> reporting dirty status that didn't depend on the granularity of
> individual mappings, then indeed it wouldn't need this operation.
> 
In my view, we can use these two start/stop interfaces to let user
space decide when to start/stop dirty tracking. For Intel SLADE, I
think we can set the bit when the start interface is called by user
space; I don't think leaving SLADE enabled all the time is necessary
for Intel VT-d. So I suggest a generic interface here. Thanks!
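
To illustrate, the vfio helper above could then look roughly like this
(iommu_start_dirty_log() is an assumed generic interface, not an
existing API; the stop side would mirror it with a corresponding
iommu_stop_dirty_log()):

static void vfio_dma_dirty_log_start(struct vfio_iommu *iommu,
                                     struct vfio_dma *dma)
{
        struct vfio_domain *d;

        list_for_each_entry(d, &iommu->domain_list, next) {
                /* Go through all domains anyway even if one fails */
                iommu_start_dirty_log(d->domain, dma->iova, dma->size);
        }
}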

> Robin.
> 
> >>+
> >>+static void vfio_dma_dirty_log_stop(struct vfio_iommu *iommu,
> >>+   struct vfio_dma *dma)
> >>+{
> >>+   struct vfio_domain *d;
> >>+
> >>+   list_for_each_entry(d, &iommu->domain_list, next) {
> >>+   /* Go through all domain anyway even if we fail */
> >>+   iommu_merge_page(d->domain, dma->iova, dma->size,
> >>+d->prot | dma->prot);
> >>+   }
> >>+}
> >
> >Same as above comment, a generic interface is required here.
> >
> >>+
> >>+static void vfio_iommu_dirty_log_switch(struct vfio_iommu *iommu, bool 
> >>start)
> >>+{
> >>+   struct rb_node *n;
> >>+
> >>+   /* Split and merge even if all iommu don't support HWDBM now */
> >>+   for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
> >>+   struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> >>+
> >>+   if (!dma->iommu_mapped)
> >>+   continue;
> >>+
> >>+   /* Go through all dma range anyway even if we fail */
> >>+   if (start)
> >>+   vfio_dma_dirty_log_start(iommu, dma);
> >>+   else
> >>+   vfio_dma_dirty_log_stop(iommu, dma);
> >>+   }
> >>+}
> >>+
> >>  static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
> >>unsigned long arg)
> >>  {
> >>@@ -2812,8 +2900,10 @@ static int vfio_iommu_type1_dirty_pages(struct 
> >>vfio_iommu *iommu,
> >>pgsize = 1 << __ffs(iommu->pgsize_bitmap);
> >>if (!iommu->dirty_page_tracking) {
> >>ret = vfio_dma_bitmap_alloc_all(iommu, pgsize);
> >>-   if (!ret)
> >>+   if (!ret) {
> >>iommu->dirty_page_tracking = true;
> >>+   vfio_iommu_dirty_log_switch(iommu, true);
> >>+   }
> >>}
> >>mutex_unlock(&iommu->lock);
> >>return ret;
> >>@@ -2822,6 +2912,7 @@ static int vfio_iommu_type1_dirty_pages(struct 
> >>vfio_iommu *iommu,
> >>if (iommu->dirty_page_tracking) {
> >>iommu->dirty_page_tracking = false;
> >>vfio_dma_bitmap_free_all(iommu);
> >>+   vfio_iommu_dirty_log_switch(iommu, false);
> >>}
> >>mutex_unlock(&iommu->lock);
> >>return 0;
> >>-- 
> >>2.19.1


Re: [PATCH v2 04/11] iommu/arm-smmu-v3: Split block descriptor when start dirty log

2021-03-16 Thread Yi Sun
On 21-03-10 17:06:07, Keqian Zhu wrote:
> From: jiangkunkun 
> 
> A block descriptor is not a proper granule for dirty log tracking.
> Take an extreme example: if DMA writes one byte, under a 1G mapping
> the dirty amount reported to userspace is 1G, but under a 4K mapping
> the dirty amount is just 4K.
> 
> This adds a new interface named start_dirty_log in the iommu layer and
> arm smmuv3 implements it, which splits a block descriptor into a span
> of page descriptors. Other types of IOMMU will perform architecture-
> specific actions to start the dirty log.
> 
> To allow code reuse, the split_block operation is realized as an
> iommu_ops too. We flush all iotlbs after the whole procedure is
> completed to ease the pressure on the iommu, as we will handle a huge
> range of mappings in general.
> 
> Splitting blocks does not work simultaneously with other pgtable ops;
> as the only designed user is vfio, which always holds a lock, race
> conditions are not considered in the pgtable ops.
> 
> Co-developed-by: Keqian Zhu 
> Signed-off-by: Kunkun Jiang 
> ---
> 
> changelog:
> 
> v2:
>  - Change the return type of split_block(). size_t -> int.
>  - Change commit message to properly describe race condition. (Robin)
>  - Change commit message to properly describe the need of split block.
>  - Add a new interface named start_dirty_log(). (Sun Yi)
>  - Change commit message to explain the relationship of split_block() and
>    start_dirty_log().
> 
> ---
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c |  52 +
>  drivers/iommu/io-pgtable-arm.c  | 122 
>  drivers/iommu/iommu.c   |  48 
>  include/linux/io-pgtable.h  |   2 +
>  include/linux/iommu.h   |  24 
>  5 files changed, 248 insertions(+)
> 
Could you please split the iommu common interface into a separate patch?
That would make review and commenting easier.

IMHO, the start/stop interfaces could be merged into one, e.g.:

int iommu_domain_set_hwdbm(struct iommu_domain *domain, bool enable,
                           unsigned long iova, size_t size, int prot);
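
A rough sketch of how the single interface could be used from the vfio
side (iommu_domain_set_hwdbm() is only the name proposed above and does
not exist yet):

/* Sketch only: one enable/disable knob instead of separate start/stop. */
static void vfio_dma_dirty_log_switch(struct vfio_iommu *iommu,
                                      struct vfio_dma *dma, bool enable)
{
        struct vfio_domain *d;

        list_for_each_entry(d, &iommu->domain_list, next)
                iommu_domain_set_hwdbm(d->domain, enable, dma->iova,
                                       dma->size, d->prot | dma->prot);
}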

Same comments to patch 5.

BRs,
Yi Sun

> -- 
> 2.19.1


Re: [PATCH v2 04/11] iommu/arm-smmu-v3: Split block descriptor when start dirty log

2021-03-16 Thread Yi Sun
On 21-03-16 19:39:47, Keqian Zhu wrote:
> Hi Yi,
> 
> On 2021/3/16 17:17, Yi Sun wrote:
> > On 21-03-10 17:06:07, Keqian Zhu wrote:
> >> From: jiangkunkun 
> >>
> >> A block descriptor is not a proper granule for dirty log tracking.
> >> Take an extreme example: if DMA writes one byte, under a 1G mapping
> >> the dirty amount reported to userspace is 1G, but under a 4K mapping
> >> the dirty amount is just 4K.
> >>
> >> This adds a new interface named start_dirty_log in the iommu layer and
> >> arm smmuv3 implements it, which splits a block descriptor into a span
> >> of page descriptors. Other types of IOMMU will perform architecture-
> >> specific actions to start the dirty log.
> >>
> >> To allow code reuse, the split_block operation is realized as an
> >> iommu_ops too. We flush all iotlbs after the whole procedure is
> >> completed to ease the pressure on the iommu, as we will handle a huge
> >> range of mappings in general.
> >>
> >> Splitting blocks does not work simultaneously with other pgtable ops;
> >> as the only designed user is vfio, which always holds a lock, race
> >> conditions are not considered in the pgtable ops.
> >>
> >> Co-developed-by: Keqian Zhu 
> >> Signed-off-by: Kunkun Jiang 
> >> ---
> >>
> >> changelog:
> >>
> >> v2:
> >>  - Change the return type of split_block(). size_t -> int.
> >>  - Change commit message to properly describe race condition. (Robin)
> >>  - Change commit message to properly describe the need of split block.
> >>  - Add a new interface named start_dirty_log(). (Sun Yi)
> >>  - Change commit message to explain the relationship of split_block() and
> >>    start_dirty_log().
> >>
> >> ---
> >>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c |  52 +
> >>  drivers/iommu/io-pgtable-arm.c  | 122 
> >>  drivers/iommu/iommu.c   |  48 
> >>  include/linux/io-pgtable.h  |   2 +
> >>  include/linux/iommu.h   |  24 
> >>  5 files changed, 248 insertions(+)
> >>
> > Could you please split the iommu common interface into a separate patch?
> > That would make review and commenting easier.
> Yup, good suggestion.
> 
> > 
> > IMHO, the start/stop interfaces could be merged into one, e.g.:
> > 
> > int iommu_domain_set_hwdbm(struct iommu_domain *domain, bool enable,
> >                            unsigned long iova, size_t size, int prot);
> Looks good, this reduces some code. But I have a concern that this causes a
> loss of flexibility, as we must pass the same arguments when starting and
> stopping the dirty log. What's your opinion about this?
> 
In the current design, the start/stop interfaces take similar
arguments, so I think it is OK for now. For future extension, we could
define a structure to pass these arguments.
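
For example, something along these lines could carry the arguments
(purely illustrative, not proposed code):

/* Illustrative only: bundle the dirty-log arguments so that future
 * fields (e.g. a PASID) can be added without touching every caller.
 */
struct iommu_hwdbm_args {
        unsigned long   iova;
        size_t          size;
        int             prot;
};

int iommu_domain_set_hwdbm(struct iommu_domain *domain, bool enable,
                           const struct iommu_hwdbm_args *args);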

> > 
> > Same comments to patch 5.
> OK. Thanks.
> 
> > 
> > BRs,
> > Yi Sun
> > 
> >> -- 
> >> 2.19.1
> > .
> Thanks,
> Keqian

BRs,
Yi Sun


Re: [PATCH v2 06/11] iommu/arm-smmu-v3: Scan leaf TTD to sync hardware dirty log

2021-03-17 Thread Yi Sun
On 21-03-10 17:06:09, Keqian Zhu wrote:
> From: jiangkunkun 
> 
> During dirty log tracking, the user will try to retrieve the dirty
> log from the iommu if it supports a hardware dirty log.
> 
> This adds a new interface named sync_dirty_log in the iommu layer and
> arm smmuv3 implements it, which scans leaf TTDs and treats one as
> dirty if it is writable (as we only enable HTTU for stage 1, we check
> whether AP[2] is not set).
> 
> Co-developed-by: Keqian Zhu 
> Signed-off-by: Kunkun Jiang 
> ---
> 
> changelog:
> 
> v2:
>  - Add new sanity check in arm_smmu_sync_dirty_log(). (smmu_domain->stage != 
> ARM_SMMU_DOMAIN_S1)
>  - Document the purpose of flush_iotlb in arm_smmu_sync_dirty_log(). (Robin)
>  
> ---
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 30 +++
>  drivers/iommu/io-pgtable-arm.c  | 90 +
>  drivers/iommu/iommu.c   | 38 +
>  include/linux/io-pgtable.h  |  4 +
>  include/linux/iommu.h   | 18 +
>  5 files changed, 180 insertions(+)
> 
Please split the iommu common interface out into a separate patch. Thanks!

[...]

> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 2a10294b62a3..44dfb78f9050 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -2850,6 +2850,44 @@ int iommu_stop_dirty_log(struct iommu_domain *domain, unsigned long iova,
>  }
>  EXPORT_SYMBOL_GPL(iommu_stop_dirty_log);
>  
> +int iommu_sync_dirty_log(struct iommu_domain *domain, unsigned long iova,
> +                        size_t size, unsigned long *bitmap,
> +                        unsigned long base_iova, unsigned long bitmap_pgshift)

One open question: shall we add a PASID parameter so that the iommu
knows which address space to walk?

For live migration, the PASID should not be necessary. But considering
future extensions, it may be required.
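
For reference, the extension I am asking about would look roughly like
the following; the extra parameter is hypothetical and not part of this
patch:

/* Hypothetical: let the caller select which address space to scan,
 * e.g. the second-stage tables or a specific PASID's first-stage
 * tables, when syncing the dirty log.
 */
int iommu_sync_dirty_log(struct iommu_domain *domain, ioasid_t pasid,
                         unsigned long iova, size_t size,
                         unsigned long *bitmap, unsigned long base_iova,
                         unsigned long bitmap_pgshift);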

BRs,
Yi Sun


[PATCH v1 0/3] iommu: Fix a few issues related to PRQ

2020-10-27 Thread Yi Sun
We found a few issues with PRQ handling, so three patches are cooked
to fix them. Please review. Thanks!

Jacob Pan (1):
  iommu: Fix an issue in iommu_page_response() flags check

Liu Yi L (1):
  iommu/vt-d: Fix prq reporting issues

Liu, Yi L (1):
  iommu/vt-d: Fix a bug for PDP check in prq_event_thread

 drivers/iommu/intel/svm.c | 5 -
 drivers/iommu/iommu.c | 6 --
 2 files changed, 8 insertions(+), 3 deletions(-)

-- 
2.7.4



[PATCH v1 3/3] iommu/vt-d: Fix a bug for PDP check in prq_event_thread

2020-10-27 Thread Yi Sun
From: "Liu, Yi L" 

Signed-off-by: Liu, Yi L 
Signed-off-by: Yi Sun 
---
 drivers/iommu/intel/svm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index 75d9dc9..1870248 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -1032,7 +1032,7 @@ static irqreturn_t prq_event_thread(int irq, void *d)
                resp.qw0 = QI_PGRP_PASID(req->pasid) |
                        QI_PGRP_DID(req->rid) |
                        QI_PGRP_PASID_P(req->pasid_present) |
-                       QI_PGRP_PDP(req->pasid_present) |
+                       QI_PGRP_PDP(req->priv_data_present) |
                        QI_PGRP_RESP_CODE(result) |
                        QI_PGRP_RESP_TYPE;
                resp.qw1 = QI_PGRP_IDX(req->prg_index) |
-- 
2.7.4



[PATCH v1 1/3] iommu/vt-d: Fix prq reporting issues

2020-10-27 Thread Yi Sun
From: Liu Yi L 

We should get the correct sid and set it in sdev, because
'sdev->sid != req->rid' is evaluated in the loop of prq_event_thread().
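
For context, the match that relies on sdev->sid looks roughly like this
(paraphrased, not a verbatim excerpt of prq_event_thread()):

        /* The page request's requester ID is matched against each bound
         * device, so sdev->sid must hold a valid value.
         */
        list_for_each_entry_rcu(sdev, &svm->devs, list) {
                if (sdev->sid == req->rid)
                        break;
        }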

Signed-off-by: Liu Yi L 
Signed-off-by: Yi Sun 
---
 drivers/iommu/intel/svm.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index f1861fa..75d9dc9 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -277,6 +277,7 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev,
                          struct iommu_gpasid_bind_data *data)
 {
        struct intel_iommu *iommu = device_to_iommu(dev, NULL, NULL);
+       struct device_domain_info *info;
        struct intel_svm_dev *sdev = NULL;
        struct dmar_domain *dmar_domain;
        struct intel_svm *svm = NULL;
@@ -357,6 +358,8 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev,
                goto out;
        }
        sdev->dev = dev;
+       info = get_domain_info(dev);
+       sdev->sid = PCI_DEVID(info->bus, info->devfn);
 
        /* Only count users if device has aux domains */
        if (iommu_dev_feature_enabled(dev, IOMMU_DEV_FEAT_AUX))
-- 
2.7.4



[PATCH v1 2/3] iommu: Fix an issue in iommu_page_response() flags check

2020-10-27 Thread Yi Sun
From: Jacob Pan 

The original code fails when LAST_PAGE is set in flags.

Signed-off-by: Jacob Pan 
Signed-off-by: Liu Yi L 
Signed-off-by: Yi Sun 
---
 drivers/iommu/iommu.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 8c470f4..053cec3 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1200,9 +1200,11 @@ int iommu_page_response(struct device *dev,
                return -EINVAL;
 
        if (msg->version != IOMMU_PAGE_RESP_VERSION_1 ||
-           msg->flags & ~IOMMU_PAGE_RESP_PASID_VALID)
+           !(msg->flags & IOMMU_PAGE_RESP_PASID_VALID)) {
+               dev_warn_ratelimited(dev, "%s:Invalid ver %x: flags %x\n",
+                               __func__, msg->version, msg->flags);
                return -EINVAL;
-
+       }
        /* Only send response if there is a fault report pending */
        mutex_lock(&param->fault_param->lock);
        if (list_empty(&param->fault_param->faults)) {
-- 
2.7.4



Re: [PATCH v1 1/3] iommu/vt-d: Fix prq reporting issues

2020-10-27 Thread Yi Sun
Thanks! I will make the modifications according to your comments in
the next version.

On 20-10-28 12:50:37, Lu Baolu wrote:
> Hi Yi,
> 
> On 10/28/20 9:36 AM, Yi Sun wrote:
> >From: Liu Yi L 
> >
> >We should get the correct sid and set it in sdev, because
> >'sdev->sid != req->rid' is evaluated in the loop of prq_event_thread().
> 
> How about making the title more accurate? Something like:
> 
> iommu/vt-d: Fix use before set issue in intel_svm_bind_gpasid()
> 
> >
> >Signed-off-by: Liu Yi L 
> >Signed-off-by: Yi Sun 
> >---
> >  drivers/iommu/intel/svm.c | 3 +++
> >  1 file changed, 3 insertions(+)
> >
> >diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
> >index f1861fa..75d9dc9 100644
> >--- a/drivers/iommu/intel/svm.c
> >+++ b/drivers/iommu/intel/svm.c
> >@@ -277,6 +277,7 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, 
> >struct device *dev,
> >   struct iommu_gpasid_bind_data *data)
> >  {
> > struct intel_iommu *iommu = device_to_iommu(dev, NULL, NULL);
> >+struct device_domain_info *info;
> > struct intel_svm_dev *sdev = NULL;
> > struct dmar_domain *dmar_domain;
> > struct intel_svm *svm = NULL;
> >@@ -357,6 +358,8 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, 
> >struct device *dev,
> > goto out;
> > }
> > sdev->dev = dev;
> >+info = get_domain_info(dev);
> >+sdev->sid = PCI_DEVID(info->bus, info->devfn);
> > /* Only count users if device has aux domains */
> > if (iommu_dev_feature_enabled(dev, IOMMU_DEV_FEAT_AUX))
> >
> 
> How about moving get_domain_info() up to the sanity check part?
> 
> diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
> index f1861fa3d0e4..75846692f2f2 100644
> --- a/drivers/iommu/intel/svm.c
> +++ b/drivers/iommu/intel/svm.c
> @@ -279,6 +279,7 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev,
>       struct intel_iommu *iommu = device_to_iommu(dev, NULL, NULL);
>       struct intel_svm_dev *sdev = NULL;
>       struct dmar_domain *dmar_domain;
> +     struct device_domain_info *info;
>       struct intel_svm *svm = NULL;
>       int ret = 0;
> 
> @@ -310,6 +311,10 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev,
>       if (data->hpasid <= 0 || data->hpasid >= PASID_MAX)
>               return -EINVAL;
> 
> +     info = get_domain_info(dev);
> +     if (!info)
> +             return -EINVAL;
> +
>       dmar_domain = to_dmar_domain(domain);
> 
>       mutex_lock(&pasid_mutex);
> @@ -357,6 +362,7 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev,
>               goto out;
>       }
>       sdev->dev = dev;
> +     sdev->sid = PCI_DEVID(info->bus, info->devfn);
> 
>       /* Only count users if device has aux domains */
>       if (iommu_dev_feature_enabled(dev, IOMMU_DEV_FEAT_AUX))
> 
> Best regards,
> baolu


Re: [PATCH v1 3/3] iommu/vt-d: Fix a bug for PDP check in prq_event_thread

2020-10-27 Thread Yi Sun
On 20-10-28 13:05:05, Lu Baolu wrote:
> Hi Yi,
> 
> On 10/28/20 9:36 AM, Yi Sun wrote:
> >From: "Liu, Yi L" 
> 
> Can you please add some description here? How far should this patch be
> backported? A Fixes tag?
> 
Sure. I will add a description and a Fixes tag.

> Best regards,
> baolu
> 
> >
> >Signed-off-by: Liu, Yi L 
> >Signed-off-by: Yi Sun 
> >---
> >  drivers/iommu/intel/svm.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> >diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
> >index 75d9dc9..1870248 100644
> >--- a/drivers/iommu/intel/svm.c
> >+++ b/drivers/iommu/intel/svm.c
> >@@ -1032,7 +1032,7 @@ static irqreturn_t prq_event_thread(int irq, void *d)
> > resp.qw0 = QI_PGRP_PASID(req->pasid) |
> > QI_PGRP_DID(req->rid) |
> > QI_PGRP_PASID_P(req->pasid_present) |
> >-QI_PGRP_PDP(req->pasid_present) |
> >+QI_PGRP_PDP(req->priv_data_present) |
> > QI_PGRP_RESP_CODE(result) |
> > QI_PGRP_RESP_TYPE;
> > resp.qw1 = QI_PGRP_IDX(req->prg_index) |
> >


Re: [PATCH v1 2/3] iommu: Fix an issue in iommu_page_response() flags check

2020-10-28 Thread Yi Sun
On 20-10-28 10:13:56, Jean-Philippe Brucker wrote:
> Hi,
> 
> On Wed, Oct 28, 2020 at 09:36:57AM +0800, Yi Sun wrote:
> > From: Jacob Pan 
> > 
> > The original code fails when LAST_PAGE is set in flags.
> 
> LAST_PAGE is not documented to be a valid flag for page_response.
> So isn't failing the right thing to do?
> 
> > 
> > Signed-off-by: Jacob Pan 
> > Signed-off-by: Liu Yi L 
> > Signed-off-by: Yi Sun 
> > ---
> >  drivers/iommu/iommu.c | 6 --
> >  1 file changed, 4 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> > index 8c470f4..053cec3 100644
> > --- a/drivers/iommu/iommu.c
> > +++ b/drivers/iommu/iommu.c
> > @@ -1200,9 +1200,11 @@ int iommu_page_response(struct device *dev,
> > return -EINVAL;
> >  
> > if (msg->version != IOMMU_PAGE_RESP_VERSION_1 ||
> > -   msg->flags & ~IOMMU_PAGE_RESP_PASID_VALID)
> > +   !(msg->flags & IOMMU_PAGE_RESP_PASID_VALID)) {
> 
> It should be OK not to have PASID_VALID set; we're just checking for
> undefined flags here.
> 
Thanks! You are right. Per the published spec, we should not care about
LAST_PAGE for page_response. I will remove this patch in the next
version.
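
To make the distinction concrete, the two checks behave differently
(both forms are taken from the diff above):

        /* Original check: reject any *undefined* flag bits. */
        if (msg->flags & ~IOMMU_PAGE_RESP_PASID_VALID)
                return -EINVAL;

        /* Patch's check: require PASID_VALID, which also rejects
         * responses that legitimately carry no PASID.
         */
        if (!(msg->flags & IOMMU_PAGE_RESP_PASID_VALID))
                return -EINVAL;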
 
> Thanks,
> Jean
> 
> > +   dev_warn_ratelimited(dev, "%s:Invalid ver %x: flags %x\n",
> > +   __func__, msg->version, msg->flags);
> > return -EINVAL;
> > -
> > +   }
> > /* Only send response if there is a fault report pending */
> > mutex_lock(&param->fault_param->lock);
> > if (list_empty(&param->fault_param->faults)) {
> > -- 
> > 2.7.4
> > 


[PATCH v2 0/2] iommu: Fix a few issues related to PRQ

2020-10-29 Thread Yi Sun
We found a few issues with PRQ handling, so two patches are cooked to
fix them. Please review. Thanks!

Changes from v1:

- Modify subject of patch 1 to make it more accurate.
- Move get_domain_info() up to the sanity check part in patch 1.
- Remove v1 patch 2 which is not suitable.
- Add description for current patch 2.
- Add 'Fixes:' tags for all patches.

Liu Yi L (1):
  iommu/vt-d: Fix sid not set issue in intel_svm_bind_gpasid()

Liu, Yi L (1):
  iommu/vt-d: Fix a bug for PDP check in prq_event_thread

 drivers/iommu/intel/svm.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

-- 
2.7.4



[PATCH v2 2/2] iommu/vt-d: Fix a bug for PDP check in prq_event_thread

2020-10-29 Thread Yi Sun
From: "Liu, Yi L" 

In prq_event_thread(), QI_PGRP_PDP is wrongly set from
'req->pasid_present'; it should be set from
'req->priv_data_present'.

Fixes: 5b438f4ba315 ("iommu/vt-d: Support page request in scalable mode")
Signed-off-by: Liu, Yi L 
Signed-off-by: Yi Sun 
---
 drivers/iommu/intel/svm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index 7584669..3242ebd 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -1035,7 +1035,7 @@ static irqreturn_t prq_event_thread(int irq, void *d)
                resp.qw0 = QI_PGRP_PASID(req->pasid) |
                        QI_PGRP_DID(req->rid) |
                        QI_PGRP_PASID_P(req->pasid_present) |
-                       QI_PGRP_PDP(req->pasid_present) |
+                       QI_PGRP_PDP(req->priv_data_present) |
                        QI_PGRP_RESP_CODE(result) |
                        QI_PGRP_RESP_TYPE;
                resp.qw1 = QI_PGRP_IDX(req->prg_index) |
-- 
2.7.4



[PATCH v2 1/2] iommu/vt-d: Fix sid not set issue in intel_svm_bind_gpasid()

2020-10-29 Thread Yi Sun
From: Liu Yi L 

We should get the correct sid and set it in sdev, because
'sdev->sid != req->rid' is evaluated in the loop of prq_event_thread().

Fixes: eb8d93ea3c1d ("iommu/vt-d: Report page request faults for guest SVA")
Signed-off-by: Liu Yi L 
Signed-off-by: Yi Sun 
---
 drivers/iommu/intel/svm.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index f1861fa..7584669 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -279,6 +279,7 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev,
        struct intel_iommu *iommu = device_to_iommu(dev, NULL, NULL);
        struct intel_svm_dev *sdev = NULL;
        struct dmar_domain *dmar_domain;
+       struct device_domain_info *info;
        struct intel_svm *svm = NULL;
        int ret = 0;
 
@@ -310,6 +311,10 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev,
        if (data->hpasid <= 0 || data->hpasid >= PASID_MAX)
                return -EINVAL;
 
+       info = get_domain_info(dev);
+       if (!info)
+               return -EINVAL;
+
        dmar_domain = to_dmar_domain(domain);
 
        mutex_lock(&pasid_mutex);
@@ -357,6 +362,7 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev,
                goto out;
        }
        sdev->dev = dev;
+       sdev->sid = PCI_DEVID(info->bus, info->devfn);
 
        /* Only count users if device has aux domains */
        if (iommu_dev_feature_enabled(dev, IOMMU_DEV_FEAT_AUX))
-- 
2.7.4
