The iommu_deferred_attach() function invokes __iommu_attach_device() without
holding the group->mutex, unlike other __iommu_attach_device() callers.

Though no practical bug has been triggered so far, it would be better to
apply the same locking to this __iommu_attach_device() call, since IOMMU
drivers nowadays are more aware of the group->mutex -- some of them use the
iommu_group_mutex_assert() function, which could potentially be in the path
of an attach_dev callback invoked by __iommu_attach_device().

Also, iommu_deferred_attach() will soon need to check a new flag stored in
struct group_device. Iterating the gdev list for that requires holding the
group->mutex as well.

So, grab the mutex to guard __iommu_attach_device() like other callers.

Reviewed-by: Jason Gunthorpe <j...@nvidia.com>
Signed-off-by: Nicolin Chen <nicol...@nvidia.com>
---
 drivers/iommu/iommu.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 060ebe330ee16..1e0116bce0762 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2144,10 +2144,17 @@ EXPORT_SYMBOL_GPL(iommu_attach_device);
 
 int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
 {
-       if (dev->iommu && dev->iommu->attach_deferred)
-               return __iommu_attach_device(domain, dev);
+       /*
+        * This is called on the dma mapping fast path so avoid locking. This is
+        * racy, but we have an expectation that the driver will setup its DMAs
+        * inside probe while being single threaded to avoid racing.
+        */
+       if (!dev->iommu || !dev->iommu->attach_deferred)
+               return 0;
 
-       return 0;
+       guard(mutex)(&dev->iommu_group->mutex);
+
+       return __iommu_attach_device(domain, dev);
 }
 
 void iommu_detach_device(struct iommu_domain *domain, struct device *dev)
-- 
2.43.0