There's an off-by-one bug in function __domain_mapping(), which may trigger the BUG_ON(nr_pages < lvl_pages) when (nr_pages + 1) & superpage_mask == 0.
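For illustration, here is a minimal standalone sketch of the failing arithmetic (assuming 4KiB base pages and a 2MiB superpage; pick_lvl_pages() below is a simplified stand-in for the kernel's superpage size selection, with pfn alignment checks omitted):

/* Standalone sketch, not kernel code: models how sg_res = nr_pages + 1
 * can select a page size larger than the number of pages requested. */
#include <stdio.h>

#define SUPERPAGE_NR_PAGES 512UL	/* 2MiB / 4KiB, assumed for the example */

/* Simplified stand-in for the kernel's size selection: use a 2MiB
 * superpage whenever a whole superpage's worth of pages remains. */
static unsigned long pick_lvl_pages(unsigned long pages_left)
{
	return pages_left >= SUPERPAGE_NR_PAGES ? SUPERPAGE_NR_PAGES : 1;
}

int main(void)
{
	unsigned long nr_pages = 511;		/* caller maps 511 pages */
	unsigned long sg_res = nr_pages + 1;	/* the buggy "+1": 512 */
	unsigned long lvl_pages = pick_lvl_pages(sg_res);

	/* (nr_pages + 1) & superpage_mask == 0 here: 512 & 511 == 0 */
	if (nr_pages < lvl_pages)
		printf("BUG_ON(nr_pages < lvl_pages): %lu < %lu\n",
		       nr_pages, lvl_pages);

	sg_res = nr_pages;			/* with the fix applied */
	lvl_pages = pick_lvl_pages(sg_res);
	printf("fixed: nr_pages=%lu lvl_pages=%lu, no BUG_ON\n",
	       nr_pages, lvl_pages);
	return 0;
}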
The issue was introduced by commit 9051aa0268dc ("intel-iommu: Combine
domain_pfn_mapping() and domain_sg_mapping()"), which sets sg_res to
"nr_pages + 1" to avoid some of the 'sg_res==0' code paths.

It's safe to remove the extra "+1" because sg_res is only used to
calculate page size now.

Reported-And-Tested-by: Sudeep Dutt <sudeep.d...@intel.com>
Signed-off-by: Jiang Liu <jiang....@linux.intel.com>
Cc: <sta...@vger.kernel.org> # 3.1
---
Hi David and Joerg,
	This issue was introduced in v2.6.31, but intel-iommu.c has been
moved into drivers/iommu in v3.1. So what's the preferred way to deal
with stable kernels between v2.6.31 and v3.1?
Thanks!
Gerry
---
 drivers/iommu/intel-iommu.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index a27d6cb1a793..b26ad10ec697 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1983,7 +1983,7 @@ static int __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
 {
 	struct dma_pte *first_pte = NULL, *pte = NULL;
 	phys_addr_t uninitialized_var(pteval);
-	unsigned long sg_res;
+	unsigned long sg_res = 0;
 	unsigned int largepage_lvl = 0;
 	unsigned long lvl_pages = 0;
 
@@ -1994,10 +1994,8 @@ static int __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
 
 	prot &= DMA_PTE_READ | DMA_PTE_WRITE | DMA_PTE_SNP;
 
-	if (sg)
-		sg_res = 0;
-	else {
-		sg_res = nr_pages + 1;
+	if (!sg) {
+		sg_res = nr_pages;
 		pteval = ((phys_addr_t)phys_pfn << VTD_PAGE_SHIFT) | prot;
 	}
 
-- 
1.7.10.4
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu