On 12/9/19 2:53 PM, John Hubbard wrote:
...
> @@ -212,10 +211,9 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
>               if (!page)
>                       continue;
>  
> -             if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
> -                     SetPageDirty(page);
> +             put_user_pages_dirty_lock(&page, 1,
> +                             mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY);
>  
> -             put_page(page);


Correction: this is somehow missing the fixes that resulted from Jan
Kara's review (he noted that we can't take a page lock in this
context). I must have picked up the wrong version of it when I
rebased for -rc1.

Will fix in the next version (including the commit description). Here's what the
corrected hunk will look like:

@@ -215,7 +214,8 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
                if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
                        SetPageDirty(page);
 
-               put_page(page);
+               put_user_page(page);
+
                mem->hpas[i] = 0;
        }
 }
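
For context, a rough sketch of how that corrected loop body fits
together. This is not the actual patch: the hpas lookup above the
quoted hunk is reconstructed from the surrounding context, so treat
those details as approximate. The point from Jan's review is that
put_user_pages_dirty_lock() dirties pages through set_page_dirty_lock(),
which takes the page lock, so the existing lock-free SetPageDirty()
stays and only put_page() is converted to put_user_page():

	/*
	 * Sketch only: loop setup and the pfn_to_page() lookup are
	 * reconstructed, not copied from the file.
	 */
	for (i = 0; i < mem->entries; ++i) {
		struct page *page;

		if (!mem->hpas[i])
			continue;

		page = pfn_to_page(mem->hpas[i] >> PAGE_SHIFT);
		if (!page)
			continue;

		if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
			SetPageDirty(page);	/* no page lock taken here */

		put_user_page(page);		/* was: put_page(page) */

		mem->hpas[i] = 0;
	}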


thanks,
-- 
John Hubbard
NVIDIA