follow_pmd_mask() splits a huge PMD when FOLL_SPLIT_PMD is set, so GUP
can pin individual pages at PTE granularity.

If the split fails, the PMD is still huge and follow_page_pte() cannot
process it. Return the error wrapped in ERR_PTR() on split failure, so
the GUP caller gets -ENOMEM. follow_pmd_mask() already returns -ENOMEM
when pte_alloc_one() fails, and that is the only reason split_huge_pmd()
can fail here, so callers see no new error value and this is a safe
change.

Signed-off-by: Usama Arif <[email protected]>
---
 mm/gup.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index 8e7dc2c6ee738..792b2e7319dd0 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -928,8 +928,16 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
                return follow_page_pte(vma, address, pmd, flags);
        }
        if (pmd_trans_huge(pmdval) && (flags & FOLL_SPLIT_PMD)) {
+               int ret;
+
                spin_unlock(ptl);
-               split_huge_pmd(vma, pmd, address);
+               /*
+                * If split fails, the PMD is still huge and
+                * we cannot proceed to follow_page_pte.
+                */
+               ret = split_huge_pmd(vma, pmd, address);
+               if (ret)
+                       return ERR_PTR(ret);
                /* If pmd was left empty, stuff a page table in there quickly */
                return pte_alloc(mm, pmd) ? ERR_PTR(-ENOMEM) :
                        follow_page_pte(vma, address, pmd, flags);
-- 
2.47.3
