Currently follow_page_mask() does not take PGD-based huge pages into account. This change adds that handling, making the page table walk complete.
Signed-off-by: Anshuman Khandual <khand...@linux.vnet.ibm.com>
---
 mm/gup.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/gup.c b/mm/gup.c
index 7bf19ff..53a2013 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -232,6 +232,12 @@ struct page *follow_page_mask(struct vm_area_struct *vma,
 	pgd = pgd_offset(mm, address);
 	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
 		return no_page_table(vma, flags);
+	if (pgd_huge(*pgd) && vma->vm_flags & VM_HUGETLB) {
+		page = follow_huge_pgd(mm, address, pgd, flags);
+		if (page)
+			return page;
+		return no_page_table(vma, flags);
+	}
 	pud = pud_offset(pgd, address);
 	if (pud_none(*pud))
--
2.1.0
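For reference, follow_huge_pgd() also needs a generic fallback for architectures that do not override it. A minimal sketch of such a weak fallback, modeled on the existing follow_huge_pud() pattern in mm/hugetlb.c, is shown below; treat it as an illustration of the expected shape, not as part of this patch:

struct page * __weak
follow_huge_pgd(struct mm_struct *mm, unsigned long address,
		pgd_t *pgd, int flags)
{
	/* Sketch only: mirrors the follow_huge_pud() weak fallback. */
	if (flags & FOLL_GET)	/* reference taking not handled here */
		return NULL;

	/* Offset into the huge page mapped by this PGD entry. */
	return pte_page(*(pte_t *)pgd) +
		((address & ~PGDIR_MASK) >> PAGE_SHIFT);
}

Architectures with PGD-level huge pages (e.g. powerpc with 16GB pages) would supply their own implementation overriding this weak symbol.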