> Currently at the beginning of hugetlb_fault(), we call huge_pte_offset()
> and check whether the obtained *ptep is a migration/hwpoison entry or not.
> And if not, then we get to call huge_pte_alloc(). This is racy because
> *ptep could turn into a migration/hwpoison entry after the
> huge_pte_offset() check. This race results in a BUG_ON in huge_pte_alloc().
>
> We don't have to call huge_pte_alloc() when huge_pte_offset() returns
> non-NULL, so let's fix this bug by moving the code into the else block.
>
> Note that *ptep could still turn into a migration/hwpoison entry after
> this block, but that's not a problem because we have another !pte_present
> check later (we never go into hugetlb_no_page() in that case).
>
> Fixes: 290408d4a250 ("hugetlb: hugepage migration core")
> Signed-off-by: Naoya Horiguchi <n-horigu...@ah.jp.nec.com>
> Cc: <sta...@vger.kernel.org> [2.6.36+]
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>

>  mm/hugetlb.c | 8 ++++----
>  1 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git next-20151123/mm/hugetlb.c next-20151123_patched/mm/hugetlb.c
> index 1101ccd..6ad5e91 100644
> --- next-20151123/mm/hugetlb.c
> +++ next-20151123_patched/mm/hugetlb.c
> @@ -3696,12 +3696,12 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
>  	} else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
>  		return VM_FAULT_HWPOISON_LARGE |
>  			VM_FAULT_SET_HINDEX(hstate_index(h));
> +	} else {
> +		ptep = huge_pte_alloc(mm, address, huge_page_size(h));
> +		if (!ptep)
> +			return VM_FAULT_OOM;
>  	}
>
> -	ptep = huge_pte_alloc(mm, address, huge_page_size(h));
> -	if (!ptep)
> -		return VM_FAULT_OOM;
> -
>  	mapping = vma->vm_file->f_mapping;
>  	idx = vma_hugecache_offset(h, vma, address);
>
> --
> 1.7.1
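
For readers following the logic, here is a rough sketch of how the start of
hugetlb_fault() reads once this patch is applied. The lines outside the hunk
(the migration-entry branch and the mapping/idx setup) are paraphrased from
the surrounding context rather than copied verbatim from the tree, so treat
them as illustrative only:

	ptep = huge_pte_offset(mm, address);
	if (ptep) {
		entry = huge_ptep_get(ptep);
		if (unlikely(is_hugetlb_entry_migration(entry))) {
			migration_entry_wait_huge(vma, mm, ptep);
			return 0;
		} else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
			return VM_FAULT_HWPOISON_LARGE |
				VM_FAULT_SET_HINDEX(hstate_index(h));
	} else {
		/* No pte was found, so there is no entry to race against. */
		ptep = huge_pte_alloc(mm, address, huge_page_size(h));
		if (!ptep)
			return VM_FAULT_OOM;
	}

	/*
	 * *ptep may still become a migration/hwpoison entry from here on,
	 * but the later !pte_present(entry) check keeps the fault path out
	 * of hugetlb_no_page() in that case, as the changelog explains.
	 */
	mapping = vma->vm_file->f_mapping;
	idx = vma_hugecache_offset(h, vma, address);

With the allocation moved under the else branch, huge_pte_alloc() can no
longer be reached for a ptep that was already populated and then turned into
a migration/hwpoison entry, which is what triggered the BUG_ON.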