If we fail with an allocated hugepage, it is hard to recover properly. One such example is the reserve count: we have no way to restore it. Although a following patch will introduce a function to recover the reserve count, it is better to avoid allocating a hugepage whenever possible. So move up anon_vma_prepare(), which can fail in an OOM situation.
Signed-off-by: Joonsoo Kim <iamjoonsoo....@lge.com>

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 683fd38..bb8a45f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2536,6 +2536,15 @@ retry_avoidcopy:
 	/* Drop page_table_lock as buddy allocator may be called */
 	spin_unlock(&mm->page_table_lock);
+	/*
+	 * When the original hugepage is shared one, it does not have
+	 * anon_vma prepared.
+	 */
+	if (unlikely(anon_vma_prepare(vma))) {
+		ret = VM_FAULT_OOM;
+		goto out_old_page;
+	}
+
 	use_reserve = vma_has_reserves(h, vma, address);
 	if (use_reserve == -ENOMEM) {
 		ret = VM_FAULT_OOM;
@@ -2590,15 +2599,6 @@ retry_avoidcopy:
 		goto out_lock;
 	}
-	/*
-	 * When the original hugepage is shared one, it does not have
-	 * anon_vma prepared.
-	 */
-	if (unlikely(anon_vma_prepare(vma))) {
-		ret = VM_FAULT_OOM;
-		goto out_new_page;
-	}
-
 	copy_user_huge_page(new_page, old_page, address, vma,
 			    pages_per_huge_page(h));
 	__SetPageUptodate(new_page);
@@ -2625,7 +2625,6 @@ retry_avoidcopy:
 	spin_unlock(&mm->page_table_lock);
 	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
-out_new_page:
 	page_cache_release(new_page);
 out_old_page:
 	page_cache_release(old_page);
-- 
1.7.9.5