Joonsoo Kim <iamjoonsoo....@lge.com> writes:

> alloc_huge_page_node() uses dequeue_huge_page_node() without
> any validation check, so it can steal a reserved page unconditionally.
> To fix it, check the number of free huge pages in
> alloc_huge_page_node().


Maybe the changelog should say: don't use the reserve pool when soft
offlining a huge page. Check that we have free pages outside the
reserve pool before we dequeue the huge page.

Reviewed-by: Aneesh Kumar <aneesh.ku...@linux.vnet.ibm.com>


>
> Signed-off-by: Joonsoo Kim <iamjoonsoo....@lge.com>
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 6782b41..d971233 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -935,10 +935,11 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, int nid)
>   */
>  struct page *alloc_huge_page_node(struct hstate *h, int nid)
>  {
> -     struct page *page;
> +     struct page *page = NULL;
>
>       spin_lock(&hugetlb_lock);
> -     page = dequeue_huge_page_node(h, nid);
> +     if (h->free_huge_pages - h->resv_huge_pages > 0)
> +             page = dequeue_huge_page_node(h, nid);
>       spin_unlock(&hugetlb_lock);
>
>       if (!page)
> -- 
> 1.7.9.5
