A few hugetlb allocators loop while calling the page allocator and can
potentially prevent rescheduling if the page allocator slowpath is not
utilized: when every allocation is satisfied from the fast path, nothing
in the loop reaches a point where it can reschedule.

Conditionally schedule when large numbers of hugepages can be allocated.

Signed-off-by: David Rientjes <rient...@google.com>
---
 Based on -mm only to prevent merge conflicts with
 "mm/hugetlb.c: warn the user when issues arise on boot due to hugepages"

 v2: removed redundant cond_resched() per Mike

 mm/hugetlb.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1754,6 +1754,7 @@ static int gather_surplus_pages(struct hstate *h, int delta)
                        break;
                }
                list_add(&page->lru, &surplus_list);
+               cond_resched();
        }
        allocated += i;
 
@@ -2222,6 +2223,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
                } else if (!alloc_fresh_huge_page(h,
                                         &node_states[N_MEMORY]))
                        break;
+               cond_resched();
        }
        if (i < h->max_huge_pages) {
                char buf[32];

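For reference, both hunks apply the usual pattern for long kernel loops
that may otherwise never hit a reschedule point: call cond_resched() once
per iteration so that, on a non-preemptible kernel, other tasks can still
run even when every allocation is satisfied from the fast path.  A minimal
sketch of that pattern outside of hugetlb (the helper name
alloc_many_pages() and its body are made up purely for illustration):

#include <linux/gfp.h>		/* alloc_pages(), __free_pages() */
#include <linux/sched.h>	/* cond_resched() */

/*
 * Illustration only: allocate and immediately free @nr order-0 pages.
 * If every allocation hits the page allocator fast path, nothing in
 * the loop body would reschedule, so do it explicitly per iteration.
 */
static int alloc_many_pages(unsigned long nr)
{
	unsigned long i;

	for (i = 0; i < nr; i++) {
		struct page *page = alloc_pages(GFP_KERNEL, 0);

		if (!page)
			return -ENOMEM;
		__free_pages(page, 0);
		cond_resched();	/* allow rescheduling between allocations */
	}
	return 0;
}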