3.16.61-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Cannon Matthews <cannonmatth...@google.com>

commit 520495fe96d74e05db585fc748351e0504d8f40d upstream.

When booting with very large numbers of gigantic (i.e. 1G) pages, the
operations in the loop of gather_bootmem_prealloc, and specifically
prep_compound_gigantic_page, take a very long time, and can cause a
softlockup if enough pages are requested at boot.

For example booting with 3844 1G pages requires prepping
(set_compound_head, init the count) over 1 billion 4K tail pages, which
takes considerable time.

Add a cond_resched() to the outer loop in gather_bootmem_prealloc() to
prevent this lockup.

Tested: Booted with softlockup_panic=1 hugepagesz=1G hugepages=3844 and
no softlockup is reported, and the hugepages are reported as
successfully setup.

Link: http://lkml.kernel.org/r/20180627214447.260804-1-cannonmatth...@google.com
Signed-off-by: Cannon Matthews <cannonmatth...@google.com>
Reviewed-by: Andrew Morton <a...@linux-foundation.org>
Reviewed-by: Mike Kravetz <mike.krav...@oracle.com>
Acked-by: Michal Hocko <mho...@suse.com>
Cc: Andres Lagar-Cavilla <andre...@google.com>
Cc: Peter Feiner <pfei...@google.com>
Cc: Greg Thelen <gthe...@google.com>
Signed-off-by: Andrew Morton <a...@linux-foundation.org>
Signed-off-by: Linus Torvalds <torva...@linux-foundation.org>
Signed-off-by: Ben Hutchings <b...@decadent.org.uk>
---
 mm/hugetlb.c | 1 +
 1 file changed, 1 insertion(+)

--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1546,6 +1546,7 @@ static void __init gather_bootmem_preall
		 */
		if (hstate_is_gigantic(h))
			adjust_managed_page_count(page, 1 << h->order);
+		cond_resched();
	}
 }