The fadump kernel boots with limited memory solely to collect the
kernel core dump, so gigantic hugepages serve no purpose there. Worse,
the fadump kernel often runs into OOM (Out of Memory) issues when
gigantic hugepages are allocated.
To address this, disable gigantic hugepages if fadump is active by
returning early from arch_hugetlb_valid_size() using
hugepages_supported(). When fadump is active, the global variable
hugetlb_disabled is set to true, which is later used by the
PowerPC-specific hugepages_supported() function to determine hugepage
support.

Returning early from arch_hugetlb_valid_size() not only disables
gigantic hugepages but also avoids unnecessary hstate initialization
for every hugepage size supported by the platform.

Kernel logs related to hugepages with this patch included:
Kernel argument passed: hugepagesz=1G hugepages=1

First kernel: gigantic hugepage allocated
==============================================

dmesg | grep -i "hugetlb"
-------------------------
HugeTLB: registered 1.00 GiB page size, pre-allocated 1 pages
HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page

$ cat /proc/meminfo | grep -i "hugetlb"
-------------------------------------
Hugetlb: 1048576 kB

Fadump kernel: gigantic hugepage not allocated
===============================================

dmesg | grep -i "hugetlb"
-------------------------
[    0.000000] HugeTLB: unsupported hugepagesz=1G
[    0.000000] HugeTLB: hugepages=1 does not follow a valid hugepagesz, ignoring
[    0.706375] HugeTLB support is disabled!
[    0.773530] hugetlbfs: disabling because there are no supported hugepage sizes

$ cat /proc/meminfo | grep -i "hugetlb"
----------------------------------
<Nothing>

Cc: Hari Bathini <hbath...@linux.ibm.com>
Cc: Madhavan Srinivasan <ma...@linux.ibm.com>
Cc: Mahesh Salgaonkar <mah...@linux.ibm.com>
Cc: Michael Ellerman <m...@ellerman.id.au>
Cc: "Ritesh Harjani (IBM)" <ritesh.l...@gmail.com>
Reviewed-by: Christophe Leroy <christophe.le...@csgroup.eu>
Signed-off-by: Sourabh Jain <sourabhj...@linux.ibm.com>
---
Changelog:

v1: https://lore.kernel.org/all/20250121150419.1342794-1-sourabhj...@linux.ibm.com/

v2: https://lore.kernel.org/all/20250124103220.111303-1-sourabhj...@linux.ibm.com/
  - disable gigantic hugepage in arch code, arch_hugetlb_valid_size()

v3: https://lore.kernel.org/all/20250125104928.88881-1-sourabhj...@linux.ibm.com/
  - Do not modify the initialization of the shift variable

v4:
  - Update commit message to include how hugepages_supported() detects
    hugepages support when fadump is active
  - Add Reviewed-by tag
  - NO functional change
---
 arch/powerpc/mm/hugetlbpage.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 6b043180220a..88cfd182db4e 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -138,6 +138,9 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
 	int shift = __ffs(size);
 	int mmu_psize;
 
+	if (!hugepages_supported())
+		return false;
+
 	/* Check that it is a page size supported by the hardware and
 	 * that it fits within pagetable and slice limits. */
 	if (size <= PAGE_SIZE || !is_power_of_2(size))
-- 
2.48.1
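
For context, a minimal sketch of the PowerPC hugepages_supported() gate
referenced in the commit message (not part of this patch; the exact
definition lives in arch/powerpc/include/asm/hugetlb.h and the
HPAGE_SHIFT fallback check shown here is an assumption about the
existing code):

  /*
   * Illustrative sketch only: when fadump is active, hugetlb_disabled
   * is set to true early in boot, so this returns false and the new
   * early return in arch_hugetlb_valid_size() skips hstate
   * initialization for every hugepage size.
   */
  static inline bool hugepages_supported(void)
  {
  	if (hugetlb_disabled)
  		return false;

  	return HPAGE_SHIFT != 0;
  }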