Christophe Leroy <christophe.le...@csgroup.eu> writes:
> On 13/10/2021 at 23:34, Joel Stanley wrote:
>> The page_alloc.c code will call into __kernel_map_pages when
>> DEBUG_PAGEALLOC is configured and enabled.
>>
>> As the implementation assumes hash, this should crash spectacularly if
>> not for a bit of luck in __kernel_map_pages. In this function
>> linear_map_hash_count is always zero, the for loop exits without doing
>> any damage.
>>
>> There are no other platforms that determine if they support
>> debug_pagealloc at runtime. Instead of adding code to mm/page_alloc.c to
>> do that, this change turns the map/unmap into a noop when in radix
>> mode and prints a warning once.
>>
>> Signed-off-by: Joel Stanley <j...@jms.id.au>
>> ---
>> v2: Put __kernel_map_pages in pgtable.h
>>
>>  arch/powerpc/include/asm/book3s/64/hash.h    |  2 ++
>>  arch/powerpc/include/asm/book3s/64/pgtable.h | 11 +++++++++++
>>  arch/powerpc/include/asm/book3s/64/radix.h   |  3 +++
>>  arch/powerpc/mm/book3s64/hash_utils.c        |  2 +-
>>  arch/powerpc/mm/book3s64/radix_pgtable.c     |  7 +++++++
>>  5 files changed, 24 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
>> index d959b0195ad9..674fe0e890dc 100644
>> --- a/arch/powerpc/include/asm/book3s/64/hash.h
>> +++ b/arch/powerpc/include/asm/book3s/64/hash.h
>> @@ -255,6 +255,8 @@ int hash__create_section_mapping(unsigned long start, unsigned long end,
>>  				 int nid, pgprot_t prot);
>>  int hash__remove_section_mapping(unsigned long start, unsigned long end);
>>
>> +void hash__kernel_map_pages(struct page *page, int numpages, int enable);
>> +
>>  #endif /* !__ASSEMBLY__ */
>>  #endif /* __KERNEL__ */
>>  #endif /* _ASM_POWERPC_BOOK3S_64_HASH_H */
>> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> index 5d34a8646f08..265661ded238 100644
>> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
>> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> @@ -1101,6 +1101,17 @@ static inline void vmemmap_remove_mapping(unsigned long start,
>>  }
>>  #endif
>>
>> +#ifdef CONFIG_DEBUG_PAGEALLOC
>> +static inline void __kernel_map_pages(struct page *page, int numpages, int enable)
>> +{
>> +	if (radix_enabled()) {
>> +		radix__kernel_map_pages(page, numpages, enable);
>> +		return;
>> +	}
>> +	hash__kernel_map_pages(page, numpages, enable);
>
> I'd have preferred something like below
>
> 	if (radix_enabled())
> 		radix__kernel_map_pages(page, numpages, enable);
> 	else
> 		hash__kernel_map_pages(page, numpages, enable);
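With that if/else folded in, the helper ends up roughly like the sketch
below (the v2 pgtable.h hunk rewritten with the suggested form; a sketch
rather than the exact committed text):

#ifdef CONFIG_DEBUG_PAGEALLOC
static inline void __kernel_map_pages(struct page *page, int numpages, int enable)
{
	/* Dispatch at runtime on whichever MMU is active: radix or hash */
	if (radix_enabled())
		radix__kernel_map_pages(page, numpages, enable);
	else
		hash__kernel_map_pages(page, numpages, enable);
}
#endif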
I did that when applying.

cheers