With CONFIG_KASAN_VMALLOC, new page tables are created at the time shadow memory for the vmalloc area is unmapped. If some parts of the page table still have entries pointing to the zero page shadow memory, those entries are wrongly marked RW.
With CONFIG_KASAN_VMALLOC, almost the entire kernel address space is managed by KASAN. To make it simple, just create KASAN page tables for the entire kernel space at kasan_init(). That doesn't use much more space, and that's anyway already done for hash platforms.

Fixes: 3d4247fcc938 ("powerpc/32: Add support of KASAN_VMALLOC")
Signed-off-by: Christophe Leroy <christophe.le...@c-s.fr>
---
v3: Split a too long line

v2: Allocate all tables at init instead of doing it when unmapping vmalloc space KASAN pages.

Signed-off-by: Christophe Leroy <christophe.le...@c-s.fr>
---
 arch/powerpc/mm/kasan/kasan_init_32.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
index 1a29cf469903..cbcad369fcb2 100644
--- a/arch/powerpc/mm/kasan/kasan_init_32.c
+++ b/arch/powerpc/mm/kasan/kasan_init_32.c
@@ -120,12 +120,6 @@ static void __init kasan_unmap_early_shadow_vmalloc(void)
 	unsigned long k_cur;
 	phys_addr_t pa = __pa(kasan_early_shadow_page);
 
-	if (!early_mmu_has_feature(MMU_FTR_HPTE_TABLE)) {
-		int ret = kasan_init_shadow_page_tables(k_start, k_end);
-
-		if (ret)
-			panic("kasan: kasan_init_shadow_page_tables() failed");
-	}
 	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
 		pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
 		pte_t *ptep = pte_offset_kernel(pmd, k_cur);
@@ -143,7 +137,8 @@ void __init kasan_mmu_init(void)
 	int ret;
 	struct memblock_region *reg;
 
-	if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE)) {
+	if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE) ||
+	    IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
 		ret = kasan_init_shadow_page_tables(KASAN_SHADOW_START,
 						    KASAN_SHADOW_END);
 		if (ret)
-- 
2.25.0
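
[Editorial note, not part of the patch] A minimal sketch of the flow the patch establishes, paraphrasing the hunks above; the trailing per-memblock mapping is abbreviated and the exact surrounding body of kasan_mmu_init() is assumed from context:

/*
 * Sketch only: with this change, shadow page tables for the whole
 * KASAN_SHADOW_START..KASAN_SHADOW_END range are pre-allocated at
 * kasan_mmu_init() whenever KASAN_VMALLOC is enabled (or on hash
 * platforms), so kasan_unmap_early_shadow_vmalloc() only has to
 * rewrite existing PTEs and never creates page tables late, which is
 * where the wrongly-RW zero page shadow entries came from.
 */
void __init kasan_mmu_init(void)
{
	int ret;

	if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE) ||
	    IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
		ret = kasan_init_shadow_page_tables(KASAN_SHADOW_START,
						    KASAN_SHADOW_END);
		if (ret)
			panic("kasan: kasan_init_shadow_page_tables() failed");
	}

	/* ... shadow mapping of each memblock region continues here ... */
}

Allocating all the tables up front trades a small amount of memory for never having to handle page table allocation failures in the unmap path, which matches the commit message's rationale.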