Christophe Leroy <christophe.le...@csgroup.eu> writes:

> On 31/07/2024 at 09:56, Ritesh Harjani (IBM) wrote:
>>
>> Enable kfence on book3s64 hash only when early init is enabled.
>> This is because kfence could cause the kernel linear map to be mapped
>> at PAGE_SIZE level instead of 16M (which I guess we don't want).
>>
>> Also, currently there is no way to:
>> 1. Make multiple page size entries for the SLB used for the kernel
>>    linear map.
>> 2. Easily get the hash slot details after the page table mapping for
>>    the kernel linear map is set up. So even if kfence allocates the
>>    pool in late init, we won't be able to get the hash slot details
>>    for the kfence linear map.
>>
>> Thus this patch disables kfence on hash if kfence early init is not
>> enabled.
>>
>> Signed-off-by: Ritesh Harjani (IBM) <ritesh.l...@gmail.com>
>> ---
>>  arch/powerpc/mm/book3s64/hash_utils.c | 5 ++++-
>>  1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
>> index c66b9921fc7d..759dbcbf1483 100644
>> --- a/arch/powerpc/mm/book3s64/hash_utils.c
>> +++ b/arch/powerpc/mm/book3s64/hash_utils.c
>> @@ -410,6 +410,8 @@ static phys_addr_t kfence_pool;
>>
>>  static inline void hash_kfence_alloc_pool(void)
>>  {
>> +	if (!kfence_early_init)
>> +		goto err;
>>
>>  	// allocate linear map for kfence within RMA region
>>  	linear_map_kf_hash_count = KFENCE_POOL_SIZE >> PAGE_SHIFT;
>> @@ -1074,7 +1076,8 @@ static void __init htab_init_page_sizes(void)
>>  	bool aligned = true;
>>  	init_hpte_page_sizes();
>>
>> -	if (!debug_pagealloc_enabled_or_kfence()) {
>> +	if (!debug_pagealloc_enabled() &&
>> +	    !(IS_ENABLED(CONFIG_KFENCE) && kfence_early_init)) {
>
> Looks complex, can we do simpler?
Yes, kfence_early_init needs cleanup anyway. Will make it simpler. Thanks for the review!

-ritesh
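For the record, one possible shape for the cleanup is to fold the two checks back behind a single helper, so the call site reads like the old `debug_pagealloc_enabled_or_kfence()` did. This is only a sketch of the idea, not the actual follow-up patch: the helper name `hash_debug_pagealloc_or_kfence()` is hypothetical, and the `static bool` variables below are stand-ins for the real kernel state (`debug_pagealloc_enabled()` and `IS_ENABLED(CONFIG_KFENCE)`).

```c
#include <stdbool.h>

/* Stubs modelling kernel state -- assumptions for this sketch only. */
static bool debug_pagealloc;             /* models debug_pagealloc_enabled() */
static bool kfence_configured = true;    /* models IS_ENABLED(CONFIG_KFENCE) */
static bool kfence_early_init = true;

static bool debug_pagealloc_enabled(void)
{
	return debug_pagealloc;
}

/*
 * Hypothetical helper: true when the linear map must stay at PAGE_SIZE,
 * i.e. when either debug_pagealloc or early-init kfence is in effect.
 * htab_init_page_sizes() would then test a single predicate:
 *
 *	if (!hash_debug_pagealloc_or_kfence()) {
 *		... use 16M linear mapping ...
 */
static bool hash_debug_pagealloc_or_kfence(void)
{
	return debug_pagealloc_enabled() ||
	       (kfence_configured && kfence_early_init);
}
```

The point is just that the policy ("which features force PAGE_SIZE mappings") lives in one place instead of being spelled out inline at the call site.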