Re: [RFC v2 03/13] book3s64/hash: Remove kfence support temporarily

2024-09-18 Thread IBM
Christophe Leroy writes: > On 19/09/2024 at 04:56, Ritesh Harjani (IBM) wrote: >> Kfence on book3s Hash on pseries is broken anyway. It fails to boot >> due to an RMA size limitation. That is because kfence with Hash uses >> the debug_pagealloc infrastructure. debug_pagealloc allocates a linear map >>

Re: [RFC v2 02/13] powerpc: mm: Fix kfence page fault reporting

2024-09-18 Thread IBM
Christophe Leroy writes: > On 19/09/2024 at 04:56, Ritesh Harjani (IBM) wrote: >> copy_from_kernel_nofault() can be called when doing a read of /proc/kcore. >> /proc/kcore can have some unmapped kfence objects which, when read via >> copy_from_kernel_nofault(), can cause page faults. Since *_nofaul

Re: [RFC v2 03/13] book3s64/hash: Remove kfence support temporarily

2024-09-18 Thread Christophe Leroy
On 19/09/2024 at 04:56, Ritesh Harjani (IBM) wrote: Kfence on book3s Hash on pseries is broken anyway. It fails to boot due to an RMA size limitation. That is because kfence with Hash uses the debug_pagealloc infrastructure. debug_pagealloc allocates a linear map for the entire DRAM size instead of jus

Re: [RFC v2 02/13] powerpc: mm: Fix kfence page fault reporting

2024-09-18 Thread Christophe Leroy
On 19/09/2024 at 04:56, Ritesh Harjani (IBM) wrote: copy_from_kernel_nofault() can be called when doing a read of /proc/kcore. /proc/kcore can have some unmapped kfence objects which, when read via copy_from_kernel_nofault(), can cause page faults. Since *_nofault() functions define their own fi

[powerpc:merge] BUILD SUCCESS 93a0594106c7caa79e118776eb9859ecc7993c7a

2024-09-18 Thread kernel test robot
clang-20 hexagon randconfig-002-20240919 clang-20 i386 allmodconfig gcc-12 i386 allnoconfig gcc-12 i386 allyesconfig gcc-12 i386 buildonly-randconfig-001-20240918 cla

Re: [PATCH] crypto: Removing CRYPTO_AES_GCM_P10.

2024-09-18 Thread Michael Ellerman
Danny Tsen writes: > Removing CRYPTO_AES_GCM_P10 in Kconfig first so that we can apply the > subsequent patches to fix data mismatch over ipsec tunnel. This change log needs to stand on its own, i.e. it needs to explain what the problem is and why the feature is being disabled, without reference t

Re: [RFC PATCH] powerpc/tlb: enable arch want batched unmap tlb flush

2024-09-18 Thread Luming Yu
On Thu, Sep 19, 2024 at 01:22:21PM +1000, Michael Ellerman wrote: > Luming Yu writes: > > From: Yu Luming > > > > ppc always do its own tracking for batch tlb. > > I don't think it does? :) > > I think you're referring to the batch handling in > arch/powerpc/include/asm/book3s/64/tlbflush-hash

Re: [PATCH v2] crash, powerpc: Default to CRASH_DUMP=n on PPC_BOOK3S_32

2024-09-18 Thread Michael Ellerman
Dave Vasilevsky writes: > Fixes boot failures on 6.9 on PPC_BOOK3S_32 machines using > Open Firmware. On these machines, the kernel refuses to boot > from non-zero PHYSICAL_START, which occurs when CRASH_DUMP is on. > > Since most PPC_BOOK3S_32 machines boot via Open Firmware, it should > default

Re: [RFC PATCH] powerpc/tlb: enable arch want batched unmap tlb flush

2024-09-18 Thread Michael Ellerman
Luming Yu writes: > From: Yu Luming > > ppc always do its own tracking for batch tlb. I don't think it does? :) I think you're referring to the batch handling in arch/powerpc/include/asm/book3s/64/tlbflush-hash.h ? But that's only used for 64-bit Book3S with the HPT MMU. > By trivially enabl

[RFC v2 12/13] book3s64/hash: Disable kfence if not early init

2024-09-18 Thread Ritesh Harjani (IBM)
Enable kfence on book3s64 hash only when early init is enabled. This is because kfence could cause the kernel linear map to be mapped at PAGE_SIZE level instead of 16M (which I guess we don't want). Also, currently there is no way to - 1. Make multiple page size entries for the SLB used for kernel

[RFC v2 13/13] book3s64/hash: Early detect debug_pagealloc size requirement

2024-09-18 Thread Ritesh Harjani (IBM)
Add a hash_supports_debug_pagealloc() helper to detect whether debug_pagealloc can be supported on hash or not. This checks both whether the debug_pagealloc config is enabled and whether the linear map fits within the rma_size/4 region. This can then be used early during htab_init_page_sizes() to de

[RFC v2 11/13] book3s64/radix: Refactoring common kfence related functions

2024-09-18 Thread Ritesh Harjani (IBM)
Both radix and hash on book3s require detecting whether kfence early init is enabled or not. Hash needs to disable kfence if early init is not enabled, because with kfence the linear map is mapped using PAGE_SIZE rather than a 16M mapping. We don't support multiple page sizes for the slb entry used for kernel

[RFC v2 09/13] book3s64/hash: Disable debug_pagealloc if it requires more memory

2024-09-18 Thread Ritesh Harjani (IBM)
Limit the size of the linear map to be allocated in the RMA region to ppc64_rma_size / 4. If debug_pagealloc requires more memory than that, then do not allocate any memory and disable debug_pagealloc. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 15 ++-
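The sizing logic described in this patch can be sketched as plain arithmetic. This is a hedged userspace illustration, not the kernel's actual code: the helper names, the one-u8-slot-per-page assumption, and the constants are stand-ins for `ppc64_rma_size` and the linear map table in hash_utils.c.

```c
/* Sketch of the "linear map must fit in ppc64_rma_size / 4" check.
 * debug_pagealloc tracks roughly one u8 hash-slot entry per linear-map
 * page, so the table needs about (dram_size / page_size) bytes. All
 * names and values here are illustrative, not the kernel's variables. */

static unsigned long long linear_map_table_bytes(unsigned long long dram_size,
                                                 unsigned long long page_size)
{
    return dram_size / page_size;       /* one u8 slot per mapped page */
}

static int debug_pagealloc_fits(unsigned long long dram_size,
                                unsigned long long page_size,
                                unsigned long long rma_size)
{
    /* The patch disables debug_pagealloc when this check fails. */
    return linear_map_table_bytes(dram_size, page_size) <= rma_size / 4;
}
```

With 16TB of DRAM and 64K pages, the table alone is 256MB, which exceeds a quarter of a 512MB RMA; hence the disable path.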

[RFC v2 10/13] book3s64/hash: Add kfence functionality

2024-09-18 Thread Ritesh Harjani (IBM)
Now that the linear map functionality of debug_pagealloc has been made generic, enable kfence to use this generic infrastructure. 1. Define kfence related linear map variables. - u8 *linear_map_kf_hash_slots; - unsigned long linear_map_kf_hash_count; - DEFINE_RAW_SPINLOCK(linear_map_kf_hash_lock);

[RFC v2 06/13] book3s64/hash: Add hash_debug_pagealloc_alloc_slots() function

2024-09-18 Thread Ritesh Harjani (IBM)
This adds a hash_debug_pagealloc_alloc_slots() function instead of open coding it in htab_initialize(). This is required since we will be separating the kfence functionality so it does not depend upon debug_pagealloc. Now that everything required for debug_pagealloc is under an #ifdef config. Bring in line

[RFC v2 08/13] book3s64/hash: Make kernel_map_linear_page() generic

2024-09-18 Thread Ritesh Harjani (IBM)
Currently the kernel_map_linear_page() function assumes it is working on the linear_map_hash_slots array. But since later patches need a separate linear map array for kfence, make kernel_map_linear_page() take a linear map array and lock as function arguments. This is needed to separate ou
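The refactor described above — passing the slot array and its lock in as arguments so that debug_pagealloc and kfence can keep separate state — can be sketched in userspace C. This is illustrative only: `pthread_mutex_t` stands in for the kernel's raw spinlock, and the struct and helper names are invented, not the patch's actual identifiers.

```c
/* Sketch: a generic "map a linear-map page" helper that operates on
 * whichever state (debug_pagealloc's or kfence's) the caller passes,
 * instead of assuming one global array. Names are illustrative. */
#include <pthread.h>

struct linear_map_state {
    unsigned char   *slots;   /* one entry per linear-map page */
    unsigned long    count;   /* pages currently recorded */
    pthread_mutex_t  lock;    /* stand-in for a raw spinlock */
};

static void map_linear_page(struct linear_map_state *st,
                            unsigned long idx, unsigned char slot)
{
    pthread_mutex_lock(&st->lock);
    st->slots[idx] = slot;    /* record the hash slot for this page */
    st->count++;
    pthread_mutex_unlock(&st->lock);
}
```

The point of the indirection is that kfence's later patches can hand in their own `linear_map_kf_*` state without duplicating the helper.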

[RFC v2 07/13] book3s64/hash: Refactor hash__kernel_map_pages() function

2024-09-18 Thread Ritesh Harjani (IBM)
This refactors the hash__kernel_map_pages() function to call hash_debug_pagealloc_map_pages(). This will be useful when we add kfence support. No functionality changes in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 9 - 1 file changed,

[RFC v2 05/13] book3s64/hash: Add hash_debug_pagealloc_add_slot() function

2024-09-18 Thread Ritesh Harjani (IBM)
This adds a hash_debug_pagealloc_add_slot() function instead of open coding it in htab_bolt_mapping(). This is required since we will be separating the kfence functionality so it does not depend upon debug_pagealloc. No functionality change in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerp

[RFC v2 04/13] book3s64/hash: Refactor kernel linear map related calls

2024-09-18 Thread Ritesh Harjani (IBM)
This just brings all linear map related handling to one place instead of having those functions scattered in the hash_utils file. Makes it easier to review. No functionality changes in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 164 +--

[RFC v2 03/13] book3s64/hash: Remove kfence support temporarily

2024-09-18 Thread Ritesh Harjani (IBM)
Kfence on book3s Hash on pseries is broken anyway. It fails to boot due to an RMA size limitation. That is because kfence with Hash uses the debug_pagealloc infrastructure. debug_pagealloc allocates a linear map for the entire DRAM size instead of just kfence relevant objects. This means for 16TB of DRAM it w

[RFC v2 02/13] powerpc: mm: Fix kfence page fault reporting

2024-09-18 Thread Ritesh Harjani (IBM)
copy_from_kernel_nofault() can be called when doing a read of /proc/kcore. /proc/kcore can have some unmapped kfence objects which, when read via copy_from_kernel_nofault(), can cause page faults. Since *_nofault() functions define their own fixup table for handling faults, use that instead of asking kf
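The semantics being fixed here — a copy that reports an error instead of taking a visible fault when the source is unmapped — can be approximated in userspace. This is a hedged sketch, not the kernel API: `copy_nofault` is an invented name, and `process_vm_readv()` on the current process stands in for the kernel's fixup-table machinery.

```c
/* Userspace sketch of copy_from_kernel_nofault()-style semantics:
 * return an error rather than faulting when the source is unmapped.
 * Illustrative only; not the kernel implementation. */
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/uio.h>
#include <unistd.h>

/* Returns 0 on success, -1 if the source range could not be read. */
static int copy_nofault(void *dst, const void *src, size_t size)
{
    struct iovec local  = { .iov_base = dst,         .iov_len = size };
    struct iovec remote = { .iov_base = (void *)src, .iov_len = size };

    /* process_vm_readv() fails with EFAULT instead of crashing
     * when the remote (here: our own) range is unmapped. */
    ssize_t n = process_vm_readv(getpid(), &local, 1, &remote, 1, 0);
    return (n == (ssize_t)size) ? 0 : -1;
}
```

A /proc/kcore-style reader built on such a helper simply skips ranges that return an error, which is the behaviour the patch restores for kfence-protected pages.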

[RFC v2 01/13] mm/kfence: Add a new kunit test test_use_after_free_read_nofault()

2024-09-18 Thread Ritesh Harjani (IBM)
From: Nirjhar Roy Faults from copy_from_kernel_nofault() need to be handled by the fixup table and should not be handled by kfence. Otherwise, while reading /proc/kcore, which uses copy_from_kernel_nofault(), kfence can generate false negatives. This can happen when /proc/kcore ends up reading an unma
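A userspace analogue of the scenario this kunit test exercises: read a region that has been torn down, through a fault-tolerant copy, and expect an error rather than a crash or a spurious report. All names here are invented; `munmap()` plays the role of the freed kfence object and `process_vm_readv()` plays the role of the `_nofault` copy.

```c
/* Userspace analogue of the use-after-free read test described above.
 * Map a page, unmap it, then read it via a fault-tolerant copy; the
 * read must fail cleanly. Illustrative only; not the kunit code. */
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>
#include <sys/uio.h>
#include <unistd.h>

/* Returns 0 if the unmapped read is correctly refused, -1 otherwise. */
static int use_after_unmap_read_is_refused(void)
{
    size_t len = (size_t)sysconf(_SC_PAGESIZE);
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return -1;
    munmap(p, len);                      /* "free" the object */

    char buf[16];
    struct iovec local  = { buf, sizeof(buf) };
    struct iovec remote = { p,   sizeof(buf) };
    ssize_t n = process_vm_readv(getpid(), &local, 1, &remote, 1, 0);
    return (n == -1) ? 0 : -1;           /* expect the read to fail */
}
```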

[RFC v2 00/13] powerpc/kfence: Improve kfence support

2024-09-18 Thread Ritesh Harjani (IBM)
This patch series addresses the following to improve kfence support on powerpc. 1. Usage of copy_from_kernel_nofault() within the kernel, such as reads from /proc/kcore, can cause kfence to report false negatives. 2. (book3s64) Kfence depends upon the debug_pagealloc infrastructure on Hash. debug_pageall

Re: [RFC PATCH v3 0/6] ASoC: fsl: add memory to memory function for ASRC

2024-09-18 Thread Shengjiu Wang
Hi Jaroslav, On Fri, Sep 13, 2024 at 10:29 AM Shengjiu Wang wrote: > > On Fri, Sep 6, 2024 at 6:05 PM Shengjiu Wang wrote: > > > > This function is based on the accelerator implementation > > for the compress API: > > https://patchwork.kernel.org/project/alsa-devel/patch/20240731083843.59911-1-pe...@p

[GIT PULL] Please pull powerpc/linux.git powerpc-6.12-1 tag

2024-09-18 Thread Michael Ellerman
Hi Linus, Please pull powerpc updates for 6.12. No conflicts that I'm aware of. The VDSO changes have already been merged via the random tree. cheers The following changes since commit de9c2c66ad8e787abec7c9d7eff4f8c3cdd28aed: Linux 6.11-rc2 (2

Re: [PATCH 1/2] powerpc/entry: convert to common and generic entry

2024-09-18 Thread Christophe Leroy
Hi, On 14/09/2024 at 04:22, Luming Yu wrote: On Fri, Sep 13, 2024 at 02:15:40PM +0200, Christophe Leroy wrote: On 13/09/2024 at 14:02, Luming Yu wrote: ... nothing happens after that. reproduced with ppc64_defconfig [0.818972][T1] Run /init as init process [5.851684][ T240

[powerpc:next] BUILD SUCCESS 39190ac7cff1fd15135fa8e658030d9646fdb5f2

2024-09-18 Thread kernel test robot
allnoconfig clang-18 i386 allnoconfig gcc-12 i386 allyesconfig clang-18 i386 allyesconfig gcc-12 i386 buildonly-randconfig-001-20240918 clang-18 i386 buildonly

[RFC PATCH] powerpc/tlb: enable arch want batched unmap tlb flush

2024-09-18 Thread Luming Yu
From: Yu Luming ppc always does its own tracking for batched TLB flushes. By trivially enabling ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH on ppc, the arch can reuse common code in rmap, reduce overhead, and perform optimizations it could not do without a TLB-flushing context at a low architecture level. Signed-off
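The overhead reduction this patch refers to is the classic batching pattern: queue the pages being unmapped and issue one flush for the whole batch instead of one per page. The sketch below is generic and illustrative; the struct, counter, and batch size are invented and bear no relation to the actual powerpc or rmap data structures.

```c
/* Generic sketch of batched unmap TLB flushing: defer per-page flushes
 * into a batch and issue a single flush for the lot. Illustrative only. */
#include <stddef.h>

#define TLB_BATCH_MAX 64

struct tlb_batch {
    unsigned long pending[TLB_BATCH_MAX];
    size_t        n;
};

static unsigned long flushes_issued;     /* stand-in for real flush cost */

static void tlb_batch_flush(struct tlb_batch *b)
{
    if (b->n) {
        flushes_issued++;                /* one flush covers the batch */
        b->n = 0;
    }
}

static void tlb_batch_add(struct tlb_batch *b, unsigned long addr)
{
    b->pending[b->n++] = addr;
    if (b->n == TLB_BATCH_MAX)           /* spill when the batch fills */
        tlb_batch_flush(b);
}
```

Unmapping 64 pages then costs one flush instead of 64, which is what handing rmap a TLB-flushing context makes possible.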