clang-20
hexagon  randconfig-002-20240919            clang-20
i386     allmodconfig                       gcc-12
i386     allnoconfig                        gcc-12
i386     allyesconfig                       gcc-12
i386     buildonly-randconfig-001-20240918  cla
Danny Tsen writes:
> Removing CRYPTO_AES_GCM_P10 in Kconfig first so that we can apply the
> subsequent patches to fix data mismatch over ipsec tunnel.

This change log needs to stand on its own. i.e. it needs to explain what
the problem is and why the feature is being disabled, without reference
to the subsequent patches.
Dave Vasilevsky writes:
> Fixes boot failures on 6.9 on PPC_BOOK3S_32 machines using
> Open Firmware. On these machines, the kernel refuses to boot
> from non-zero PHYSICAL_START, which occurs when CRASH_DUMP is on.
>
> Since most PPC_BOOK3S_32 machines boot via Open Firmware, it should
> default
Luming Yu writes:
> From: Yu Luming
>
> ppc always does its own tracking for batch tlb.

I don't think it does? :)

I think you're referring to the batch handling in
arch/powerpc/include/asm/book3s/64/tlbflush-hash.h ?

But that's only used for 64-bit Book3S with the HPT MMU.

> By trivially enabling
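For context, the HPT batching referred to above looks roughly like this
(paraphrased from arch/powerpc/include/asm/book3s/64/tlbflush-hash.h from
memory; treat it as a sketch, not the authoritative definition):

    struct ppc64_tlb_batch {
            int                     active;
            unsigned long           index;
            struct mm_struct        *mm;
            real_pte_t              pte[PPC64_TLB_BATCH_NR];  /* batched PTEs */
            unsigned long           vpn[PPC64_TLB_BATCH_NR];  /* their virtual page numbers */
            unsigned int            psize;
            int                     ssize;
    };
    DECLARE_PER_CPU(struct ppc64_tlb_batch, ppc64_tlb_batch);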
Enable kfence on book3s64 hash only when early init is enabled.
This is because kfence could cause the kernel linear map to be mapped
at PAGE_SIZE level instead of 16M (which I guess we don't want).

Also currently there is no way to -
1. Make multiple page size entries for the SLB used for kernel
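A minimal sketch of the gating described above, assuming a hypothetical
kfence_early_init_enabled() helper that reports whether the kfence pool was
set up during early boot (not the actual patch):

    /*
     * Sketch: only allow kfence on hash when its pool was reserved during
     * early init, so enabling it late cannot force the 16M linear mapping
     * down to PAGE_SIZE.
     */
    static inline bool hash_supports_kfence(void)
    {
            if (!IS_ENABLED(CONFIG_KFENCE))
                    return false;
            /* late pool allocation would require breaking the 16M mapping */
            return kfence_early_init_enabled();
    }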
Add a hash_supports_debug_pagealloc() helper to detect whether
debug_pagealloc can be supported on hash or not. This checks both whether
the debug_pagealloc config is enabled and whether the linear map fits
within the rma_size/4 region size.

This can then be used early during htab_init_page_sizes() to decide the
linear map page size.
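Based on that description, the helper plausibly looks something like this
(a sketch: the config check and the rma_size/4 cap are from the text above,
the exact arithmetic is my assumption):

    static inline bool hash_supports_debug_pagealloc(void)
    {
            /* slot array needs one byte per page of DRAM */
            unsigned long linear_map_bytes = memblock_end_of_DRAM() >> PAGE_SHIFT;

            if (!debug_pagealloc_enabled())
                    return false;
            /* the array must fit within a quarter of the RMA region */
            return linear_map_bytes <= ppc64_rma_size / 4;
    }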
Both radix and hash on book3s need to detect whether kfence
early init is enabled or not. Hash needs to disable kfence
if early init is not enabled, because with kfence the linear map is
mapped using PAGE_SIZE rather than the 16M mapping.

We don't support multiple page sizes for the slb entry used for kernel
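One plausible shape for that detection, mirroring how other architectures
latch the boot-time decision (a sketch, not the patch itself; the flag name
and the reuse of the kfence.sample_interval parameter are assumptions):

    /* sketch: default on iff a non-zero sample interval is configured */
    static bool kfence_early_init __initdata = !!CONFIG_KFENCE_SAMPLE_INTERVAL;

    static int __init parse_kfence_early_init(char *arg)
    {
            int val;

            if (get_option(&arg, &val))
                    kfence_early_init = !!val;
            return 0;
    }
    early_param("kfence.sample_interval", parse_kfence_early_init);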
Make the size of the linear map allocated in the RMA region be
ppc64_rma_size / 4. If debug_pagealloc requires more memory than that,
then do not allocate any memory and disable debug_pagealloc.

Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 15 ++-
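A sketch of the fallback this implies, assuming the helper sketched earlier
and the generic _debug_pagealloc_enabled static key (the exact call site and
disable mechanism are illustrative):

    /* in htab_initialize(), before allocating the slot array (sketch) */
    if (!hash_supports_debug_pagealloc()) {
            /* too big for rma_size / 4: skip allocation, turn the feature off */
            static_branch_disable(&_debug_pagealloc_enabled);
            return;
    }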
Now that the linear map functionality of debug_pagealloc is made generic,
enable kfence to use this generic infrastructure.

1. Define kfence related linear map variables.
- u8 *linear_map_kf_hash_slots;
- unsigned long linear_map_kf_hash_count;
- DEFINE_RAW_SPINLOCK(linear_map_kf_hash_lock);
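For illustration, the kfence side then only needs slots for its own pool
rather than for all of DRAM (a sketch under that assumption; the function
name and allocation details are mine, KFENCE_POOL_SIZE is the generic kfence
pool size):

    static void __init hash_kfence_alloc_pool(void)
    {
            /* one u8 slot per page of the kfence pool, not per page of DRAM */
            linear_map_kf_hash_count = KFENCE_POOL_SIZE >> PAGE_SHIFT;
            linear_map_kf_hash_slots = memblock_alloc(linear_map_kf_hash_count, 1);
            if (!linear_map_kf_hash_slots)
                    pr_err("%s: memblock for kfence slots failed\n", __func__);
    }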
This adds a hash_debug_pagealloc_alloc_slots() function instead of open
coding that in htab_initialize(). This is required since we will be
separating the kfence functionality to not depend upon debug_pagealloc.
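Presumably the helper wraps the allocation htab_initialize() open codes
today, something like this (sketch; the memblock bound at ppc64_rma_size
follows the existing code, error handling simplified):

    static void __init hash_debug_pagealloc_alloc_slots(void)
    {
            if (!debug_pagealloc_enabled())
                    return;
            /* one slot byte per page of DRAM, allocated inside the RMA */
            linear_map_hash_count = memblock_end_of_DRAM() >> PAGE_SHIFT;
            linear_map_hash_slots = memblock_alloc_try_nid(
                            linear_map_hash_count, 1, MEMBLOCK_LOW_LIMIT,
                            ppc64_rma_size, NUMA_NO_NODE);
            if (!linear_map_hash_slots)
                    panic("%s: Failed to allocate %lu bytes\n",
                          __func__, linear_map_hash_count);
    }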
Now that everything required for debug_pagealloc is under a #ifdef
config, bring in line
Currently the kernel_map_linear_page() function assumes it is working on
the linear_map_hash_slots array. But since in later patches we need a
separate linear map array for kfence, make
kernel_map_linear_page() take a linear map array and lock in its
function arguments.

This is needed to separate out
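Roughly, the reworked signature would be as below (parameter names and order
are my guess from the description):

    static void kernel_map_linear_page(unsigned long vaddr, unsigned long idx,
                                       u8 *slots, raw_spinlock_t *lock);

    /* the debug_pagealloc call site keeps passing its own array and lock */
    kernel_map_linear_page(vaddr, lmi, linear_map_hash_slots,
                           &linear_map_hash_lock);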
This refactors the hash__kernel_map_pages() function to call
hash_debug_pagealloc_map_pages(). This will come in useful when we add
kfence support.

No functionality changes in this patch.

Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 9 -
1 file changed,
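The shape of that refactor is presumably just a thin wrapper (a sketch;
return types per my assumption):

    static void hash_debug_pagealloc_map_pages(struct page *page, int numpages,
                                               int enable);

    void hash__kernel_map_pages(struct page *page, int numpages, int enable)
    {
            /* today: debug_pagealloc only; a kfence variant can slot in later */
            hash_debug_pagealloc_map_pages(page, numpages, enable);
    }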
This adds a hash_debug_pagealloc_add_slot() function instead of open
coding that in htab_bolt_mapping(). This is required since we will be
separating kfence functionality to not depend upon debug_pagealloc.

No functionality change in this patch.

Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerp
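Given the description, the helper likely captures the slot bookkeeping that
htab_bolt_mapping() does today (sketch; the 0x80 "mapped" flag follows the
existing linear_map_hash_slots convention):

    static void hash_debug_pagealloc_add_slot(phys_addr_t paddr, int slot)
    {
            if (!debug_pagealloc_enabled())
                    return;
            /* remember which HPTE slot maps this linear-map page */
            if ((paddr >> PAGE_SHIFT) < linear_map_hash_count)
                    linear_map_hash_slots[paddr >> PAGE_SHIFT] = slot | 0x80;
    }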
This just brings all linear map related handling to one place instead of
having those functions scattered in the hash_utils file.
Makes it easier to review.

No functionality changes in this patch.

Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 164 +--
Kfence on book3s Hash on pseries is anyway broken. It fails to boot
due to the RMA size limitation. That is because kfence with Hash uses
the debug_pagealloc infrastructure, and debug_pagealloc allocates a linear
map for the entire DRAM size instead of just the kfence relevant objects.
This means for 16TB of DRAM it w
copy_from_kernel_nofault() can be called when doing a read of /proc/kcore.
/proc/kcore can have some unmapped kfence objects which, when read via
copy_from_kernel_nofault(), can cause page faults. Since the *_nofault()
functions define their own fixup table for handling faults, use that
instead of asking kfence to handle such faults.
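To make the failure mode concrete, a nofault read looks like this
(illustrative caller, not code from the patch; copy_from_kernel_nofault()
returns -EFAULT when the fixup entry catches the fault, and kaddr here is an
assumed possibly-unmapped kernel address):

    #include <linux/uaccess.h>

    char buf[64];
    /* e.g. /proc/kcore reading what turns out to be a kfence guard page */
    if (copy_from_kernel_nofault(buf, (void *)kaddr, sizeof(buf)))
            return -EFAULT;   /* fixup fired; no kfence report should result */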
From: Nirjhar Roy

Faults from copy_from_kernel_nofault() need to be handled by the fixup
table and should not be handled by kfence. Otherwise, while reading
/proc/kcore, which uses copy_from_kernel_nofault(), kfence can generate
false negatives. This can happen when /proc/kcore ends up reading an
unmapped kfence object.
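In the powerpc fault path, the fix presumably amounts to consulting the
exception tables before reporting to kfence, roughly as below (a sketch; the
exact call site in arch/powerpc/mm/fault.c and the return values are my
assumptions):

    /* kernel fault on a kernel address: prefer the nofault fixup */
    if (is_kernel_addr(address) &&
        search_exception_tables(instruction_pointer(regs)))
            return SIGSEGV;     /* the fixup entry will handle the fault */

    /* only now let kfence look at it */
    if (kfence_handle_page_fault(address, is_write, regs))
            return 0;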
This patch series addresses the following to improve kfence support on Powerpc.
1. Usage of copy_from_kernel_nofault() within the kernel, such as reads of
/proc/kcore, can cause kfence to report false negatives.
2. (book3s64) Kfence depends upon the debug_pagealloc infrastructure on Hash.
debug_pageall
Hi Jaroslav,

On Fri, Sep 13, 2024 at 10:29 AM Shengjiu Wang wrote:
>
> On Fri, Sep 6, 2024 at 6:05 PM Shengjiu Wang wrote:
> >
> > This function is based on the accelerator implementation
> > for the compress API:
> > https://patchwork.kernel.org/project/alsa-devel/patch/20240731083843.59911-1-pe...@p
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Hi Linus,

Please pull powerpc updates for 6.12. No conflicts that I'm aware of. The VDSO
changes have already been merged via the random tree.

cheers

The following changes since commit de9c2c66ad8e787abec7c9d7eff4f8c3cdd28aed:

Linux 6.11-rc2 (2
Hi,

On 14/09/2024 at 04:22, Luming Yu wrote:
On Fri, Sep 13, 2024 at 02:15:40PM +0200, Christophe Leroy wrote:
On 13/09/2024 at 14:02, Luming Yu wrote:
...

nothing happens after that.
reproduced with ppc64_defconfig

[    0.818972][    T1] Run /init as init process
[    5.851684][  T240
allnoconfig                        clang-18
i386     allnoconfig               gcc-12
i386     allyesconfig              clang-18
i386     allyesconfig              gcc-12
i386     buildonly-randconfig-001-20240918  clang-18
i386     buildonly
From: Yu Luming

ppc always does its own tracking for batch tlb. By trivially enabling
ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH in ppc, the ppc arch can re-use
common code in rmap, reduce overhead, and do optimizations it could not
do without a tlb flushing context at a low architecture level.

Signed-off
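For reference, per my reading of the generic batched-unmap code, an arch that
selects ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH supplies hooks along these lines
(a sketch of the contract, not powerpc code):

    /* declared by the arch, embedded in rmap's deferred-flush state */
    struct arch_tlbflush_unmap_batch;

    /* record a pending flush for @mm at @uaddr while unmapping */
    void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
                                   struct mm_struct *mm, unsigned long uaddr);

    /* flush everything queued in @batch once the unmap pass is done */
    void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);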