Re: [RFC PATCH] powerpc: Add check to select PPC_RADIX_BROADCAST_TLBIE

2025-04-10 Thread Ritesh Harjani (IBM)
The config in the shared link indeed had an unmet dependency, i.e. CONFIG_PPC_64S_HASH_MMU=y # CONFIG_PPC_RADIX_MMU is not set CONFIG_PPC_RADIX_BROADCAST_TLBIE=y. So the fix looks good to me. Please feel free to take: Reviewed-by: Ritesh Harjani (IBM) > --- > arch/powerpc/platforms/powernv/Kconfig | 2 +-
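
For readers skimming the thread: a minimal Kconfig sketch of the fix pattern under review. The option names come from the config fragment above; the exact hunk is an assumption, not the merged patch.

    # arch/powerpc/platforms/powernv/Kconfig (illustrative sketch)
    config PPC_POWERNV
            ...
            # Select the broadcast-TLBIE option only when the Radix MMU
            # is enabled, so a hash-only .config (as in the shared link)
            # can no longer end up with an unmet dependency.
            select PPC_RADIX_BROADCAST_TLBIE if PPC_RADIX_MMU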

Re: [linux-next-20250320][btrfs] Kernel OOPs while running btrfs/108

2025-03-21 Thread Ritesh Harjani (IBM)
+linux-btrfs Venkat Rao Bagalkote writes: > Greetings!!! > > > I am observing Kernel oops while running btrfs/108 TC on IBM Power System. > > Repo: Linux-Next (next-20250320) Looks like this next tag had many btrfs related changes - https://web.git.kernel.org/pub/scm/linux/kernel/git/next/lin

Re: [PATCH v2] KVM: PPC: Enable CAP_SPAPR_TCE_VFIO on pSeries KVM guests

2025-02-16 Thread Ritesh Harjani (IBM)
n [1]. But looks good otherwise. With that addressed in the commit message, please feel free to add - Reviewed-by: Ritesh Harjani (IBM) -ritesh > > arch/powerpc/kvm/powerpc.c | 5 + > 1 file changed, 1 insertion(+), 4 deletions(-) > > diff --git a/arch/powerpc/kvm/powerpc.c b/

Re: [PATCH] arch/powerpc: Remove unused function icp_native_cause_ipi_rm()

2025-01-11 Thread Ritesh Harjani (IBM)
> arch/powerpc/sysdev/xics/icp-native.c | 21 - > 2 files changed, 22 deletions(-) Indeed there are no callers left of this function. Great catch! Looks good to me. Please feel free to add - Reviewed-by: Ritesh Harjani (IBM) -ritesh

Re: [PATCH] KVM: PPC: Enable CAP_SPAPR_TCE_VFIO on pSeries KVM guests

2025-01-10 Thread Ritesh Harjani (IBM)
Amit Machhiwal writes: > Currently, on book3s-hv, the capability KVM_CAP_SPAPR_TCE_VFIO is only > available for KVM Guests running on PowerNV and not for the KVM guests > running on pSeries hypervisors. IIUC it was said here [1] that this capability is not available on pSeries, hence it got removed

Re: [PATCH 1/3] selftest/powerpc/ptrace/core-pkey: Remove duplicate macros

2024-12-16 Thread Ritesh Harjani (IBM)
Clean this up and consolidate the common header definitions into the pkeys.h header file. The changes look good to me. Please feel free to add - Reviewed-by: Ritesh Harjani (IBM) -ritesh

Re: [PATCH v3] powerpc/pseries/eeh: Fix get PE state translation

2024-12-14 Thread Ritesh Harjani (IBM)
series as well for the callers to know whether the EEH recovery is completed. This looks good to me. Please feel free to add - Reviewed-by: Ritesh Harjani (IBM) -ritesh

Re: [PATCH] powerpc/pseries/eeh: Fix get PE state translation

2024-11-15 Thread Ritesh Harjani (IBM)
Narayana Murty N writes: > The PE Reset State "0" obtained from RTAS calls > ibm_read_slot_reset_[state|state2] indicates that > the Reset is deactivated and the PE is not in the MMIO > Stopped or DMA Stopped state. > > With PE Reset State "0", the MMIO and DMA is allowed for > the PE. Looking at

[PATCH v4 -next] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()

2024-11-13 Thread Ritesh Harjani (IBM)
let's enforce pageblock_order to be non-zero during cma_init_reserved_mem() to catch such wrong usages. Acked-by: David Hildenbrand Acked-by: Zi Yan Reviewed-by: Anshuman Khandual Signed-off-by: Ritesh Harjani (IBM) --- RFCv3 -> v4: 1. Dropped the RFC tag as requested by Andrew. 2. Updated
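
A minimal sketch of the enforcement described above, assuming the check sits near the top of cma_init_reserved_mem() in mm/cma.c; the message text and error code here are assumptions, not the merged hunk.

    /* CMA alignment is derived from pageblock_order, so a reservation
     * made while pageblock_order is still zero is a caller bug worth
     * catching loudly. */
    if (!pageblock_order) {
            pr_err("pageblock_order is zero: cma_init_reserved_mem() called too early\n");
            return -EINVAL;
    }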

Re: [PATCH v2 1/2] powerpc/fadump: allocate memory for additional parameters early

2024-11-11 Thread Ritesh Harjani (IBM)
arch/powerpc/kernel/prom.c > @@ -908,6 +908,9 @@ void __init early_init_devtree(void *params) > > mmu_early_init_devtree(); > > + /* Setup param area for passing additional parameters to fadump capture > kernel. */ > + fadump_setup_param_area(); > + Maybe we should add

Re: [PATCH v2 2/2] fadump: reserve param area if below boot_mem_top

2024-11-11 Thread Ritesh Harjani (IBM)
Sourabh Jain writes: > The param area is a memory region where the kernel places additional > command-line arguments for fadump kernel. Currently, the param memory > area is reserved in fadump kernel if it is above boot_mem_top. However, > it should be reserved if it is below boot_mem_top because

Re: [PATCH v3] KVM: PPC: Book3S HV: Mask off LPCR_MER for a vCPU before running it to avoid spurious interrupts

2024-11-06 Thread Ritesh Harjani (IBM)
> pending > + * external interrupts. Hence, explicitly mask off MER > bit > + * here as otherwise it may generate spurious > interrupts in L2 KVM > + * causing an endless loop, which results in L2 guest > g

Re: [PATCH v2] KVM: PPC: Book3S HV: Mask off LPCR_MER for a vCPU before running it to avoid spurious interrupts

2024-10-24 Thread Ritesh Harjani (IBM)
Gautam Menghani writes: > Mask off the LPCR_MER bit before running a vCPU to ensure that it is not > set if there are no pending interrupts. Running a vCPU with LPCR_MER bit > set and no pending interrupts results in L2 vCPU getting an infinite flood > of spurious interrupts. The 'if check' in kv
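
A minimal sketch of the masking being discussed, assuming it runs just before vCPU entry; vcpu_has_pending_ext_irq() is a hypothetical helper standing in for the real pending-interrupt check.

    /* Never enter the guest with a stale LPCR_MER: set it only when an
     * interrupt is genuinely pending, otherwise the L2 vCPU sees an
     * endless flood of spurious mediated external interrupts. */
    lpcr &= ~LPCR_MER;
    if (vcpu_has_pending_ext_irq(vcpu))     /* hypothetical helper */
            lpcr |= LPCR_MER;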

[PATCH v3 01/12] powerpc: mm/fault: Fix kfence page fault reporting

2024-10-18 Thread Ritesh Harjani (IBM)
Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/fault.c | 11 +-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index 81c77ddce2e3..316f5162ffc4 100644 --- a/arch/powerpc/mm/fault.c +++ b/arch/powerpc/mm/fault.c
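
The shape of the reporting fix, as a hedged sketch rather than the exact hunk: kernel faults on addresses inside the kfence pool should be routed to kfence's reporter instead of falling through as a plain kernel oops.

    /* In the bad-kernel-fault path of arch/powerpc/mm/fault.c (sketch):
     * let kfence claim and report faults on its own pool addresses. */
    if (is_kfence_address((void *)address) &&
        kfence_handle_page_fault(address, is_write, regs))
            return 0;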

[PATCH v3] mm/kfence: Add a new kunit test test_use_after_free_read_nofault()

2024-10-18 Thread Ritesh Harjani (IBM)
unmapped address from kfence pool. Let's add a testcase to cover this case. Co-developed-by: Ritesh Harjani (IBM) Signed-off-by: Nirjhar Roy Signed-off-by: Ritesh Harjani (IBM) --- It will be nice if we can get some feedback on this. v2 -> v3: = 1. Separated out this kfence kunit test
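
Roughly what such a testcase looks like, sketched from the description above; test_alloc() and test_free() are kfence_test.c internals, and the exact assertions are assumptions.

    static void test_use_after_free_read_nofault(struct kunit *test)
    {
            const size_t size = 32;
            char *addr = test_alloc(test, size, GFP_KERNEL, ALLOCATE_ANY);
            char dst;
            int ret;

            test_free(addr);
            /* A nofault read of the freed (possibly unmapped) kfence
             * address must fail gracefully, not fault the kernel. */
            ret = copy_from_kernel_nofault(&dst, addr, 1);
            KUNIT_EXPECT_NE(test, ret, 0);
    }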

[PATCH v4 3/3] powerpc/fadump: Move fadump_cma_init to setup_arch() after initmem_init()

2024-10-18 Thread Ritesh Harjani (IBM)
ret_from_kernel_user_thread+0x14/0x1c Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment") Suggested-by: David Hildenbrand Reported-by: Sachin P Bappalige Acked-by: Hari Bathini Reviewed-by: Madhavan Srinivasan Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/include/asm/fad

[PATCH v3 12/12] book3s64/hash: Early detect debug_pagealloc size requirement

2024-10-18 Thread Ritesh Harjani (IBM)
decide linear map pagesize if hash supports either debug_pagealloc or kfence. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 25 + 1 file changed, 13 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch

[PATCH v3 10/12] book3s64/radix: Refactoring common kfence related functions

2024-10-18 Thread Ritesh Harjani (IBM)
used for kernel linear map in book3s64. This patch refactors out the common functions required to detect whether kfence early init is enabled or not. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/include/asm/kfence.h | 8 ++-- arch/powerpc/mm/book3s64/pgtable.c | 13 +

[PATCH v3 11/12] book3s64/hash: Disable kfence if not early init

2024-10-18 Thread Ritesh Harjani (IBM)
if kfence early init is not enabled. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c index 558d6f5202b9..2f5dd6310a8f 10

[PATCH v3 09/12] book3s64/hash: Add kfence functionality

2024-10-18 Thread Ritesh Harjani (IBM)
= 32MB) 4. The hash slot information for kfence memory gets added in linear map in hash_linear_map_add_slot() (which also adds for debug_pagealloc). Reported-by: Pavithra Prakash Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/include/asm/kfence.h | 5 - arch/powerpc/mm/book3s64/has

[PATCH v3 08/12] book3s64/hash: Disable debug_pagealloc if it requires more memory

2024-10-18 Thread Ritesh Harjani (IBM)
Cap the size of the linear map allocation in the RMA region at ppc64_rma_size / 4. If debug_pagealloc requires more memory than that, do not allocate any memory and disable debug_pagealloc. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 15
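
A sketch of the size policy described above, with illustrative variable names; the real accounting lives in hash_utils.c.

    /* Budget: the linear map metadata may use at most a quarter of the
     * RMA region. If debug_pagealloc would exceed that, allocate nothing
     * and disable it. */
    unsigned long size = linear_map_hash_count * sizeof(*linear_map_hash_slots);

    if (size > ppc64_rma_size / 4) {
            pr_info("debug_pagealloc needs 0x%lx bytes, disabling it\n", size);
            linear_map_hash_count = 0;
            return;
    }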

[PATCH v3 07/12] book3s64/hash: Make kernel_map_linear_page() generic

2024-10-18 Thread Ritesh Harjani (IBM)
separate out kfence from debug_pagealloc infrastructure. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 47 ++- 1 file changed, 25 insertions(+), 22 deletions(-) diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/boo

[PATCH v3 06/12] book3s64/hash: Refactor hash__kernel_map_pages() function

2024-10-18 Thread Ritesh Harjani (IBM)
This refactors the hash__kernel_map_pages() function to call hash_debug_pagealloc_map_pages(). This will be useful when we add kfence support. No functionality changes in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 9 - 1 file changed

[PATCH v3 05/12] book3s64/hash: Add hash_debug_pagealloc_alloc_slots() function

2024-10-18 Thread Ritesh Harjani (IBM)
linear_map_hash_slots and linear_map_hash_count variables under the same config too. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 29 --- 1 file changed, 17 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/mm/book3s64/hash_utils.c

[PATCH v3 04/12] book3s64/hash: Add hash_debug_pagealloc_add_slot() function

2024-10-18 Thread Ritesh Harjani (IBM)
This adds a hash_debug_pagealloc_add_slot() function instead of open coding that in htab_bolt_mapping(). This is required since we will be separating the kfence functionality so that it does not depend upon debug_pagealloc. No functionality change in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch

[PATCH v3 03/12] book3s64/hash: Refactor kernel linear map related calls

2024-10-18 Thread Ritesh Harjani (IBM)
This just brings all linear map related handling to one place instead of having those functions scattered in the hash_utils file. Makes it easier to review. No functionality changes in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 164

[PATCH v3 02/12] book3s64/hash: Remove kfence support temporarily

2024-10-18 Thread Ritesh Harjani (IBM)
needs some refactoring. We will bring in kfence support on Hash in later patches. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/include/asm/kfence.h | 5 + arch/powerpc/mm/book3s64/hash_utils.c | 16 +++- 2 files changed, 16 insertions(+), 5 deletions(-) diff --git a/arc

[PATCH v3 00/12] powerpc/kfence: Improve kfence support (mainly Hash)

2024-10-18 Thread Ritesh Harjani (IBM)
kunit testcase patch-1. 2. Fixed a false negative with copy_from_kernel_nofault() in patch-2. 3. Addressed review comments from Christophe Leroy. 4. Added patch-13. Ritesh Harjani (IBM) (12): powerpc: mm/fault: Fix kfence page fault reporting book3s64/hash: Remove kfence support temporarily boo

[PATCH v4 1/3] powerpc/fadump: Refactor and prepare fadump_cma_init for late init

2024-10-18 Thread Ritesh Harjani (IBM)
s false or dump_active, so that in later patches we can call fadump_cma_init() separately from setup_arch(). Acked-by: Hari Bathini Reviewed-by: Madhavan Srinivasan Signed-off-by: Ritesh Harjani (IBM) --- v3 -> v4 = 1. Dropped RFC tag. 2. Updated commit subject from fadump: <>

[PATCH v4 2/3] powerpc/fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem

2024-10-18 Thread Ritesh Harjani (IBM)
later in setup_arch() where pageblock_order is non-zero. Suggested-by: Sourabh Jain Acked-by: Hari Bathini Reviewed-by: Madhavan Srinivasan Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/kernel/fadump.c | 34 ++ 1 file changed, 22 insertions(+), 12

[RFC RESEND v2 13/13] book3s64/hash: Early detect debug_pagealloc size requirement

2024-10-14 Thread Ritesh Harjani (IBM)
decide linear map pagesize if hash supports either debug_pagealloc or kfence. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 25 + 1 file changed, 13 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch

[RFC RESEND v2 12/13] book3s64/hash: Disable kfence if not early init

2024-10-14 Thread Ritesh Harjani (IBM)
if kfence early init is not enabled. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c index 53e6f3a524eb..b6da25719e37 10

[RFC RESEND v2 11/13] book3s64/radix: Refactoring common kfence related functions

2024-10-14 Thread Ritesh Harjani (IBM)
used for kernel linear map in book3s64. This patch refactors out the common functions required to detect whether kfence early init is enabled or not. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/include/asm/kfence.h | 8 ++-- arch/powerpc/mm/book3s64/pgtable.c | 13 +

[RFC RESEND v2 10/13] book3s64/hash: Add kfence functionality

2024-10-14 Thread Ritesh Harjani (IBM)
= 32MB) 4. The hash slot information for kfence memory gets added in linear map in hash_linear_map_add_slot() (which also adds for debug_pagealloc). Reported-by: Pavithra Prakash Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/include/asm/kfence.h | 5 - arch/powerpc/mm/book3s64/has

[RFC RESEND v2 09/13] book3s64/hash: Disable debug_pagealloc if it requires more memory

2024-10-14 Thread Ritesh Harjani (IBM)
Cap the size of the linear map allocation in the RMA region at ppc64_rma_size / 4. If debug_pagealloc requires more memory than that, do not allocate any memory and disable debug_pagealloc. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 15

[RFC RESEND v2 08/13] book3s64/hash: Make kernel_map_linear_page() generic

2024-10-14 Thread Ritesh Harjani (IBM)
separate out kfence from debug_pagealloc infrastructure. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 47 ++- 1 file changed, 25 insertions(+), 22 deletions(-) diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/boo

[RFC RESEND v2 07/13] book3s64/hash: Refactor hash__kernel_map_pages() function

2024-10-14 Thread Ritesh Harjani (IBM)
This refactors the hash__kernel_map_pages() function to call hash_debug_pagealloc_map_pages(). This will be useful when we add kfence support. No functionality changes in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 9 - 1 file changed

[RFC RESEND v2 06/13] book3s64/hash: Add hash_debug_pagealloc_alloc_slots() function

2024-10-14 Thread Ritesh Harjani (IBM)
linear_map_hash_slots and linear_map_hash_count variables under the same config too. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 29 --- 1 file changed, 17 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/mm/book3s64/hash_utils.c

[RFC RESEND v2 05/13] book3s64/hash: Add hash_debug_pagealloc_add_slot() function

2024-10-14 Thread Ritesh Harjani (IBM)
This adds a hash_debug_pagealloc_add_slot() function instead of open coding that in htab_bolt_mapping(). This is required since we will be separating the kfence functionality so that it does not depend upon debug_pagealloc. No functionality change in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch

[RFC RESEND v2 04/13] book3s64/hash: Refactor kernel linear map related calls

2024-10-14 Thread Ritesh Harjani (IBM)
This just brings all linear map related handling to one place instead of having those functions scattered in the hash_utils file. Makes it easier to review. No functionality changes in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 164

[RFC RESEND v2 03/13] book3s64/hash: Remove kfence support temporarily

2024-10-14 Thread Ritesh Harjani (IBM)
needs some refactoring. We will bring in kfence support on Hash in later patches. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/include/asm/kfence.h | 5 + arch/powerpc/mm/book3s64/hash_utils.c | 16 +++- 2 files changed, 16 insertions(+), 5 deletions(-) diff --git a/arc

[RFC RESEND v2 02/13] powerpc: mm: Fix kfence page fault reporting

2024-10-14 Thread Ritesh Harjani (IBM)
PPC32") Reported-by: Disha Goel Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/fault.c | 10 +- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index 81c77ddce2e3..fa825198f29f 100644 --- a/arch/powerpc/mm/fault.c

[RFC RESEND v2 01/13] mm/kfence: Add a new kunit test test_use_after_free_read_nofault()

2024-10-14 Thread Ritesh Harjani (IBM)
unmapped address from kfence pool. Let's add a testcase to cover this case. Co-developed-by: Ritesh Harjani (IBM) Signed-off-by: Ritesh Harjani (IBM) Signed-off-by: Nirjhar Roy Cc: kasan-...@googlegroups.com Cc: Alexander Potapenko Cc: linux...@kvack.org --- mm/kfence/kfence_test.c

[RFC RESEND v2 00/13] powerpc/kfence: Improve kfence support

2024-10-14 Thread Ritesh Harjani (IBM)
Added a kunit testcase patch-1. 2. Fixed a false negative with copy_from_kernel_nofault() in patch-2. 3. Addressed review comments from Christophe Leroy. 4. Added patch-13. Nirjhar Roy (1): mm/kfence: Add a new kunit test test_use_after_free_read_nofault() Ritesh Harjani (IBM) (12): powerpc:

[RFC v3 3/3] fadump: Move fadump_cma_init to setup_arch() after initmem_init()

2024-10-11 Thread Ritesh Harjani (IBM)
ret_from_kernel_user_thread+0x14/0x1c Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment") Suggested-by: David Hildenbrand Reported-by: Sachin P Bappalige Acked-by: Hari Bathini Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/include/asm/fadump.h | 7 +++ arch/powerp

[RFC v3 2/3] fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem

2024-10-11 Thread Ritesh Harjani (IBM)
later in setup_arch() where pageblock_order is non-zero. Suggested-by: Sourabh Jain Acked-by: Hari Bathini Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/kernel/fadump.c | 34 ++ 1 file changed, 22 insertions(+), 12 deletions(-) diff --git a/arch/powerpc

[RFC v3 1/3] fadump: Refactor and prepare fadump_cma_init for late init

2024-10-11 Thread Ritesh Harjani (IBM)
s false or dump_active, so that in later patches we can call fadump_cma_init() separately from setup_arch(). Acked-by: Hari Bathini Signed-off-by: Ritesh Harjani (IBM) --- v2 -> v3: Separated the series into 2 as discussed in v2. [v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512

[RFC v3 -next] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()

2024-10-11 Thread Ritesh Harjani (IBM)
let's enforce pageblock_order to be non-zero during cma_init_reserved_mem(). Acked-by: David Hildenbrand Signed-off-by: Ritesh Harjani (IBM) --- v2 -> v3: Separated the series into 2 as discussed in v2. [v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.l...@gmail.c

[RFC v2 4/4] fadump: Move fadump_cma_init to setup_arch() after initmem_init()

2024-10-11 Thread Ritesh Harjani (IBM)
ret_from_kernel_user_thread+0x14/0x1c Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment") Suggested-by: David Hildenbrand Reported-by: Sachin P Bappalige Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/include/asm/fadump.h | 7 +++ arch/powerpc/kernel/fadump.c |

[RFC v2 3/4] fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem

2024-10-11 Thread Ritesh Harjani (IBM)
later in setup_arch() where pageblock_order is non-zero. Suggested-by: Sourabh Jain Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/kernel/fadump.c | 34 ++ 1 file changed, 22 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/kernel/fadump.c b/arch

[RFC v2 2/4] fadump: Refactor and prepare fadump_cma_init for late init

2024-10-11 Thread Ritesh Harjani (IBM)
s false or dump_active, so that in later patches we can call fadump_cma_init() separately from setup_arch(). Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/kernel/fadump.c | 23 +-- 1 file changed, 9 insertions(+), 14 deletions(-) diff --git a/arch/powerpc/kernel/fadum

[RFC v2 1/4] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()

2024-10-11 Thread Ritesh Harjani (IBM)
let's enforce pageblock_order to be non-zero during cma_init_reserved_mem(). Signed-off-by: Ritesh Harjani (IBM) --- mm/cma.c | 9 + 1 file changed, 9 insertions(+) diff --git a/mm/cma.c b/mm/cma.c index 3e9724716bad..36d753e7a0bf 100644 --- a/mm/cma.c +++ b/mm/cma.c @@ -182,6 +1

[RFC v2 0/4] cma: powerpc fadump fixes

2024-10-11 Thread Ritesh Harjani (IBM)
ed. [v1]: https://lore.kernel.org/linuxppc-dev/c1e66d3e69c8d90988c02b84c79db5d9dd93f053.1728386179.git.ritesh.l...@gmail.com/ Ritesh Harjani (IBM) (4): cma: Enforce non-zero pageblock_order during cma_init_reserved_mem() fadump: Refactor and prepare fadump_cma_init for late init fadump: Reserve

[RFC 2/2] fadump: Make fadump reserve_dump_area_start CMA aligned in case of holes

2024-10-08 Thread Ritesh Harjani (IBM)
kernel code, 4544K rwdata, 17280K rodata, 9216K init, 2212K bss, 218432K reserved, 4210688K cma-reserved) Reported-by: Sourabh Jain Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/kernel/fadump.c | 8 ++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/kernel/

[RFC 1/2] cma: Fix CMA_MIN_ALIGNMENT_BYTES during early_init

2024-10-08 Thread Ritesh Harjani (IBM)
free_unref_page_commit+0x3d4/0x4e4 free_unref_page+0x458/0x6d0 init_cma_reserved_pageblock+0x114/0x198 cma_init_reserved_areas+0x270/0x3e0 do_one_initcall+0x80/0x2f8 kernel_init_freeable+0x33c/0x530 kernel_init+0x34/0x26c ret_from_kernel_user_thread+0x14/0x1c Reported-by: Sachin P Bappalige Signed-off-by: Ritesh Harjani (IBM)

[RFC / PoC v1 1/1] powerpc: Add support for batched unmap TLB flush

2024-09-22 Thread Ritesh Harjani (IBM)
=== NOT FOR MERGE YET === This adds the support for ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH. More details are added to the cover letter. --- arch/powerpc/Kconfig | 1 + arch/powerpc/include/asm/book3s/64/tlbflush.h | 5 +++ arch/powerpc/include/asm/tlbbatch.h | 14
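
For reference, the arch-side contract that ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH implies, sketched after the existing x86 usage; the powerpc-specific bodies below are assumptions, not the RFC's actual code.

    /* Sketch of arch/powerpc/include/asm/tlbbatch.h (assumed shape) */
    #include <linux/cpumask.h>
    #include <linux/mm_types.h>

    struct arch_tlbflush_unmap_batch {
            /* CPUs that may hold stale TLB entries for the unmapped pages */
            struct cpumask cpumask;
    };

    static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
                                                 struct mm_struct *mm,
                                                 unsigned long uaddr)
    {
            /* Record every CPU this mm may have run on; the invalidation
             * itself is deferred until arch_tlbbatch_flush() issues one
             * broadcast flush for the whole batch. */
            cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
    }

The core also expects arch_tlbbatch_should_defer() and arch_tlbbatch_flush() to be provided, per the generic batched-unmap interface.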

[RFC / PoC v1 0/1] powerpc: Add support for batched unmap TLB flush

2024-09-22 Thread Ritesh Harjani (IBM)
(void)p[i]; } /* swap out */ madvise(p, SIZE, MADV_PAGEOUT); } } Ritesh Harjani (IBM) (1): powerpc: Add support for batched unmap TLB flush arch/powerpc/Kconfig | 1 + arch/powerpc/include/asm/book3s/64

[RFC v2 12/13] book3s64/hash: Disable kfence if not early init

2024-09-18 Thread Ritesh Harjani (IBM)
if kfence early init is not enabled. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c index 53e6f3a524eb..b6da25719e37 10

[RFC v2 13/13] book3s64/hash: Early detect debug_pagealloc size requirement

2024-09-18 Thread Ritesh Harjani (IBM)
decide linear map pagesize if hash supports either debug_pagealloc or kfence. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 25 + 1 file changed, 13 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch

[RFC v2 11/13] book3s64/radix: Refactoring common kfence related functions

2024-09-18 Thread Ritesh Harjani (IBM)
used for kernel linear map in book3s64. This patch refactors out the common functions required to detect whether kfence early init is enabled or not. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/include/asm/kfence.h | 8 ++-- arch/powerpc/mm/book3s64/pgtable.c | 13 +

[RFC v2 09/13] book3s64/hash: Disable debug_pagealloc if it requires more memory

2024-09-18 Thread Ritesh Harjani (IBM)
Cap the size of the linear map allocation in the RMA region at ppc64_rma_size / 4. If debug_pagealloc requires more memory than that, do not allocate any memory and disable debug_pagealloc. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 15

[RFC v2 10/13] book3s64/hash: Add kfence functionality

2024-09-18 Thread Ritesh Harjani (IBM)
= 32MB) 4. The hash slot information for kfence memory gets added in linear map in hash_linear_map_add_slot() (which also adds for debug_pagealloc). Reported-by: Pavithra Prakash Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/include/asm/kfence.h | 5 - arch/powerpc/mm/book3s64/has

[RFC v2 06/13] book3s64/hash: Add hash_debug_pagealloc_alloc_slots() function

2024-09-18 Thread Ritesh Harjani (IBM)
linear_map_hash_slots and linear_map_hash_count variables under the same config too. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 29 --- 1 file changed, 17 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/mm/book3s64/hash_utils.c

[RFC v2 08/13] book3s64/hash: Make kernel_map_linear_page() generic

2024-09-18 Thread Ritesh Harjani (IBM)
separate out kfence from debug_pagealloc infrastructure. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 47 ++- 1 file changed, 25 insertions(+), 22 deletions(-) diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/boo

[RFC v2 07/13] book3s64/hash: Refactor hash__kernel_map_pages() function

2024-09-18 Thread Ritesh Harjani (IBM)
This refactors the hash__kernel_map_pages() function to call hash_debug_pagealloc_map_pages(). This will be useful when we add kfence support. No functionality changes in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 9 - 1 file changed

[RFC v2 05/13] book3s64/hash: Add hash_debug_pagealloc_add_slot() function

2024-09-18 Thread Ritesh Harjani (IBM)
This adds a hash_debug_pagealloc_add_slot() function instead of open coding that in htab_bolt_mapping(). This is required since we will be separating the kfence functionality so that it does not depend upon debug_pagealloc. No functionality change in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch

[RFC v2 04/13] book3s64/hash: Refactor kernel linear map related calls

2024-09-18 Thread Ritesh Harjani (IBM)
This just brings all linear map related handling to one place instead of having those functions scattered in the hash_utils file. Makes it easier to review. No functionality changes in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 164

[RFC v2 03/13] book3s64/hash: Remove kfence support temporarily

2024-09-18 Thread Ritesh Harjani (IBM)
needs some refactoring. We will bring in kfence support on Hash in later patches. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/include/asm/kfence.h | 5 + arch/powerpc/mm/book3s64/hash_utils.c | 16 +++- 2 files changed, 16 insertions(+), 5 deletions(-) diff --git a/arc

[RFC v2 02/13] powerpc: mm: Fix kfence page fault reporting

2024-09-18 Thread Ritesh Harjani (IBM)
PPC32") Reported-by: Disha Goel Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/fault.c | 10 +- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index 81c77ddce2e3..fa825198f29f 100644 --- a/arch/powerpc/mm/fault.c

[RFC v2 01/13] mm/kfence: Add a new kunit test test_use_after_free_read_nofault()

2024-09-18 Thread Ritesh Harjani (IBM)
unmapped address from kfence pool. Let's add a testcase to cover this case. Co-developed-by: Ritesh Harjani (IBM) Signed-off-by: Ritesh Harjani (IBM) Signed-off-by: Nirjhar Roy Cc: kasan-...@googlegroups.com Cc: Alexander Potapenko Cc: linux...@kvack.org --- mm/kfence/kfence_test.c

[RFC v2 00/13] powerpc/kfence: Improve kfence support

2024-09-18 Thread Ritesh Harjani (IBM)
rather than 16MB mapping. v1 -> v2: = 1. Added a kunit testcase patch-1. 2. Fixed a false negative with copy_from_kernel_nofault() in patch-2. 3. Addressed review comments from Christophe Leroy. 4. Added patch-13. Nirjhar Roy (1): mm/kfence: Add a new kunit test test_use_after_free_read

[PATCH] powerpc: Use printk instead of WARN in change_memory_attr

2024-08-27 Thread Ritesh Harjani (IBM)
Use pr_warn_once instead of WARN_ON_ONCE, as discussed here [1], to warn about possible use of set_memory_* on the linear map on Hash. [1]: https://lore.kernel.org/all/877cc2fpi2.fsf@mail.lhotse/#t Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/pageattr.c | 5 - 1 file changed, 4
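
The change amounts to swapping a backtrace-producing assertion for a one-time diagnostic; sketched below with an illustrative message string.

    /* A WARN_ON_ONCE splat is overkill here: set_memory_* on the Hash
     * linear map is a known-unsupported case, not a kernel bug, so a
     * single informational line is enough. */
    if (!radix_enabled())
            pr_warn_once("set_memory_* may not work as expected on the Hash MMU linear map\n");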

[RFC v1 10/10] book3s64/hash: Disable kfence if not early init

2024-07-31 Thread Ritesh Harjani (IBM)
if kfence early init is not enabled. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c index c66b9921fc7d..759dbcbf1

[RFC v1 09/10] book3s64/radix: Refactoring common kfence related functions

2024-07-31 Thread Ritesh Harjani (IBM)
used for kernel linear map in book3s64. This patch refactors out the common functions required to detect whether kfence early init is enabled or not. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/include/asm/kfence.h | 2 ++ arch/powerpc/mm/book3s64/radix_pgtable.c | 12 arch/power

[RFC v1 08/10] book3s64/hash: Add kfence functionality

2024-07-31 Thread Ritesh Harjani (IBM)
= 32MB) 4. The hash slot information for kfence memory gets added in linear map in hash_linear_map_add_slot() (which also adds for debug_pagealloc). Reported-by: Pavithra Prakash Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/include/asm/kfence.h | 5 - arch/powerpc/mm/book3s64/has

[RFC v1 07/10] book3s64/hash: Disable debug_pagealloc if it requires more memory

2024-07-31 Thread Ritesh Harjani (IBM)
Cap the size of the linear map allocation in the RMA region at ppc64_rma_size / 4. If debug_pagealloc requires more memory than that, do not allocate any memory and disable debug_pagealloc. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 15

[RFC v1 06/10] book3s64/hash: Make kernel_map_linear_page() generic

2024-07-31 Thread Ritesh Harjani (IBM)
separate out kfence from debug_pagealloc infrastructure. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 47 ++- 1 file changed, 25 insertions(+), 22 deletions(-) diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/boo

[RFC v1 05/10] book3s64/hash: Refactor hash__kernel_map_pages() function

2024-07-31 Thread Ritesh Harjani (IBM)
This refactors the hash__kernel_map_pages() function to call hash_debug_pagealloc_map_pages(). This will be useful when we add kfence support. No functionality changes in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 9 - 1 file changed

[RFC v1 04/10] book3s64/hash: Add hash_debug_pagealloc_alloc_slots() function

2024-07-31 Thread Ritesh Harjani (IBM)
linear_map_hash_slots and linear_map_hash_count variables under the same config too. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 29 --- 1 file changed, 17 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/mm/book3s64/hash_utils.c

[RFC v1 03/10] book3s64/hash: Add hash_debug_pagealloc_add_slot() function

2024-07-31 Thread Ritesh Harjani (IBM)
This adds a hash_debug_pagealloc_add_slot() function instead of open coding that in htab_bolt_mapping(). This is required since we will be separating the kfence functionality so that it does not depend upon debug_pagealloc. No functionality change in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch

[RFC v1 02/10] book3s64/hash: Refactor kernel linear map related calls

2024-07-31 Thread Ritesh Harjani (IBM)
This just brings all linear map related handling to one place instead of having those functions scattered in the hash_utils file. Makes it easier to review. No functionality changes in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 164

[RFC v1 01/10] book3s64/hash: Remove kfence support temporarily

2024-07-31 Thread Ritesh Harjani (IBM)
needs some refactoring. We will bring in kfence support on Hash in later patches. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/include/asm/kfence.h | 5 + arch/powerpc/mm/book3s64/hash_utils.c | 16 +++- 2 files changed, 16 insertions(+), 5 deletions(-) diff --git a/arc

[RFC v1 00/10] book3s64/hash: Improve kfence support

2024-07-31 Thread Ritesh Harjani (IBM)
cases, which makes it not really suitable to be enabled by default on production kernels on Hash. This is because on P8 book3s64, we don't support mapping multiple pagesizes (MPSS) within the kernel linear map segment. Is this understanding correct? Ritesh Harjani (IBM) (10): book3

[PATCH] powerpc/ptdump: Fix walk_vmemmap to also print first vmemmap entry

2024-04-17 Thread Ritesh Harjani (IBM)
walk_vmemmap() was skipping the first vmemmap entry pointed to by the vmemmap_list pointer itself. This patch fixes that. With this we should see the vmemmap entry at 0xc00c for hash, which wasn't getting printed when doing "cat /sys/kernel/debug/kernel_hash_pagetable". Signed-off-by: Ritesh Harjani (IBM)
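
The bug class, sketched with the vmemmap_backing list; the original loop shape is an assumption based on the description above.

    struct vmemmap_backing *ent;

    /* Buggy shape: starting the walk at vmemmap_list->list skips the
     * head entry. Fixed shape: begin at vmemmap_list itself. */
    for (ent = vmemmap_list; ent; ent = ent->list)
            pr_info("vmemmap virt 0x%lx phys 0x%lx\n", ent->virt_addr, ent->phys);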

Re: [PATCH 2/4] fs: define a firmware security filesystem named fwsecurityfs

2022-11-19 Thread Ritesh Harjani (IBM)
Hello Nayna, On 22/11/09 03:10PM, Nayna wrote: > > On 11/9/22 08:46, Greg Kroah-Hartman wrote: > > On Sun, Nov 06, 2022 at 04:07:42PM -0500, Nayna Jain wrote: > > > securityfs is meant for Linux security subsystems to expose policies/logs > > > or any other information. However, there are various

Re: [PATCH v2 1/4] Make place for common balloon code

2022-08-17 Thread Ritesh Harjani
On 22/08/16 12:41PM, Alexander Atanasov wrote: > File already contains code that is common among balloon > drivers so rename it to reflect its contents. > mm/balloon_compaction.c -> mm/balloon_common.c > > Signed-off-by: Alexander Atanasov > --- > MAINTAINERS

Re: [PATCH v2] of: check previous kernel's ima-kexec-buffer against memory bounds

2022-05-24 Thread Ritesh Harjani
Just a minor nit which I noticed. On 22/05/24 11:20AM, Vaibhav Jain wrote: > Presently ima_get_kexec_buffer() doesn't check if the previous kernel's > ima-kexec-buffer lies outside the addressable memory range. This can result > in a kernel panic if the new kernel is booted with 'mem=X' arg and
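
A hedged sketch of the bounds check under review; the helper choice and message are assumptions. The idea is to reject a carried-over buffer that wraps around or lies beyond addressable RAM before trying to reserve it.

    if (addr + size < addr || addr + size > memblock_end_of_DRAM()) {
            pr_warn("IMA kexec buffer at 0x%llx (size 0x%zx) is beyond memory bounds\n",
                    (unsigned long long)addr, size);
            return -EINVAL;
    }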

Re: [powerpc]Kernel crash while running LTP (execveat03) [next-20220315]

2022-03-16 Thread Ritesh Harjani
On 22/03/16 01:47PM, Sachin Sant wrote: > While running LTP tests(execveat03) against 5.17.0-rc8-next-20220315 > On a POWER10 LPAR following crash is seen: > > [ 945.659049] dummy_del_mod: loading out-of-tree module taints kernel. > [ 945.659951] dummy_del_mod: module verification failed: signature

Re: [powerpc] Warning mm/slub.c:3246 during boot (next-20220210) w/ext4

2022-02-10 Thread Ritesh Harjani
On 22/02/10 06:57PM, Sachin Sant wrote: > While booting 5.17.0-rc3-next-20220210 on Power following warning > is seen: > > [ 32.626501] EXT4-fs (sda2): re-mounted. Quota mode: none. > [ 32.627225] [ cut here ] > [ 32.627236] WARNING: CPU: 27 PID: 1084 at mm/slub.c:3246

Re: [PATCHv2] selftests/powerpc/copyloops: Add memmove_64 test

2022-02-07 Thread Ritesh Harjani
[1]: https://lore.kernel.org/all/87sfybl5f9@mpe.ellerman.id.au/ -ritesh On 21/09/13 11:47AM, Ritesh Harjani wrote: > While debugging an issue, we wanted to check whether the arch specific > kernel memmove implementation is correct. > This selftest could help test that. > > Suggested-by: Aneesh Kumar K.V

[PATCHv2] selftests/powerpc/copyloops: Add memmove_64 test

2021-09-12 Thread Ritesh Harjani
While debugging an issue, we wanted to check whether the arch specific kernel memmove implementation is correct. This selftest could help test that. Suggested-by: Aneesh Kumar K.V Suggested-by: Vaibhav Jain Signed-off-by: Ritesh Harjani --- v1 -> v2: Integrated memmove_64 test within copyloops
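
In spirit, the selftest checks overlapping moves against a known-good reference. Below is a minimal userspace sketch, where test_memmove() stands in for the kernel's memmove_64.S entry point that the harness links in (hypothetical symbol name).

    #include <assert.h>
    #include <stddef.h>
    #include <string.h>

    void *test_memmove(void *dst, const void *src, size_t n); /* from the harness (assumed) */

    int main(void)
    {
            char ref[64], buf[64];

            for (int i = 0; i < 64; i++)
                    ref[i] = buf[i] = (char)i;

            /* Overlapping forward move: libc memmove is the reference. */
            memmove(ref + 8, ref, 32);
            test_memmove(buf + 8, buf, 32);
            assert(memcmp(ref, buf, sizeof(ref)) == 0);
            return 0;
    }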

Re: [PATCH 1/1] selftests/powerpc: Add memmove_64 test

2021-09-12 Thread Ritesh Harjani
On 21/09/11 09:26PM, Michael Ellerman wrote: > Ritesh Harjani writes: > > While debugging an issue, we wanted to check whether the arch specific > > kernel memmove implementation is correct. This selftest could help test > > that. > > > > Suggested-by: Aneesh Kumar K.V

Re: [PATCH 1/1] selftests/powerpc: Add memmove_64 test

2021-09-09 Thread Ritesh Harjani
Gentle ping! -ritesh

[PATCH 1/1] selftests/powerpc: Add memmove_64 test

2021-08-18 Thread Ritesh Harjani
While debugging an issue, we wanted to check whether the arch specific kernel memmove implementation is correct. This selftest could help test that. Suggested-by: Aneesh Kumar K.V Suggested-by: Vaibhav Jain Signed-off-by: Ritesh Harjani --- tools/testing/selftests/powerpc/Makefile | 1