if kfence early init is not
enabled.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch/powerpc/mm/book3s64/hash_utils.c
index 53e6f3a524eb..b6da25719e37 100644
1. Separated out this kfence kunit testcase patch-1.
2. Fixed a false negative with copy_from_kernel_nofault() in patch-2.
3. Addressed review comments from Christophe Leroy.
4. Added patch-13.
Nirjhar Roy (1):
mm/kfence: Add a new kunit test test_use_after_free_read_nofault()
Ritesh Harjani (IBM) (12):
powerpc:
Separate out kfence from debug_pagealloc
infrastructure.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 47 ++-
1 file changed, 25 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch/powerpc/mm/boo
unmapped address from kfence pool.
Let's add a testcase to cover this case.
Co-developed-by: Ritesh Harjani (IBM)
Signed-off-by: Ritesh Harjani (IBM)
Signed-off-by: Nirjhar Roy
Cc: kasan-...@googlegroups.com
Cc: Alexander Potapenko
Cc: linux...@kvack.org
---
mm/kfence/kfence_test.c
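For reference, a minimal sketch of what this new test case could look like,
assuming the existing kfence_test.c harness helpers (setup_test_cache(),
test_alloc(), test_free(), report_available()); the actual patch may differ:

static void test_use_after_free_read_nofault(struct kunit *test)
{
	const size_t size = 32;
	char *addr;
	char dst;
	int ret;

	setup_test_cache(test, size, 0, NULL);
	addr = test_alloc(test, size, GFP_KERNEL, ALLOCATE_ANY);
	test_free(addr);
	/* Use-after-free read via a _nofault() copy: must fail gracefully... */
	ret = copy_from_kernel_nofault(&dst, addr, 1);
	KUNIT_EXPECT_EQ(test, ret, -EFAULT);
	/* ...and must not leave a KFENCE report behind for a nofault access. */
	KUNIT_EXPECT_FALSE(test, report_available());
}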
This just brings all linear map related handling to one place instead of
having those functions scattered in the hash_utils file.
This makes it easier to review.
No functionality changes in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 164
This refactors the hash__kernel_map_pages() function to call
hash_debug_pagealloc_map_pages(). This will be useful when we later add
kfence support.
No functionality changes in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 9 -
1 file changed
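A rough sketch of the refactor being described (the exact body lives in
hash_utils.c; treat the details here as an approximation):

static int hash_debug_pagealloc_map_pages(struct page *page, int numpages,
					  int enable)
{
	unsigned long flags, vaddr, lmi;
	int i;

	local_irq_save(flags);
	for (i = 0; i < numpages; i++, page++) {
		vaddr = (unsigned long)page_address(page);
		lmi = __pa(vaddr) >> PAGE_SHIFT;
		if (lmi >= linear_map_hash_count)
			continue;
		if (enable)
			kernel_map_linear_page(vaddr, lmi);
		else
			kernel_unmap_linear_page(vaddr, lmi);
	}
	local_irq_restore(flags);
	return 0;
}

int hash__kernel_map_pages(struct page *page, int numpages, int enable)
{
	/* now a thin wrapper, so a kfence variant can slot in later */
	return hash_debug_pagealloc_map_pages(page, numpages, enable);
}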
linear_map_hash_slots and linear_map_hash_count
variables under the same config too.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 29 ---
1 file changed, 17 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
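Presumably the end result is shaped like this, with the slot array and count
guarded by the same #ifdef as the functions using them (illustrative sketch):

#ifdef CONFIG_DEBUG_PAGEALLOC
static u8 *linear_map_hash_slots;
static unsigned long linear_map_hash_count;
static DEFINE_RAW_SPINLOCK(linear_map_hash_lock);
/* kernel_map_linear_page(), kernel_unmap_linear_page(), ... */
#endif /* CONFIG_DEBUG_PAGEALLOC */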
needs some refactoring.
We will bring in kfence support on Hash in later patches.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h | 5 +
arch/powerpc/mm/book3s64/hash_utils.c | 16 +++-
2 files changed, 16 insertions(+), 5 deletions(-)
diff --git a/arc
PPC32")
Reported-by: Disha Goel
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/fault.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 81c77ddce2e3..fa825198f29f 100644
--- a/arch/powerpc/mm/fault.c
This adds a hash_debug_pagealloc_add_slot() function instead of open
coding that in htab_bolt_mapping(). This is required since we will be
separating the kfence functionality so it does not depend upon debug_pagealloc.
No functionality change in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch
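A sketch of the helper being described, mirroring the previously open-coded
logic (the 0x80 flag marks a valid saved slot, as in the existing code):

static inline void hash_debug_pagealloc_add_slot(phys_addr_t paddr, int slot)
{
	if (!debug_pagealloc_enabled())
		return;
	if ((paddr >> PAGE_SHIFT) < linear_map_hash_count)
		linear_map_hash_slots[paddr >> PAGE_SHIFT] = slot | 0x80;
}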
free_unref_page_commit+0x3d4/0x4e4
free_unref_page+0x458/0x6d0
init_cma_reserved_pageblock+0x114/0x198
cma_init_reserved_areas+0x270/0x3e0
do_one_initcall+0x80/0x2f8
kernel_init_freeable+0x33c/0x530
kernel_init+0x34/0x26c
ret_from_kernel_user_thread+0x14/0x1c
Reported-by: Sachin P Bappalige
Signed-off-by: Ritesh Harjani (IBM)
kernel code, 4544K rwdata, 17280K
rodata, 9216K init, 2212K bss, 218432K reserved, 4210688K cma-reserved)
Reported-by: Sourabh Jain
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/kernel/fadump.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/
is false or dump_active, so
that in later patches we can call fadump_cma_init() separately from
setup_arch().
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/kernel/fadump.c | 23 +--
1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/arch/powerpc/kernel/fadum
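From the description, fadump_cma_init() now guards itself instead of relying
on the call site; a sketch of that shape (the fw_dump flag names are
assumptions based on how fadump.c uses them elsewhere):

void __init fadump_cma_init(void)
{
	/* Return early if fadump is not enabled or if a dump is active. */
	if (!fw_dump.fadump_enabled || fw_dump.dump_active)
		return;
	/* ... the actual CMA area setup follows ... */
}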
later in setup_arch() where pageblock_order is non-zero.
Suggested-by: Sourabh Jain
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/kernel/fadump.c | 34 ++
1 file changed, 22 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/kernel/fadump.c b/arch
ret_from_kernel_user_thread+0x14/0x1c
Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
Suggested-by: David Hildenbrand
Reported-by: Sachin P Bappalige
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/fadump.h | 7 +++
arch/powerpc/kernel/fadump.c |
let's enforce pageblock_order to be non-zero during
cma_init_reserved_mem().
Signed-off-by: Ritesh Harjani (IBM)
---
mm/cma.c | 9 +
1 file changed, 9 insertions(+)
diff --git a/mm/cma.c b/mm/cma.c
index 3e9724716bad..36d753e7a0bf 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -182,6 +1
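The enforcement is essentially a guard at the top of cma_init_reserved_mem(),
along these lines (sketch; the exact message and placement may differ):

	/*
	 * CMA uses CMA_MIN_ALIGNMENT_BYTES as an alignment requirement, which
	 * needs pageblock_order to be initialized. Enforce it here.
	 */
	if (!pageblock_order) {
		pr_err("pageblock_order not yet initialized. Called during early boot?\n");
		return -EINVAL;
	}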
[v1]:
https://lore.kernel.org/linuxppc-dev/c1e66d3e69c8d90988c02b84c79db5d9dd93f053.1728386179.git.ritesh.l...@gmail.com/
Ritesh Harjani (IBM) (4):
cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
fadump: Refactor and prepare fadump_cma_init for late init
fadump: Rese
decide
linear map pagesize if hash supports either debug_pagealloc or
kfence.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 25 +
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch
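I.e., the linear-map page-size decision presumably reduces to something like
this in htab_init_page_sizes() (the helper name here is illustrative):

static inline bool debug_pagealloc_enabled_or_kfence(void)
{
	return IS_ENABLED(CONFIG_KFENCE) || debug_pagealloc_enabled();
}

	/* in htab_init_page_sizes(): force a 4K linear mapping when needed */
	if (debug_pagealloc_enabled_or_kfence())
		mmu_linear_psize = MMU_PAGE_4K;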
ret_from_kernel_user_thread+0x14/0x1c
Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
Suggested-by: David Hildenbrand
Reported-by: Sachin P Bappalige
Acked-by: Hari Bathini
Reviewed-by: Madhavan Srinivasan
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/fad
unmapped address from kfence pool.
Let's add a testcase to cover this case.
Co-developed-by: Ritesh Harjani (IBM)
Signed-off-by: Nirjhar Roy
Signed-off-by: Ritesh Harjani (IBM)
---
It will be nice if we can get some feedback on this.
v2 -> v3:
=
1. Separated out this kfence kunit t
kunit testcase patch-1.
2. Fixed a false negative with copy_from_kernel_nofault() in patch-2.
3. Addressed review comments from Christophe Leroy.
4. Added patch-13.
Ritesh Harjani (IBM) (12):
powerpc: mm/fault: Fix kfence page fault reporting
book3s64/hash: Remove kfence support temporarily
boo
= 32MB)
4. The hash slot information for kfence memory gets added to the linear map
in hash_linear_map_add_slot() (which also does so for debug_pagealloc).
Reported-by: Pavithra Prakash
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h | 5 -
arch/powerpc/mm/book3s64/has
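From point 4 above, the slot bookkeeping funnels through one entry point,
roughly like this (sketch; hash_kfence_add_slot() is the kfence-side
counterpart implied by the text):

static void hash_linear_map_add_slot(phys_addr_t paddr, int slot)
{
	if (is_kfence_address(__va(paddr)))
		hash_kfence_add_slot(paddr, slot);
	else
		hash_debug_pagealloc_add_slot(paddr, slot);
}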
Make the size of the linear map to be allocated in the RMA region
ppc64_rma_size / 4. If debug_pagealloc requires more memory than that,
then do not allocate any memory and disable debug_pagealloc.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 15
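That is, the slot allocation caps itself against ppc64_rma_size / 4, roughly
like this (sketch, not the exact hunk):

static void __init hash_debug_pagealloc_alloc_slots(void)
{
	unsigned long max_hash_count = ppc64_rma_size / 4 >> PAGE_SHIFT;

	if (!debug_pagealloc_enabled())
		return;
	if (linear_map_hash_count > max_hash_count) {
		pr_info("linear map size (%llu) greater than 1/4 of RMA region (%llu). Disabling debug_pagealloc\n",
			(u64)linear_map_hash_count << PAGE_SHIFT,
			(u64)ppc64_rma_size / 4);
		linear_map_hash_count = 0;
		return;
	}
	/* allocation of linear_map_hash_slots from the RMA region follows */
}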
if kfence early init is not
enabled.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch/powerpc/mm/book3s64/hash_utils.c
index 558d6f5202b9..2f5dd6310a8f 100644
d for kernel
linear map in book3s64.
This patch refactors out the common functions required to detect whether
kfence early init is enabled or not.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h| 8 ++--
arch/powerpc/mm/book3s64/pgtable.c | 13 +
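The common helper factored out is presumably of this shape (sketch; assumes a
kfence_early_init flag set while parsing the early command line):

extern bool kfence_early_init;

static inline bool kfence_early_init_enabled(void)
{
	return IS_ENABLED(CONFIG_KFENCE) && kfence_early_init;
}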
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/fault.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 81c77ddce2e3..316f5162ffc4 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
is false or dump_active, so
that in later patches we can call fadump_cma_init() separately from
setup_arch().
Acked-by: Hari Bathini
Reviewed-by: Madhavan Srinivasan
Signed-off-by: Ritesh Harjani (IBM)
---
v3 -> v4
=
1. Dropped RFC tag.
2. Updated commit subject from fadump: <>
later in setup_arch() where pageblock_order is non-zero.
Suggested-by: Sourabh Jain
Acked-by: Hari Bathini
Reviewed-by: Madhavan Srinivasan
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/kernel/fadump.c | 34 ++
1 file changed, 22 insertions(+), 12
later in setup_arch() where pageblock_order is non-zero.
Suggested-by: Sourabh Jain
Acked-by: Hari Bathini
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/kernel/fadump.c | 34 ++
1 file changed, 22 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc
ret_from_kernel_user_thread+0x14/0x1c
Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
Suggested-by: David Hildenbrand
Reported-by: Sachin P Bappalige
Acked-by: Hari Bathini
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/fadump.h | 7 +++
arch/powerp
let's enforce pageblock_order to be non-zero during
cma_init_reserved_mem().
Acked-by: David Hildenbrand
Signed-off-by: Ritesh Harjani (IBM)
---
v2 -> v3: Separated the series into 2 as discussed in v2.
[v2]:
https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.l...@gmail.c
is false or dump_active, so
that in later patches we can call fadump_cma_init() separately from
setup_arch().
Acked-by: Hari Bathini
Signed-off-by: Ritesh Harjani (IBM)
---
v2 -> v3: Separated the series into 2 as discussed in v2.
[v2]:
https://lore.kernel.org/linuxppc-dev/cover.1728585512
Gautam Menghani writes:
> Mask off the LPCR_MER bit before running a vCPU to ensure that it is not
> set if there are no pending interrupts. Running a vCPU with LPCR_MER bit
> set and no pending interrupts results in L2 vCPU getting an infinite flood
> of spurious interrupts. The 'if check' in kv
erpc/kernel/prom.c
> @@ -908,6 +908,9 @@ void __init early_init_devtree(void *params)
>
> mmu_early_init_devtree();
>
> + /* Setup param area for passing additional parameters to fadump capture kernel. */
> + fadump_setup_param_area();
> +
Maybe we should add
Sourabh Jain writes:
> The param area is a memory region where the kernel places additional
> command-line arguments for fadump kernel. Currently, the param memory
> area is reserved in fadump kernel if it is above boot_mem_top. However,
> it should be reserved if it is below boot_mem_top because
let's enforce pageblock_order to be non-zero during
cma_init_reserved_mem() to catch such wrong usages.
Acked-by: David Hildenbrand
Acked-by: Zi Yan
Reviewed-by: Anshuman Khandual
Signed-off-by: Ritesh Harjani (IBM)
---
RFCv3 -> v4:
1. Dropped the RFC tag as requested by Andrew.
2. Upd
mm/kfence: Add a new kunit test test_use_after_free_read_nofault()
Ritesh Harjani (IBM) (12):
powerpc: mm: Fix kfence page fault reporting
book3s64/hash: Remove kfence support temporarily
book3s64/hash: Refactor kernel linear map related calls
book3s64/hash: Add hash_debug_pagealloc_add_slot() function
book3s64/hash: Add hash_debug_pageal
diff --git a/arch/powerpc/include/asm/tlbbatch.h
b/arch/powerpc/include/asm/tlbbatch.h
new file mode 100644
index 000000000000..fa738462a242
--- /dev/null
+++ b/arch/powerpc/include/asm/tlbbatch.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2024 IBM Corporation.
+ */
+#ifndef _ASM_POWERPC_TLBBATCH_H
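The elided remainder of this 14-line header presumably just defines the arch
batch state; by analogy with other architectures it could look like this (a
guess, not the verbatim patch):

+#define _ASM_POWERPC_TLBBATCH_H
+
+struct arch_tlbflush_unmap_batch {
+	/* book3s64 radix can flush a whole mm, so tracking the mm suffices */
+	struct mm_struct *mm;
+};
+
+#endif /* _ASM_POWERPC_TLBBATCH_H */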
			(void)p[i];
		}
		/* swap out */
		madvise(p, SIZE, MADV_PAGEOUT);
	}
}
Ritesh Harjani (IBM) (1):
powerpc: Add support for batched unmap TLB flush
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/book3s/64
> pending
> + * external interrupts. Hence, explicitly mask off MER bit
> + * here as otherwise it may generate spurious interrupts in L2 KVM
> + * causing an endless loop, which results in L2 guest g
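In effect the change under review is of this shape just before vCPU entry
(sketch; the variable names and the exact spot in the HV entry path are
assumptions):

	/* Only run with LPCR_MER set when an interrupt really is pending. */
	lpcr &= ~LPCR_MER;
	if (vcpu->arch.pending_exceptions)
		lpcr |= LPCR_MER;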
Narayana Murty N writes:
> The PE Reset State "0" obtained from RTAS calls
> ibm_read_slot_reset_[state|state2] indicates that
> the Reset is deactivated and the PE is not in the MMIO
> Stopped or DMA Stopped state.
>
> With PE Reset State "0", the MMIO and DMA is allowed for
> the PE.
Looking a
clean this up and consolidate the common header definitions
into the pkeys.h header file. The changes look good to me. Please feel free
to add -
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
series as well for the callers to know whether the EEH recovery is
completed.
This looks good to me. Please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
> arch/powerpc/sysdev/xics/icp-native.c | 21 -
> 2 files changed, 22 deletions(-)
Indeed there are no callers left of this function. Great catch!
Looks good to me. Please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
Amit Machhiwal writes:
> Currently, on book3s-hv, the capability KVM_CAP_SPAPR_TCE_VFIO is only
> available for KVM Guests running on PowerNV and not for the KVM guests
> running on pSeries hypervisors.
IIUC it was said here [1] that this capability is not available on
pSeries, hence it got rem
n [1].
But looks good otherwise. With that addressed in the commit message,
please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
>
> arch/powerpc/kvm/powerpc.c | 5 +
> 1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/kvm/powerpc.c b/
+linux-btrfs
Venkat Rao Bagalkote writes:
> Greetings!!!
>
>
> I am observing a Kernel oops while running the btrfs/108 TC on IBM Power System.
>
> Repo: Linux-Next (next-20250320)
Looks like this next tag had many btrfs related changes -
https://web.git.kernel.org/pub/scm/lin
in the shared link indeed had an unmet
dependency, i.e.
CONFIG_PPC_64S_HASH_MMU=y
# CONFIG_PPC_RADIX_MMU is not set
CONFIG_PPC_RADIX_BROADCAST_TLBIE=y
So, the fix looks good to me. Please feel free to take:
Reviewed-by: Ritesh Harjani (IBM)
> ---
> arch/powerpc/platforms/powernv/Kconfig | 2 +-
tests+0x1b4/0x334
[c4a2fa40] [c206db34] debug_vm_pgtable+0xcbc/0x1c48
[c4a2fc10] [c000fd28] do_one_initcall+0x60/0x388
Fixes: 27af67f35631 ("powerpc/book3s64/mm: enable transparent pud hugepage")
Signed-off-by: Aneesh Kumar K.V (IBM)
---
mm/debug_v
This alignment value will work for both
hash and radix translations.
Signed-off-by: Aneesh Kumar K.V (IBM)
---
arch/powerpc/kernel/prom.c | 7 +--
arch/powerpc/kernel/prom_init.c | 4 ++--
2 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kernel/prom.c b/arch
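The alignment in question is presumably a single clamp of memory_limit, e.g.
(the 16MB constant is an assumption, matching the largest hash mapping size):

	/* Align memory_limit down so one value is valid for hash and radix. */
	memory_limit = ALIGN_DOWN(memory_limit, SZ_16M);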
Cc: Mahesh Salgaonkar
Signed-off-by: Aneesh Kumar K.V (IBM)
---
arch/powerpc/kernel/fadump.c | 16
1 file changed, 16 deletions(-)
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index d14eda1e8589..4e768d93c6d4 100644
--- a/arch/powerpc/kernel
s will see the new aligned value of the memory limit.
Signed-off-by: Aneesh Kumar K.V (IBM)
---
arch/powerpc/kernel/prom.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 7451bedad1f4..b8f764453eaa 100644