n the shared link, indeed had an unmet
dependency, i.e.
CONFIG_PPC_64S_HASH_MMU=y
# CONFIG_PPC_RADIX_MMU is not set
CONFIG_PPC_RADIX_BROADCAST_TLBIE=y
So, the fix looks good to me. Please feel free to take:
Reviewed-by: Ritesh Harjani (IBM)
> ---
> arch/powerpc/platforms/powernv/Kconfig | 2 +-
+linux-btrfs
Venkat Rao Bagalkote writes:
> Greetings!!!
>
>
> I am observing a Kernel oops while running btrfs/108 TC on IBM Power System.
>
> Repo: Linux-Next (next-20250320)
Looks like this next tag had many btrfs related changes -
https://web.git.kernel.org/pub/scm/linux/kernel/git/next/lin
n [1].
But looks good otherwise. With that addressed in the commit message,
please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
>
> arch/powerpc/kvm/powerpc.c | 5 +
> 1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/kvm/powerpc.c b/
> arch/powerpc/sysdev/xics/icp-native.c | 21 -
> 2 files changed, 22 deletions(-)
Indeed there are no callers left of this function. Great catch!
Looks good to me. Please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
Amit Machhiwal writes:
> Currently, on book3s-hv, the capability KVM_CAP_SPAPR_TCE_VFIO is only
> available for KVM Guests running on PowerNV and not for the KVM guests
> running on pSeries hypervisors.
IIUC it was said here [1] that this capability is not available on
pSeries, hence it got rem
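For reference, availability of the capability can be probed from userspace
with the standard KVM_CHECK_EXTENSION ioctl. A minimal sketch (this is just
an illustration, not part of the patch under review):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDONLY);

	if (kvm < 0) {
		perror("open /dev/kvm");
		return 1;
	}
	/* > 0 means the capability is available on this host */
	printf("KVM_CAP_SPAPR_TCE_VFIO: %d\n",
	       ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_SPAPR_TCE_VFIO));
	return 0;
}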
an this up and consolidate the common header definitions
into the pkeys.h header file. The changes look good to me. Please feel free
to add -
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
series as well for the callers to know whether the EEH recovery is
completed.
This looks good to me. Please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
Narayana Murty N writes:
> The PE Reset State "0" obtained from RTAS calls
> ibm_read_slot_reset_[state|state2] indicates that
> the Reset is deactivated and the PE is not in the MMIO
> Stopped or DMA Stopped state.
>
> With PE Reset State "0", the MMIO and DMA is allowed for
> the PE.
Looking a
let's enforce pageblock_order to be non-zero during
cma_init_reserved_mem() to catch such wrong usages.
Acked-by: David Hildenbrand
Acked-by: Zi Yan
Reviewed-by: Anshuman Khandual
Signed-off-by: Ritesh Harjani (IBM)
---
RFCv3 -> v4:
1. Dropped the RFC tag as requested by Andrew.
2. Upd
erpc/kernel/prom.c
> @@ -908,6 +908,9 @@ void __init early_init_devtree(void *params)
>
> mmu_early_init_devtree();
>
> + /* Setup param area for passing additional parameters to fadump capture kernel. */
> + fadump_setup_param_area();
> +
Maybe we should add
Sourabh Jain writes:
> The param area is a memory region where the kernel places additional
> command-line arguments for fadump kernel. Currently, the param memory
> area is reserved in fadump kernel if it is above boot_mem_top. However,
> it should be reserved if it is below boot_mem_top because
> pending
> + * external interrupts. Hence, explicitly mask off MER bit
> + * here as otherwise it may generate spurious interrupts in L2 KVM
> + * causing an endless loop, which results in L2 guest g
Gautam Menghani writes:
> Mask off the LPCR_MER bit before running a vCPU to ensure that it is not
> set if there are no pending interrupts. Running a vCPU with LPCR_MER bit
> set and no pending interrupts results in L2 vCPU getting an infinite flood
> of spurious interrupts. The 'if check' in kv
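A hedged sketch of the masking being described (variable and field names
are assumptions based on the commit text, not taken from the actual patch):

	/*
	 * Only keep LPCR_MER set when entering the vCPU if an interrupt
	 * is actually pending; otherwise L2 sees spurious interrupts.
	 */
	if (!vcpu->arch.pending_exceptions)
		lpcr &= ~LPCR_MER;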
ned-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/fault.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 81c77ddce2e3..316f5162ffc4 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
unmapped address from kfence pool.
Let's add a testcase to cover this case.
Co-developed-by: Ritesh Harjani (IBM)
Signed-off-by: Nirjhar Roy
Signed-off-by: Ritesh Harjani (IBM)
---
It will be nice if we can get some feedback on this.
v2 -> v3:
=
1. Separated out this kfence kunit t
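A sketch of what such a testcase might look like (helper names such as
test_alloc(), test_free() and ALLOCATE_ANY are assumed from the existing
mm/kfence/kfence_test.c conventions; the asserted behaviour is what the
commit message describes):

static void test_use_after_free_read_nofault(struct kunit *test)
{
	const size_t size = 32;
	char *addr = test_alloc(test, size, GFP_KERNEL, ALLOCATE_ANY);
	char dst;
	int ret;

	test_free(addr);
	/* A _nofault() read of a freed kfence object must fail gracefully. */
	ret = copy_from_kernel_nofault(&dst, addr, 1);
	KUNIT_EXPECT_EQ(test, ret, -EFAULT);
}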
_thread+0x14/0x1c
Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
Suggested-by: David Hildenbrand
Reported-by: Sachin P Bappalige
Acked-by: Hari Bathini
Reviewed-by: Madhavan Srinivasan
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/fad
decide
linear map pagesize if hash supports either debug_pagealloc or
kfence.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 25 +
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch
d for kernel
linear map in book3s64.
This patch refactors out the common functions required to detect whether
kfence early init is enabled or not.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h| 8 ++--
arch/powerpc/mm/book3s64/pgtable.c | 13 +
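The factored-out helper could look roughly like this (a sketch; the name of
the flag tracking early init is an assumption):

static inline bool kfence_early_init_enabled(void)
{
	return IS_ENABLED(CONFIG_KFENCE) && kfence_early_init;
}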
if kfence early init is not
enabled.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch/powerpc/mm/book3s64/hash_utils.c
index 558d6f5202b9..2f5dd6310a8f 10
= 32MB)
4. The hash slot information for kfence memory gets added in linear map
in hash_linear_map_add_slot() (which also adds for debug_pagealloc).
Reported-by: Pavithra Prakash
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h | 5 -
arch/powerpc/mm/book3s64/has
Make the size of the linear map allocated in the RMA region
ppc64_rma_size / 4. If debug_pagealloc requires more memory than that,
then do not allocate any memory and disable debug_pagealloc.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 15
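Schematically, the sizing rule above amounts to (identifier names here are
illustrative only):

	unsigned long limit = ppc64_rma_size / 4;

	if (debug_pagealloc_size > limit) {
		/* Does not fit: allocate nothing and disable debug_pagealloc. */
		debug_pagealloc_size = 0;
	}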
arate out kfence from debug_pagealloc
infrastructure.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 47 ++-
1 file changed, 25 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch/powerpc/mm/boo
This refactors the hash__kernel_map_pages() function to call
hash_debug_pagealloc_map_pages(). This will come in useful when we
add kfence support.
No functionality changes in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 9 -
1 file changed
linear_map_hash_slots and linear_map_hash_count
variables under the same config too.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 29 ---
1 file changed, 17 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
This adds a hash_debug_pagealloc_add_slot() function instead of open
coding that in htab_bolt_mapping(). This is required since we will be
separating kfence functionality to not depend upon debug_pagealloc.
No functionality change in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch
This just brings all linear map related handling to one place instead of
having those functions scattered in the hash_utils file.
This makes it easier to review.
No functionality changes in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 164
eeds some refactoring.
We will bring in kfence on Hash support in later patches.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h | 5 +
arch/powerpc/mm/book3s64/hash_utils.c | 16 +++-
2 files changed, 16 insertions(+), 5 deletions(-)
diff --git a/arc
kunit testcase patch-1.
2. Fixed a false negative with copy_from_kernel_nofault() in patch-2.
3. Addressed review comments from Christophe Leroy.
4. Added patch-13.
Ritesh Harjani (IBM) (12):
powerpc: mm/fault: Fix kfence page fault reporting
book3s64/hash: Remove kfence support temporarily
boo
s false or dump_active, so
that in later patches we can call fadump_cma_init() separately from
setup_arch().
Acked-by: Hari Bathini
Reviewed-by: Madhavan Srinivasan
Signed-off-by: Ritesh Harjani (IBM)
---
v3 -> v4
=
1. Dropped RFC tag.
2. Updated commit subject from fadump: <>
later in setup_arch() where pageblock_order is non-zero.
Suggested-by: Sourabh Jain
Acked-by: Hari Bathini
Reviewed-by: Madhavan Srinivasan
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/kernel/fadump.c | 34 ++
1 file changed, 22 insertions(+), 12
if kfence early init is not
enabled.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch/powerpc/mm/book3s64/hash_utils.c
index 53e6f3a524eb..b6da25719e37 10
PPC32")
Reported-by: Disha Goel
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/fault.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 81c77ddce2e3..fa825198f29f 100644
--- a/arch/powerpc/mm/fault.c
unmapped address from kfence pool.
Let's add a testcase to cover this case.
Co-developed-by: Ritesh Harjani (IBM)
Signed-off-by: Ritesh Harjani (IBM)
Signed-off-by: Nirjhar Roy
Cc: kasan-...@googlegroups.com
Cc: Alexander Potapenko
Cc: linux...@kvack.org
---
mm/kfence/kfence_test.c
d a kunit testcase patch-1.
2. Fixed a false negative with copy_from_kernel_nofault() in patch-2.
3. Addressed review comments from Christophe Leroy.
4. Added patch-13.
Nirjhar Roy (1):
mm/kfence: Add a new kunit test test_use_after_free_read_nofault()
Ritesh Harjani (IBM) (12):
powerpc:
om_kernel_user_thread+0x14/0x1c
Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
Suggested-by: David Hildenbrand
Reported-by: Sachin P Bappalige
Acked-by: Hari Bathini
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/fadump.h | 7 +++
arch/powerp
later in setup_arch() where pageblock_order is non-zero.
Suggested-by: Sourabh Jain
Acked-by: Hari Bathini
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/kernel/fadump.c | 34 ++
1 file changed, 22 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc
s false or dump_active, so
that in later patches we can call fadump_cma_init() separately from
setup_arch().
Acked-by: Hari Bathini
Signed-off-by: Ritesh Harjani (IBM)
---
v2 -> v3: Separated the series into 2 as discussed in v2.
[v2]:
https://lore.kernel.org/linuxppc-dev/cover.1728585512
let's enforce pageblock_order to be non-zero during
cma_init_reserved_mem().
Acked-by: David Hildenbrand
Signed-off-by: Ritesh Harjani (IBM)
---
v2 -> v3: Separated the series into 2 as discussed in v2.
[v2]:
https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.l...@gmail.c
om_kernel_user_thread+0x14/0x1c
Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
Suggested-by: David Hildenbrand
Reported-by: Sachin P Bappalige
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/fadump.h | 7 +++
arch/powerpc/kernel/fadump.c |
later in setup_arch() where pageblock_order is non-zero.
Suggested-by: Sourabh Jain
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/kernel/fadump.c | 34 ++
1 file changed, 22 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/kernel/fadump.c b/arch
s false or dump_active, so
that in later patches we can call fadump_cma_init() separately from
setup_arch().
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/kernel/fadump.c | 23 +--
1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/arch/powerpc/kernel/fadum
let's enforce pageblock_order to be non-zero during
cma_init_reserved_mem().
Signed-off-by: Ritesh Harjani (IBM)
---
mm/cma.c | 9 +
1 file changed, 9 insertions(+)
diff --git a/mm/cma.c b/mm/cma.c
index 3e9724716bad..36d753e7a0bf 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -182,6 +1
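The truncated hunk presumably adds an early guard in cma_init_reserved_mem();
a sketch of such a check (exact placement and error message are assumptions):

	/*
	 * CMA alignment requirements are derived from pageblock_order, so
	 * calling this before pageblock_order is initialized must fail
	 * rather than silently mis-align the reserved area.
	 */
	if (!pageblock_order)
		return -EINVAL;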
ed.
[v1]:
https://lore.kernel.org/linuxppc-dev/c1e66d3e69c8d90988c02b84c79db5d9dd93f053.1728386179.git.ritesh.l...@gmail.com/
Ritesh Harjani (IBM) (4):
cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
fadump: Refactor and prepare fadump_cma_init for late init
fadump: Rese
ernel code, 4544K rwdata, 17280K
rodata, 9216K init, 2212K bss, 218432K reserved, 4210688K cma-reserved)
Reported-by: Sourabh Jain
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/kernel/fadump.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/
ee_unref_page_commit+0x3d4/0x4e4
free_unref_page+0x458/0x6d0
init_cma_reserved_pageblock+0x114/0x198
cma_init_reserved_areas+0x270/0x3e0
do_one_initcall+0x80/0x2f8
kernel_init_freeable+0x33c/0x530
kernel_init+0x34/0x26c
ret_from_kernel_user_thread+0x14/0x1c
Reported-by: Sachin P Bappalige
Si
=== NOT FOR MERGE YET ===
This adds the support for ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH.
More details are added to the cover letter.
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/book3s/64/tlbflush.h | 5 +++
arch/powerpc/include/asm/tlbbatch.h | 14
(void)p[i];
}
/* swap out */
madvise(p, SIZE, MADV_PAGEOUT);
}
}
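The quoted loop is truncated above; a self-contained version of the same
idea (buffer size and page-size stride are assumptions) could be:

#include <sys/mman.h>

#define SIZE (1UL << 30)	/* 1 GiB, assumed */

int main(void)
{
	char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	for (;;) {
		/* fault every page in */
		for (unsigned long i = 0; i < SIZE; i += 4096)
			(void)*(volatile char *)(p + i);
		/* swap out: triggers unmap and the (batched) TLB flush */
		madvise(p, SIZE, MADV_PAGEOUT);
	}
}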
Ritesh Harjani (IBM) (1):
powerpc: Add support for batched unmap TLB flush
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/book3s/64
ather than 16MB mapping.
v1 -> v2:
=
1. Added a kunit testcase patch-1.
2. Fixed a false negative with copy_from_kernel_nofault() in patch-2.
3. Addressed review comments from Christophe Leroy.
4. Added patch-13.
Nirjhar Roy (1):
mm/kfence: Add a new kunit test test_use_after_free_read
Use pr_warn_once instead of WARN_ON_ONCE as discussed here [1]
for printing a possible use of set_memory_* on the linear map on Hash.
[1]: https://lore.kernel.org/all/877cc2fpi2.fsf@mail.lhotse/#t
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/pageattr.c | 5 -
1 file changed, 4
if kfence early init is not
enabled.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch/powerpc/mm/book3s64/hash_utils.c
index c66b9921fc7d..759dbcbf1
d for kernel
linear map in book3s64.
This patch refactors out the common functions required to detect whether
kfence early init is enabled or not.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h| 2 ++
arch/powerpc/mm/book3s64/radix_pgtable.c | 12
arch/power
cases, which makes it not
really suitable to be enabled by default on production kernels on Hash.
This is because on P8 book3s64, we don't support mapping multiple pagesizes
(MPSS) within the kernel linear map segment. Is this understanding correct?
Ritesh Harjani (IBM) (10):
book3
walk_vmemmap() was skipping the first vmemmap entry pointed to by the
vmemmap_list pointer itself. This patch fixes that.
With this we should see the vmemmap entry at 0xc00c for hash,
which wasn't getting printed on doing
"cat /sys/kernel/debug/kernel_hash_pagetable"
Signed
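For context, vmemmap_list on book3s64 is a singly linked list of
struct vmemmap_backing (with ->list, ->phys and ->virt_addr); the bug class
being fixed is starting the walk one node too late. A sketch (not the
actual ptdump code):

	struct vmemmap_backing *vmem;

	/* buggy walk started at vmemmap_list->list, skipping the head entry */
	for (vmem = vmemmap_list; vmem; vmem = vmem->list)
		pr_info("vmemmap: virt=0x%lx phys=0x%lx\n",
			vmem->virt_addr, vmem->phys);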
Hello Nayna,
On 22/11/09 03:10PM, Nayna wrote:
>
> On 11/9/22 08:46, Greg Kroah-Hartman wrote:
> > On Sun, Nov 06, 2022 at 04:07:42PM -0500, Nayna Jain wrote:
> > > securityfs is meant for Linux security subsystems to expose policies/logs
> > > or any other information. However, there are variou
On 22/08/16 12:41PM, Alexander Atanasov wrote:
> File already contains code that is common along balloon
> drivers so rename it to reflect its contents.
> mm/balloon_compaction.c -> mm/balloon_common.c
>
> Signed-off-by: Alexander Atanasov
> ---
> MAINTAINERS
Just a minor nit which I noticed.
On 22/05/24 11:20AM, Vaibhav Jain wrote:
> Presently ima_get_kexec_buffer() doesn't check if the previous kernel's
> ima-kexec-buffer lies outside the addressable memory range. This can result
> in a kernel panic if the new kernel is booted with 'mem=X' arg and
On 22/03/16 01:47PM, Sachin Sant wrote:
> While running LTP tests(execveat03) against 5.17.0-rc8-next-20220315
> On a POWER10 LPAR following crash is seen:
>
> [ 945.659049] dummy_del_mod: loading out-of-tree module taints kernel.
> [ 945.659951] dummy_del_mod: module verification failed: signatu
On 22/02/10 06:57PM, Sachin Sant wrote:
> While booting 5.17.0-rc3-next-20220210 on Power following warning
> is seen:
>
> [ 32.626501] EXT4-fs (sda2): re-mounted. Quota mode: none.
> [ 32.627225] [ cut here ]
> [ 32.627236] WARNING: CPU: 27 PID: 1084 at mm/slub.c:3246
[1]: https://lore.kernel.org/all/87sfybl5f9@mpe.ellerman.id.au/
-ritesh
On 21/09/13 11:47AM, Ritesh Harjani wrote:
> While debugging an issue, we wanted to check whether the arch specific
> kernel memmove implementation is correct.
> This selftest could help test that.
>
> Suggeste
While debugging an issue, we wanted to check whether the arch specific
kernel memmove implementation is correct.
This selftest could help test that.
Suggested-by: Aneesh Kumar K.V
Suggested-by: Vaibhav Jain
Signed-off-by: Ritesh Harjani
---
v1 -> v2: Integrated memmove_64 test within copylo
On 21/09/11 09:26PM, Michael Ellerman wrote:
> Ritesh Harjani writes:
> > While debugging an issue, we wanted to check whether the arch specific
> > kernel memmove implementation is correct. This selftest could help test
> > that.
> >
> > Suggested-by: Aneesh Kum
Gentle ping!
-ritesh
While debugging an issue, we wanted to check whether the arch specific
kernel memmove implementation is correct. This selftest could help test that.
Suggested-by: Aneesh Kumar K.V
Suggested-by: Vaibhav Jain
Signed-off-by: Ritesh Harjani
---
tools/testing/selftests/powerpc/Makefile | 1
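For readers unfamiliar with the property being exercised: memmove() must
handle overlapping source and destination buffers, which memcpy() need not.
A tiny userspace illustration (the actual selftest links against the
kernel's asm implementation rather than libc):

#include <assert.h>
#include <string.h>

int main(void)
{
	char buf[16] = "0123456789";

	/* overlapping forward copy: only memmove() guarantees this result */
	memmove(buf + 2, buf, 8);
	assert(memcmp(buf, "0101234567", 10) == 0);
	return 0;
}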