: Aneesh Kumar K.V (IBM)
> Signed-off-by: Nathan Lynch
> ---
> arch/powerpc/kernel/rtas.c | 9 +++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
> index eddc031c4b95..1ad1869e2e96 100644
>
+0x1c/0x20
>
> (This is preceded by a warning for the failed lookup in
> rtas_token_to_function().)
>
> This happens when __do_enter_rtas_trace() attempts a token to function
> descriptor lookup before the xarray containing the mappings has been
> set up.
>
> Fall back to linear sca
ing the formatting.
>
Reviewed-by: Aneesh Kumar K.V (IBM)
> Signed-off-by: Nathan Lynch
> ---
> arch/powerpc/include/asm/rtas.h | 25 +++--
> 1 file changed, 19 insertions(+), 6 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/rtas.h b/
Nathan Lynch via B4 Relay
writes:
> From: Nathan Lynch
>
> Move the function descriptor table lookup out of rtas_function_token()
> into a separate routine for use in new code to follow. No functional
> change.
>
Reviewed-by: Aneesh Kumar K.V (IBM)
> Signed
kernel is concerned. User space is responsible for appropriately
> serializing its call sequences. (Whether user space code actually
> takes measures to prevent sequence interleaving is another matter.)
> Examples of such functions currently include ibm,platform-dump and
> ibm,get-vpd.
>
Nathan Lynch via B4 Relay
writes:
> From: Nathan Lynch
>
> Use the function lock API to prevent interleaving call sequences of
> the ibm,activate-firmware RTAS function, which typically requires
> multiple calls to complete the update. While the spec does not
> specifically pr
> isn't satisfied.
>
> __do_enter_rtas_trace() gets reorganized a bit as a result of
> performing the function descriptor lookup unconditionally now.
>
Reviewed-by: Aneesh Kumar K.V (IBM)
> Signed-off-by: Nathan Lynch
> ---
> arch/powerpc/kernel/rtas.c | 21 +---
"Nicholas Piggin" writes:
> On Tue Nov 21, 2023 at 9:23 AM AEST, Masahiro Yamada wrote:
>> crtsavres.o is linked to modules. However, as explained in commit
>> d0e628cd817f ("kbuild: doc: clarify the difference between extra-y
>> and always-y"), 'make modules' does not build extra-y.
>>
>> For ex
"Aneesh Kumar K.V" writes:
> Arm disabled hugetlb vmemmap optimization [1] because hugetlb vmemmap
> optimization includes an update of both the permissions (writeable to
> read-only) and the output address (pfn) of the vmemmap ptes. That is not
> supported without unmapping of pte(marking it inv
"Aneesh Kumar K.V" writes:
These are just some minor nits in case you are going to send another
revision.
> This is enabled only with radix translation and 1G hugepage size. This will
> be used with devdax device memory with a namespace alignment of 1G.
>
> Anon transparent hugepage is not suppo
"Aneesh Kumar K.V" writes:
> This is in preparation to update radix to implement vmemmap optimization
> for devdax. Below are the rules w.r.t radix vmemmap mapping
>
> 1. First try to map things using PMD (2M)
> 2. With altmap if altmap cross-boundary check returns true, fall back to
>PAGE_SI
Peter Xu writes:
> On Thu, Nov 23, 2023 at 06:22:33PM +, Christophe Leroy wrote:
>> > For fast-gup I think the hugepd code is in use, however for walk_page_*
>> > apis hugepd code shouldn't be reached iiuc as we have the hugetlb specific
>> > handling (walk_hugetlb_range()), so anything withi
Nathan Lynch writes:
> "Aneesh Kumar K.V (IBM)" writes:
>> Nathan Lynch via B4 Relay
>> writes:
>>
>>>
>>> Use the function lock API to prevent interleaving call sequences of
>>> the ibm,activate-firmware RTAS function, which typically
Michael Ellerman writes:
> "Aneesh Kumar K.V" writes:
>> There used to be a dependency on _PAGE_PRIVILEGED with pte_savedwrite.
>> But that got dropped by
>> commit 6a56ccbcf6c6 ("mm/autonuma: use can_change_(pte|pmd)_writable() to
>> replace savedwrite")
>>
>> With the change in this patch num
Haren Myneni writes:
> VAS allocate, modify and deallocate HCALLs returns
> H_LONG_BUSY_ORDER_1_MSEC or H_LONG_BUSY_ORDER_10_MSEC for busy
> delay and expects OS to reissue HCALL after that delay. But using
> msleep() will often sleep at least 20 msecs even though the
> hypervisor suggests OS rei
Vaibhav Jain writes:
> From: Jordan Niethe
>
> An L0 must invalidate the L2's RPT during H_GUEST_DELETE if this has not
> already been done. This is a slow operation that means H_GUEST_DELETE
> must return H_BUSY multiple times before completing. Invalidating the
> tables before deleting the gue
Vaibhav Jain writes:
> From: Jordan Niethe
>
> H_COPY_TOFROM_GUEST is part of the nestedv1 API and so should not be
> called by a nestedv2 host. Do not attempt to call it.
>
Maybe we should use
firmware_has_feature(FW_FEATURE_H_COPY_TOFROM_GUEST)?
The nestedv2 can end up using the above hcall
Michael Ellerman writes:
> Aneesh and Naveen are helping out with some aspects of upstream
> maintenance, add them as reviewers.
>
Acked-by: Aneesh Kumar K.V (IBM)
> Signed-off-by: Michael Ellerman
> ---
> MAINTAINERS | 2 ++
> 1 file changed, 2 insertions(+)
>
>
Hari Bathini writes:
> When KFENCE is enabled, total system memory is mapped at page level
> granularity. But in radix MMU mode, ~3GB additional memory is needed
> to map 100GB of system memory at page level granularity when compared
> to using 2MB direct mapping. This is not desired considering
Hari Bathini writes:
> With commit b33f778bba5ef ("kfence: alloc kfence_pool after system
> startup"), KFENCE pool can be allocated after system startup via the
> page allocator. This can lead to problems as all memory is not mapped
> at page granularity anymore with CONFIG_KFENCE. Address this b
dump.o
> obj-$(CONFIG_IO_EVENT_IRQ) += io_event_irq.o
> obj-$(CONFIG_LPARCFG)+= lparcfg.o
> obj-$(CONFIG_IBMVIO) += vio.o
> diff --git a/arch/powerpc/platforms/pseries/htmdump.c
> b/arch/powerpc/platforms/pseries/htmdump.c
> new file mode 100644
> i
Madhavan Srinivasan writes:
> Add documentation to 'papr_hcalls.rst' describing the
> input, output and return values of the H_HTM hcall as
> per the internal specification.
>
> Signed-off-by: Madhavan Srinivasan
> ---
> Documentation/arch/powerpc/papr_hcalls.rst | 11 +++
> 1 file chan
Luis Chamberlain writes:
> On Mon, Aug 26, 2024 at 02:10:49PM -0700, Darrick J. Wong wrote:
>> On Mon, Aug 26, 2024 at 01:52:54PM -0700, Luis Chamberlain wrote:
>> > On Mon, Aug 26, 2024 at 07:43:20PM +0200, Christophe Leroy wrote:
>> > >
>> > >
>> > > Le 26/08/2024 à 17:48, Pankaj Raghav (Sams
Christophe Leroy writes:
> Le 27/08/2024 à 11:12, Ritesh Harjani (IBM) a écrit :
>>
>> Use pr_
Sorry for the delayed response. Was pulled into something else.
Christophe Leroy writes:
> Le 31/07/2024 à 09:56, Ritesh Harjani (IBM) a écrit :
Christophe Leroy writes:
> Le 31/07/2024 à 09:56, Ritesh Harjani (IBM) a écrit :
>>
>> Both rad
Christophe Leroy writes:
> Le 31/07/2024 à 09:56, Ritesh Harjani (IBM) a écrit :
>>
>> Enable
Madhavan Srinivasan writes:
> On 10/11/24 8:30 PM, Ritesh Harjani (IBM) wrote:
>> We anyway don't use any return values from fadump_cma_init(). Since
>> fadump_reserve_mem() from where fadump_cma_init() gets called today,
>> already has the required checks.
>>
Christophe Leroy writes:
> Le 15/10/2024 à 03:33, Ritesh Harjani (IBM) a écrit :
>> copy_from_kernel_nofault() can be called when doing read of /proc/kcore.
>> /proc/kcore can have some unmapped kfence objects which when read via
>> copy_from_kernel_nofault() can cau
Madhavan Srinivasan writes:
>
> Patchset looks fine to me.
>
> Reviewed-by: Madhavan Srinivasan for the series.
>
Thanks Maddy for the reviews!
I will spin PATCH v4 with these minor suggested changes (No code changes)
-ritesh
Michael Ellerman writes:
> Hi Ritesh,
>
> "Ritesh Harjani (IBM)" writes:
>> copy_from_kernel_nofault() can be called when doing read of /proc/kcore.
>> /proc/kcore can have some unmapped kfence objects which when read via
>> copy_from_kernel_nofault() c
David Hildenbrand writes:
> On 08.10.24 15:27, Ritesh Harjani (IBM) wrote:
>> During early init CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE,
>> since pageblock_order is still zero and it gets initialized
>> later during paging_init() e.g.
>> paging_init() -> free_area
the path after initialization.
>
> tools/testing/selftests/powerpc/mm/tlbie_test.c | 10 +-
> 1 file changed, 5 insertions(+), 5 deletions(-)
Thanks for the fix. Looks good to me.
Please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
Michael Ellerman writes:
> "Ritesh Harjani (IBM)" writes:
>> Please find the v2 of cma related powerpc fadump fixes.
>>
>> Patch-1 is a change in mm/cma.c to make sure we return an error if someone
>> uses
>> cma_init_reserved_mem() before the pageblo
&xibm->lock){}-{3:3}, at: xive_spapr_put_ipi+0xb8/0x120
> other info that might help us debug this:
> context-{2:2}
> no locks held by swapper/2/0.
> stack backtrace:
> CPU: 2 UID: 0 PID: 0 Comm: swapper/2 Not tainted
> 6.12.0-rc2-fix-invalid-wait-context-00222-g7d2
.c: flags & HT_MSI_FLAGS_ENABLE ? "enabled"
: "disabled", addr);
> Signed-off-by: Thorsten Blum
> ---
> arch/powerpc/kernel/secure_boot.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
For this patch it looks good to me. P
Marco Elver writes:
> On Fri, 18 Oct 2024 at 19:46, Ritesh Harjani (IBM)
> wrote:
>>
>> From: Nirjhar Roy
>>
>> Faults from copy_from_kernel_nofault() needs to be handled by fixup
>> table and should not be handled by kfence. Otherwise whi
"Ritesh Harjani (IBM)" writes:
> cma_init_reserved_mem() checks base and size alignment with
> CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
> early boot when pageblock_order is 0. That means if base and size does
> not have pageblock_order a
Sourabh Jain writes:
> Hello Ritesh,
>
>
> On 12/11/24 17:23, Ritesh Harjani (IBM) wrote:
>> Ritesh Harjani (IBM) writes:
>>
>>> Sourabh Jain writes:
>>>
>>>> Hello Ritesh,
>>>>
>>>>
>>>> On 12/11/24 11:
Sourabh Jain writes:
> Hello Ritesh,
>
>
> On 12/11/24 11:51, Ritesh Harjani (IBM) wrote:
>> Sourabh Jain writes:
>>
>>> The param area is a memory region where the kernel places additional
>>> command-line arguments for fadump kernel. Currently, the p
Ritesh Harjani (IBM) writes:
> Sourabh Jain writes:
>
>> Hello Ritesh,
>>
>>
>> On 12/11/24 11:51, Ritesh Harjani (IBM) wrote:
>>> Sourabh Jain writes:
>>>
>>>> The param area is a memory region where the kernel places additional
&
Luming Yu writes:
> On Sun, Sep 22, 2024 at 04:39:53PM +0530, Ritesh Harjani wrote:
>> Luming Yu writes:
>>
>> > From: Yu Luming
>> >
>> > ppc always do its own tracking for batch tlb. By trivially enabling
>> > the ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH in ppc, ppc arch can re-use
>> > common code
Luming Yu writes:
> From: Yu Luming
>
> ppc always do its own tracking for batch tlb. By trivially enabling
> the ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH in ppc, ppc arch can re-use
> common code in rmap and reduce overhead and do optimization it could not
> have without a tlb flushing context at low
Guenter Roeck writes:
> Hi,
>
> On Mon, Sep 09, 2024 at 09:02:20AM -0500, Narayana Murty N wrote:
>> VFIO_EEH_PE_INJECT_ERR ioctl is currently failing on pseries
>> due to missing implementation of err_inject eeh_ops for pseries.
>> This patch implements pseries_eeh_err_inject in eeh_ops/pseries
ects() only has to check whether any of the bits is set, and
hence does fewer operations.
Looks good to me. Please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
> Signed-off-by: Costa Shulyupin
> Reviewed-by: Ming Lei
>
> ---
>
> v2: add comparison between cpumask_any_and() a
zhangjiao2 writes:
> From: zhang jiao
>
> Path is not initialized before use,
> remove the unnecessary remove function.
>
> Signed-off-by: zhang jiao
> ---
> tools/testing/selftests/powerpc/mm/tlbie_test.c | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/tools/testing/selftests/powerpc/
e changed, 21 deletions(-)
I couldn't find any remaining references to MPC8540_ADS, MPC8560_ADS or
MPC85xx_CDS
after this patch.
So please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
>
> diff --git a/arch/powerpc/platforms/85xx/Kconfig
> b/arch/powerpc/platforms/85xx/Kconfi
Narayana Murty N writes:
> Makes pseries_eeh_err_inject() available even when debugfs
> is disabled (CONFIG_DEBUG_FS=n). It moves eeh_debugfs_break_device()
> and eeh_pe_inject_mmio_error() out of the CONFIG_DEBUG_FS block
> and renames it as eeh_break_device().
>
> Reported-by: kernel test robot
Christophe Leroy writes:
> Le 19/09/2024 à 04:56, Ritesh Harjani (IBM) a écrit :
>> copy_from_kernel_nofault() can be called when doing read of /proc/kcore.
>> /proc/kcore can have some unmapped kfence objects which when read via
>> copy_from_kernel_nofault() can cau
IG_HTMDUMP)+= htmdump.o
> obj-$(CONFIG_IO_EVENT_IRQ) += io_event_irq.o
> obj-$(CONFIG_LPARCFG)+= lparcfg.o
> obj-$(CONFIG_IBMVIO) += vio.o
> diff --git a/arch/powerpc/platforms/pseries/htmdump.c
> b/arch/powerpc/platforms/ps
es changed, 5 insertions(+), 5 deletions(-)
Not an expert in the kvm area, but the change looks very straightforward to
me. Searching for the "kmv" string in arch/powerpc/ after applying this
patch indeed resulted in zero hits.
Please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
Christophe Leroy writes:
> Le 19/09/2024 à 04:56, Ritesh Harjani (IBM) a écrit :
>> Kfence on book3s Hash on pseries is anyways broken. It fails to boot
>> due to RMA size limitation. That is because, kfence with Hash uses
>> debug_pagealloc infrastructure. debug_pagealloc
Vaibhav Jain writes:
> Hi Ritesh,
>
> Thanks for looking into this patch. My responses your review inline
> below:
>
> Ritesh Harjani (IBM) writes:
>
>> Narayana Murty N writes:
>>
>>> Makes pseries_eeh_err_inject() available even when debugfs
>
ave a kunit test for the
same to make sure all architectures handle this properly.
Thoughts?
[1]: https://lore.kernel.org/all/20230213183858.1473681-1-...@linux.ibm.com/
-ritesh
"Ritesh Harjani (IBM)" writes:
> From: Nirjhar Roy
>
> Faults from copy_from_kernel_nofault(
/run/ext4
# 4k kernel
du -sh /run/ext4
84K /run/ext4
>
> It seems fraught to rely on the ext4.img taking less space on disk than
> the allocated size, so instead create the tmpfs with a size of 2MB. With
> that all 21 tests pass on 64K PAGE_SIZE kernels.
That looks like the right th
mod
> fuse loop nfnetlink xfs sd_mod nvme nvme_core ibmvscsi scsi_transport_srp
> nvme_auth [last unloaded: scsi_debug]
> [16631.058617] CPU: 1 UID: 0 PID: 0 Comm: swapper/1 Kdump: loaded Tainted: G
> W 6.12.0-rc6+ #1
> [16631.058623] Tainted: [W]=WARN
> [16631.05862
gt; CC: Hari Bathini
> CC: Madhavan Srinivasan
> Cc: Mahesh Salgaonkar
> Cc: Michael Ellerman
> CC: Ritesh Harjani (IBM)
> Signed-off-by: Sourabh Jain
> ---
>
> Note: Even with this fix included, it is possible to enable gigantic
> pages in the fadump kernel. IIUC
Ritesh Harjani (IBM) writes:
> Sourabh Jain writes:
>
>> Commit 8597538712eb ("powerpc/fadump: Do not use hugepages when fadump
>> is active") disabled hugetlb support when fadump is active by returning
>> early from hugetlbpage_init():arch/powerpc/mm/h
y.c | 14 +-
> 1 file changed, 1 insertion(+), 13 deletions(-)
>
Similar to previous patch. Cleanup looks good to me.
Please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
Madhavan Srinivasan writes:
> Both core-pkey.c and ptrace-pkey.c tests have similar macro
> definitions, move them to "pkeys.h" and remove the macro
> definitions from the C file.
>
> Signed-off-by: Madhavan Srinivasan
> ---
> tools/testing/selftests/powerpc/include/pkeys.h | 8
>
ional macros pointed out by Ritesh
>which are duplicates and are avilable in "pkeys.h"
Thanks! The changes look good to me.
Please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
Gave a quick run on my lpar too -
# selftests: powerpc/ptrace: core-pkey
# test:
Vaibhav Jain writes:
> Hi Ritesh,
>
> Thanks for looking into this patch. My responses on behalf of Narayana
> below:
>
> "Ritesh Harjani (IBM)" writes:
>
>> Narayana Murty N writes:
>>
>>> The PE Reset State "0" obtained from RT
"off");
> + str_on_off(KERNEL_COHERENCY),
> + str_on_off(devtree_coherency));
> BUG();
> }
Looks good to me. Please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
Christophe Leroy writes:
> Rewrite __real_pte() as a static inline in order to avoid
> following warning/error when building with 4k page size:
>
> CC arch/powerpc/mm/book3s64/hash_tlb.o
> arch/powerpc/mm/book3s64/hash_tlb.c: In function 'hpte_need_flush':
> arch/powerpc/
lot information at the
> right offset for hugetlb")
> Signed-off-by: Christophe Leroy
> ---
> v2: Also inline __rpte_to_hidx() for the same reason
Thanks for addressing the other warning too in v2. I also tested the
changes on my system and this fixes both the reported warnings.
ring_choices.h i.e.
include/linux/seq_file.h -> linux/string_helpers.h ->
linux/string_choices.h
Directly having the string_choices.h include could be better:
#include <linux/string_choices.h>
However, no hard preferences. The patch functionally looks correct to me.
Please feel free to add -
Reviewed
Christophe Leroy writes:
> Le 07/04/2025 à 21:10, Ritesh Harjani (IBM) a écrit :
>> Madhavan Srinivasan writes:
>>
>>> Commit 3d45a3d0d2e6 ("powerpc: Define config option for processors with
>>> broadcast TLBIE")
>>
>> We may need to add
Stefan Berger writes:
> I bisected Linux between 6.13.0 and 6.12.0 due to failing kexec on a
> Power8 baremetal host on 6.13.0:
>
> 8fec58f503b296af87ffca3898965e3054f2b616 is the first bad commit
> commit 8fec58f503b296af87ffca3898965e3054f2b616
> Author: Ritesh Harjani (I
but by
> cumulative
> operations during the test sequence.
>
>
> Environment Details:
> Kernel: 6.15.0-rc1-g521d54901f98
> Reproducible with: 6.15.0-rc2-gf3a2e2a79c9d
Looks like the issue is happening on 6.15-rc2. Did git bisect reveal a
faulty commit?
>
Dan Horák writes:
> Hi,
>
> after updating to Fedora built 6.15-rc2 kernel from 6.14 I am getting a
> soft lockup early in the boot and NVME related timeout/crash later
> (could it be related?). I am first checking if this is a known issue
> as I have not started bisecting yet.
>
> [2.866399]
reate mappings for the vmemmap area. In this, we first try
to allocate a pmd entry using vmemmap_alloc_block_buf() of PMD_SIZE. If we
couldn't allocate one, we should definitely fall back to base page mapping.
Looks good to me. Feel free to add:
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
> Signed-off-by
ore calling vfree().
nitpick: I might have re-phrased the commit msg as:
powerpc/pseries/iommu: Fix kmemleak in TCE table userspace view
The patch looks good to me purely from the kmemleak bug perspective.
So feel free to take:
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
0.00] HugeTLB: hugepages=1 does not follow a valid hugepagesz,
> ignoring
> [0.706375] HugeTLB support is disabled!
> [0.773530] hugetlbfs: disabling because there are no supported hugepage
> sizes
>
> $ cat /proc/meminfo | grep -i "hugetlb"
> -
Sourabh Jain writes:
> Hello Ritesh,
>
> Thanks for the review.
>
> On 02/03/25 12:05, Ritesh Harjani (IBM) wrote:
>> Sourabh Jain writes:
>>
>>> The fadump kernel boots with limited memory solely to collect the kernel
>>> core dump. Having giganti
Christophe Leroy writes:
> Le 10/03/2025 à 13:44, Donet Tom a écrit :
>> From: "Ritesh Harjani (IBM)"
>>
>> Fix compile errors when CONFIG_ARCH_WANT_OPTIMIZE_DAX_VMEMMAP=n
>
> I don't understand your patch.
>
> As far as I can see, CONFIG_AR
Erhard Furtner writes:
> Greetings!
>
> At boot with a KASAN-enabled v6.14-rc4 kernel on my PowerMac G4 DP I get:
>
> [...]
> vmalloc_node_range for size 4198400 failed: Address range restricted to
> 0xf100 - 0xf511
> swapon: vmalloc error: size 4194304, vm_struct allocation failed,
> m
Sourabh Jain writes:
> Hello Ritesh,
>
>
> On 04/03/25 10:27, Ritesh Harjani (IBM) wrote:
>> Sourabh Jain writes:
>>
>>> Hello Ritesh,
>>>
>>> Thanks for the review.
>>>
>>> On 02/03/25 12:05, Ritesh Harjani (IBM) wrote:
>&
o me. Please feel free to add:
Reviewed-by: Ritesh Harjani (IBM)
> Signed-off-by: Gautam Menghani
> ---
> arch/powerpc/kvm/trace_book3s.h | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/powerpc/kvm/trace_book3s.h b/arch/powerpc/kvm/trace_book3s.h
> index 372
From ec1a16a15a86c6224cc0129ab3c2ae9f69f2c7c5 Mon Sep 17 00:00:00 2001
From: Rohan McLure
Date: Mon, 28 Feb 2022 10:19:19 +1100
Subject: [PATCH] powerpc: declare unmodified attribute_group usages
const
To: linuxppc-dev@lists.ozlabs.org
Inspired by (bd75b4ef4977: Constify static attribute_group s
-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/ptdump/hashpagetable.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/ptdump/hashpagetable.c
b/arch/powerpc/mm/ptdump/hashpagetable.c
index 9a601587836b..a6baa6166d94 100644
--- a/arch/powerpc/mm/ptdump/hashpa
Hello Nayna,
On 22/11/09 03:10PM, Nayna wrote:
>
> On 11/9/22 08:46, Greg Kroah-Hartman wrote:
> > On Sun, Nov 06, 2022 at 04:07:42PM -0500, Nayna Jain wrote:
> > > securityfs is meant for Linux security subsystems to expose policies/logs
> > > or any other information. However, there are variou
cases, which makes it not
really suitable to be enabled by default on production kernels on Hash.
This is because on P8 book3s64, we don't support mapping multiple pagesizes
(MPSS) within the kernel linear map segment. Is this understanding correct?
Ritesh Harjani (IBM) (10):
book3
eeds some refactoring.
We will bring in kfence on Hash support in later patches.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h | 5 +
arch/powerpc/mm/book3s64/hash_utils.c | 16 +++-
2 files changed, 16 insertions(+), 5 deletions(-)
diff --git a/arc
This just brings all linear-map-related handling to one place instead of
having those functions scattered in the hash_utils file.
It makes review easier.
No functionality changes in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 164
This adds a hash_debug_pagealloc_add_slot() function instead of open-coding
that in htab_bolt_mapping(). This is required since we will be
separating the kfence functionality so it does not depend upon debug_pagealloc.
No functionality change in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch
linear_map_hash_slots and linear_map_hash_count
variables under the same config too.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 29 ---
1 file changed, 17 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
This refactors the hash__kernel_map_pages() function to call
hash_debug_pagealloc_map_pages(). This will be useful when we add
kfence support.
No functionality changes in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 9 -
1 file changed
arate out kfence from debug_pagealloc
infrastructure.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 47 ++-
1 file changed, 25 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch/powerpc/mm/boo
Make the size of the linear map allocated in the RMA region
ppc64_rma_size / 4. If debug_pagealloc requires more memory than that,
then do not allocate any memory and disable debug_pagealloc.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 15
= 32MB)
4. The hash slot information for kfence memory gets added in linear map
in hash_linear_map_add_slot() (which also adds for debug_pagealloc).
Reported-by: Pavithra Prakash
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h | 5 -
arch/powerpc/mm/book3s64/has
d for kernel
linear map in book3s64.
This patch refactors out the common functions required to detect whether
kfence early init is enabled or not.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h| 2 ++
arch/powerpc/mm/book3s64/radix_pgtable.c | 12
arch/power
if kfence early init is not
enabled.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch/powerpc/mm/book3s64/hash_utils.c
index c66b9921fc7d..759dbcbf1
Use pr_warn_once instead of WARN_ON_ONCE as discussed here [1]
for printing possible use of set_memory_* on linear map on Hash.
[1]: https://lore.kernel.org/all/877cc2fpi2.fsf@mail.lhotse/#t
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/pageattr.c | 5 -
1 file changed, 4
Make the size of the linear map allocated in the RMA region
ppc64_rma_size / 4. If debug_pagealloc requires more memory than that,
then do not allocate any memory and disable debug_pagealloc.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 15
= 32MB)
4. The hash slot information for kfence memory gets added in linear map
in hash_linear_map_add_slot() (which also adds for debug_pagealloc).
Reported-by: Pavithra Prakash
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h | 5 -
arch/powerpc/mm/book3s64/has
d for kernel
linear map in book3s64.
This patch refactors out the common functions required to detect whether
kfence early init is enabled or not.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h| 8 ++--
arch/powerpc/mm/book3s64/pgtable.c | 13 +
decide
linear map pagesize if hash supports either debug_pagealloc or
kfence.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 25 +
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch
if kfence early init is not
enabled.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch/powerpc/mm/book3s64/hash_utils.c
index 53e6f3a524eb..b6da25719e37 10
d a kunit testcase patch-1.
2. Fixed a false negative with copy_from_kernel_nofault() in patch-2.
3. Addressed review comments from Christophe Leroy.
4. Added patch-13.
Nirjhar Roy (1):
mm/kfence: Add a new kunit test test_use_after_free_read_nofault()
Ritesh Harjani (IBM) (12):
powerpc:
arate out kfence from debug_pagealloc
infrastructure.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 47 ++-
1 file changed, 25 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch/powerpc/mm/boo