Re: (subset) [PATCH v3 00/16] timers: Cleanup delay/sleep related mess

2024-10-18 Thread Mark Brown
On Fri, Oct 18, 2024 at 10:06:33AM +0200, Anna-Maria Behnsen wrote: > Would it be ok for you, if the patch is routed through tip tree? kernel > test robot triggers a warning for htmldoc that there is a reference to > the no longer existing file 'timer-howto.rst': > https://lore.kernel.org/r/202

Re: [GIT PULL] Please pull powerpc/linux.git powerpc-6.12-5 tag

2024-10-18 Thread pr-tracker-bot
The pull request you sent on Fri, 18 Oct 2024 13:10:13 +0530: > https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git > tags/powerpc-6.12-5 has been merged into torvalds/linux.git: https://git.kernel.org/torvalds/c/ef444a0aba6d128e5ecd1c8df0f989c356f76b5d Thank you! -- Deet-doot-d

[PATCH v6 09/17] powerpc64/bpf: Fold bpf_jit_emit_func_call_hlp() into bpf_jit_emit_func_call_rel()

2024-10-18 Thread Hari Bathini
From: Naveen N Rao Commit 61688a82e047 ("powerpc/bpf: enable kfunc call") enhanced bpf_jit_emit_func_call_hlp() to handle calls out to module region, where bpf progs are generated. The only difference now between bpf_jit_emit_func_call_hlp() and bpf_jit_emit_func_call_rel() is in handling of the

[PATCH v6 05/17] powerpc/module_64: Convert #ifdef to IS_ENABLED()

2024-10-18 Thread Hari Bathini
From: Naveen N Rao Minor refactor for converting #ifdef to IS_ENABLED(). Reviewed-by: Nicholas Piggin Signed-off-by: Naveen N Rao --- arch/powerpc/kernel/module_64.c | 10 ++ 1 file changed, 2 insertions(+), 8 deletions(-) diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/k

[PATCH v6 08/17] powerpc/ftrace: Move ftrace stub used for init text before _einittext

2024-10-18 Thread Hari Bathini
From: Naveen N Rao Move the ftrace stub used to cover inittext before _einittext so that it is within kernel text, as seen through core_kernel_text(). This is required for a subsequent change to ftrace. Signed-off-by: Naveen N Rao --- arch/powerpc/kernel/vmlinux.lds.S | 3 +-- 1 file changed,

[PATCH v6 02/17] powerpc/kprobes: Use ftrace to determine if a probe is at function entry

2024-10-18 Thread Hari Bathini
From: Naveen N Rao Rather than hard-coding the offset into a function to be used to determine if a kprobe is at function entry, use ftrace_location() to determine the ftrace location within the function and categorize all instructions till that offset to be function entry. For functions that can

[PATCH v6 00/17] powerpc: Core ftrace rework, support for ftrace direct and bpf trampolines

2024-10-18 Thread Hari Bathini
This is v6 of the series posted here: https://lore.kernel.org/all/20240915205648.830121-1-hbath...@linux.ibm.com/ This series reworks core ftrace support on powerpc to have the function profiling sequence moved out of line. This enables us to have a single nop at kernel function entry virtually el

[PATCH v6 04/17] powerpc32/ftrace: Unify 32-bit and 64-bit ftrace entry code

2024-10-18 Thread Hari Bathini
From: Naveen N Rao On 32-bit powerpc, gcc generates a three-instruction sequence for function profiling: mflr r0; stw r0, 4(r1); bl _mcount. On kernel boot, the call to _mcount() is nop-ed out, to be patched back in when ftrace is actually enabled. The 'stw' inst

[PATCH v6 03/17] powerpc64/ftrace: Nop out additional 'std' instruction emitted by gcc v5.x

2024-10-18 Thread Hari Bathini
From: Naveen N Rao Gcc v5.x emits a 3-instruction sequence for -mprofile-kernel: mflr r0; std r0, 16(r1); bl _mcount. Gcc v6.x moved to a simpler 2-instruction sequence by removing the 'std' instruction. The store saved the return address in the LR save area in t

[PATCH v6 01/17] powerpc/trace: Account for -fpatchable-function-entry support by toolchain

2024-10-18 Thread Hari Bathini
From: Naveen N Rao So far, we have relied on the fact that gcc supports both -mprofile-kernel, as well as -fpatchable-function-entry, and clang supports neither. Our Makefile only checks for CONFIG_MPROFILE_KERNEL to decide which files to build. Clang has a feature request out [*] to implement -f

[PATCH v6 14/17] powerpc/ftrace: Add support for DYNAMIC_FTRACE_WITH_CALL_OPS

2024-10-18 Thread Hari Bathini
From: Naveen N Rao Implement support for DYNAMIC_FTRACE_WITH_CALL_OPS similar to the arm64 implementation. This works by patching-in a pointer to an associated ftrace_ops structure before each traceable function. If multiple ftrace_ops are associated with a call site, then a special ftrace_list_

[PATCH v3 12/12] book3s64/hash: Early detect debug_pagealloc size requirement

2024-10-18 Thread Ritesh Harjani (IBM)
Add a hash_supports_debug_pagealloc() helper to detect whether debug_pagealloc can be supported on hash or not. This checks both whether the debug_pagealloc config is enabled and whether the linear map fits within the rma_size/4 region size. This can then be used early during htab_init_page_sizes() to de

[PATCH v6 12/17] powerpc64/ftrace: Move ftrace sequence out of line

2024-10-18 Thread Hari Bathini
From: Naveen N Rao Function profile sequence on powerpc includes two instructions at the beginning of each function: mflr r0; bl ftrace_caller. The call to ftrace_caller() gets nop'ed out during kernel boot and is patched in when ftrace is enabled. Given the sequence, we c

[PATCH v6 16/17] samples/ftrace: Add support for ftrace direct samples on powerpc

2024-10-18 Thread Hari Bathini
From: Naveen N Rao Add powerpc 32-bit and 64-bit samples for ftrace direct. This serves to show the sample instruction sequence to be used by ftrace direct calls to adhere to the ftrace ABI. On 64-bit powerpc, TOC setup requires some additional work. Signed-off-by: Naveen N Rao --- arch/power

[PATCH v6 06/17] powerpc/ftrace: Remove pointer to struct module from dyn_arch_ftrace

2024-10-18 Thread Hari Bathini
From: Naveen N Rao Pointer to struct module is only relevant for ftrace records belonging to kernel modules. Having this field in dyn_arch_ftrace wastes memory for all ftrace records belonging to the kernel. Remove the same in favour of looking up the module from the ftrace record address, simila

[PATCH v6 11/17] kbuild: Add generic hook for architectures to use before the final vmlinux link

2024-10-18 Thread Hari Bathini
From: Naveen N Rao On powerpc, we would like to be able to make a pass on vmlinux.o and generate a new object file to be linked into vmlinux. Add a generic pass in Makefile.vmlinux that architectures can use for this purpose. Architectures need to select CONFIG_ARCH_WANTS_PRE_LINK_VMLINUX and mu

[PATCH v6 15/17] powerpc/ftrace: Add support for DYNAMIC_FTRACE_WITH_DIRECT_CALLS

2024-10-18 Thread Hari Bathini
From: Naveen N Rao Add support for DYNAMIC_FTRACE_WITH_DIRECT_CALLS similar to the arm64 implementation. ftrace direct calls allow custom trampolines to be called into directly from function ftrace call sites, bypassing the ftrace trampoline completely. This functionality is currently utilized b

[PATCH v6 13/17] powerpc64/ftrace: Support .text larger than 32MB with out-of-line stubs

2024-10-18 Thread Hari Bathini
From: Naveen N Rao We are restricted to a .text size of ~32MB when using out-of-line function profile sequence. Allow this to be extended up to the previous limit of ~64MB by reserving space in the middle of .text. A new config option CONFIG_PPC_FTRACE_OUT_OF_LINE_NUM_RESERVE is introduced to sp

[PATCH v6 10/17] powerpc/ftrace: Add a postlink script to validate function tracer

2024-10-18 Thread Hari Bathini
From: Naveen N Rao Function tracer on powerpc can only work with vmlinux having a .text size of up to ~64MB due to powerpc branch instruction having a limited relative branch range of 32MB. Today, this is only detected on kernel boot when ftrace is init'ed. Add a post-link script to check the siz

[PATCH v6 17/17] powerpc64/bpf: Add support for bpf trampolines

2024-10-18 Thread Hari Bathini
From: Naveen N Rao Add support for bpf_arch_text_poke() and arch_prepare_bpf_trampoline() for 64-bit powerpc. While the code is generic, BPF trampolines are only enabled on 64-bit powerpc. 32-bit powerpc will need testing and some updates. BPF Trampolines adhere to the existing ftrace ABI utiliz

[PATCH v4 3/3] powerpc/fadump: Move fadump_cma_init to setup_arch() after initmem_init()

2024-10-18 Thread Ritesh Harjani (IBM)
During early init CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE, since pageblock_order is still zero and it gets initialized later during initmem_init() e.g. setup_arch() -> initmem_init() -> sparse_init() -> set_pageblock_order() One such use case where this causes issue is - early_setup() -> early_in

[PATCH v6 07/17] powerpc/ftrace: Skip instruction patching if the instructions are the same

2024-10-18 Thread Hari Bathini
From: Naveen N Rao To simplify upcoming changes to ftrace, add a check to skip actual instruction patching if the old and new instructions are the same. We still validate that the instruction is what we expect, but don't actually patch the same instruction again. Signed-off-by: Naveen N Rao ---

Re: [PATCH v3 01/12] powerpc: mm/fault: Fix kfence page fault reporting

2024-10-18 Thread Christophe Leroy
On 18/10/2024 at 19:29, Ritesh Harjani (IBM) wrote: copy_from_kernel_nofault() can be called when doing read of /proc/kcore. /proc/kcore can have some unmapped kfence objects which when read via copy_from_kernel_nofault() can cause page faults. Since *_nofault() functions define their own fi

[PATCH v3] mm/kfence: Add a new kunit test test_use_after_free_read_nofault()

2024-10-18 Thread Ritesh Harjani (IBM)
From: Nirjhar Roy Faults from copy_from_kernel_nofault() need to be handled by the fixup table and should not be handled by kfence. Otherwise, while reading /proc/kcore, which uses copy_from_kernel_nofault(), kfence can generate false negatives. This can happen when /proc/kcore ends up reading an unma

Re: [PATCH][next] powerpc/spufs: Replace snprintf() with the safer scnprintf() variant

2024-10-18 Thread Segher Boessenkool
Hi! On Sat, Oct 19, 2024 at 12:50:43PM +1300, Paulo Miguel Almeida wrote: > On Fri, Oct 18, 2024 at 10:38:43AM -0500, Segher Boessenkool wrote: > > On Fri, Oct 18, 2024 at 09:28:19PM +1300, Paulo Miguel Almeida wrote: > > > The C99 standard specifies that {v}snprintf() returns the length of the >

Re: [PATCH v3] mm/kfence: Add a new kunit test test_use_after_free_read_nofault()

2024-10-18 Thread Marco Elver
On Fri, 18 Oct 2024 at 19:46, Ritesh Harjani (IBM) wrote: > > From: Nirjhar Roy > > Faults from copy_from_kernel_nofault() needs to be handled by fixup > table and should not be handled by kfence. Otherwise while reading > /proc/kcore which uses copy_from_kernel_nofault(), kfence can generate > f

Re: [PATCH] ASoC: fsl_micfil: Add a flag to distinguish with different volume control types

2024-10-18 Thread Mark Brown
On Thu, 17 Oct 2024 16:15:07 +0900, Chancel Liu wrote: > On i.MX8MM the register of volume control has positive and negative > values. It is different from other platforms like i.MX8MP and i.MX93 > which only have positive values. Add a volume_sx flag to use SX_TLV > volume control for this kind of

Re: [RFC v3 1/3] fadump: Refactor and prepare fadump_cma_init for late init

2024-10-18 Thread Madhavan Srinivasan
On 10/14/24 4:54 PM, Ritesh Harjani (IBM) wrote: > Madhavan Srinivasan writes: > >> On 10/11/24 8:30 PM, Ritesh Harjani (IBM) wrote: >>> We anyway don't use any return values from fadump_cma_init(). Since >>> fadump_reserve_mem() from where fadump_cma_init() gets called today, >>> already has

Re: [PATCH v2 3/6] x86/uaccess: Rearrange putuser.S

2024-10-18 Thread Josh Poimboeuf
On Fri, Oct 18, 2024 at 11:51:06AM +0300, Kirill A . Shutemov wrote: > On Thu, Oct 17, 2024 at 02:55:22PM -0700, Josh Poimboeuf wrote: > > SYM_FUNC_START(__put_user_2) > > check_range size=2 > > ASM_STAC > > -3: movw %ax,(%_ASM_CX) > > +2: movw %ax,(%_ASM_CX) > > xor %ecx,%ecx > >

Re: [PATCH][next] powerpc/spufs: Replace snprintf() with the safer scnprintf() variant

2024-10-18 Thread Segher Boessenkool
On Fri, Oct 18, 2024 at 09:28:19PM +1300, Paulo Miguel Almeida wrote: > The C99 standard specifies that {v}snprintf() returns the length of the > data that *would have been* written if there were enough space. Not including the trailing zero byte, and it can also return negative if there was an en

[PATCH v3 02/12] book3s64/hash: Remove kfence support temporarily

2024-10-18 Thread Ritesh Harjani (IBM)
Kfence on book3s Hash on pseries is anyway broken. It fails to boot due to the RMA size limitation. That is because kfence with Hash uses the debug_pagealloc infrastructure. debug_pagealloc allocates a linear map for the entire DRAM size instead of just kfence-relevant objects. This means for 16TB of DRAM it w

[PATCH v3 00/12] powerpc/kfence: Improve kfence support (mainly Hash)

2024-10-18 Thread Ritesh Harjani (IBM)
v2 -> v3: 1. Addressed review comments from Christophe in patch-1: To check for is_kfence_address before doing search in exception tables. (Thanks for the review!) 2. Separate out patch-1, which will need a separate tree for inclusion and review from kfence/kasan folks since

[PATCH v3 03/12] book3s64/hash: Refactor kernel linear map related calls

2024-10-18 Thread Ritesh Harjani (IBM)
This just brings all linear map related handling at one place instead of having those functions scattered in hash_utils file. Makes it easy for review. No functionality changes in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 164 +--

[PATCH v3 09/12] book3s64/hash: Add kfence functionality

2024-10-18 Thread Ritesh Harjani (IBM)
Now that linear map functionality of debug_pagealloc is made generic, enable kfence to use this generic infrastructure. 1. Define kfence related linear map variables. - u8 *linear_map_kf_hash_slots; - unsigned long linear_map_kf_hash_count; - DEFINE_RAW_SPINLOCK(linear_map_kf_hash_lock);

[PATCH v3 04/12] book3s64/hash: Add hash_debug_pagealloc_add_slot() function

2024-10-18 Thread Ritesh Harjani (IBM)
This adds hash_debug_pagealloc_add_slot() function instead of open coding that in htab_bolt_mapping(). This is required since we will be separating kfence functionality to not depend upon debug_pagealloc. No functionality change in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerp

[PATCH v3 05/12] book3s64/hash: Add hash_debug_pagealloc_alloc_slots() function

2024-10-18 Thread Ritesh Harjani (IBM)
This adds hash_debug_pagealloc_alloc_slots() function instead of open coding that in htab_initialize(). This is required since we will be separating the kfence functionality to not depend upon debug_pagealloc. Now that everything required for debug_pagealloc is under an #ifdef config. Bring in line

[PATCH v3 08/12] book3s64/hash: Disable debug_pagealloc if it requires more memory

2024-10-18 Thread Ritesh Harjani (IBM)
Make size of the linear map to be allocated in RMA region to be of ppc64_rma_size / 4. If debug_pagealloc requires more memory than that then do not allocate any memory and disable debug_pagealloc. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 15 ++-

[PATCH v3 06/12] book3s64/hash: Refactor hash__kernel_map_pages() function

2024-10-18 Thread Ritesh Harjani (IBM)
This refactors hash__kernel_map_pages() function to call hash_debug_pagealloc_map_pages(). This will come useful when we will add kfence support. No functionality changes in this patch. Signed-off-by: Ritesh Harjani (IBM) --- arch/powerpc/mm/book3s64/hash_utils.c | 9 - 1 file changed,

[PATCH v3 07/12] book3s64/hash: Make kernel_map_linear_page() generic

2024-10-18 Thread Ritesh Harjani (IBM)
Currently the kernel_map_linear_page() function assumes it is working on the linear_map_hash_slots array. But since in later patches we need a separate linear map array for kfence, make kernel_map_linear_page() take a linear map array and lock in its function arguments. This is needed to separate ou

[PATCH v3 11/12] book3s64/hash: Disable kfence if not early init

2024-10-18 Thread Ritesh Harjani (IBM)
Enable kfence on book3s64 hash only when early init is enabled. This is because kfence could cause the kernel linear map to be mapped at PAGE_SIZE level instead of 16M (which I guess we don't want). Also currently there is no way to - 1. Make multiple page size entries for the SLB used for kernel

[PATCH v3 10/12] book3s64/radix: Refactoring common kfence related functions

2024-10-18 Thread Ritesh Harjani (IBM)
Both radix and hash on book3s require detecting whether kfence early init is enabled or not. Hash needs to disable kfence if early init is not enabled, because with kfence the linear map is mapped using PAGE_SIZE rather than 16M mapping. We don't support multiple page sizes for slb entry used for kernel

[PATCH][next] powerpc/ps3: replace open-coded sysfs_emit function

2024-10-18 Thread Paulo Miguel Almeida
The sysfs_emit() helper function should be used when formatting the value to be returned to user space. This patch replaces open-coded sysfs_emit() in sysfs .show() callbacks. Link: https://github.com/KSPP/linux/issues/105 Signed-off-by: Paulo Miguel Almeida --- arch/powerpc/platforms/ps3/system-bus

Re: [PATCH][next] powerpc/spufs: Replace snprintf() with the safer scnprintf() variant

2024-10-18 Thread Paulo Miguel Almeida
On Fri, Oct 18, 2024 at 10:38:43AM -0500, Segher Boessenkool wrote: > On Fri, Oct 18, 2024 at 09:28:19PM +1300, Paulo Miguel Almeida wrote: > > The C99 standard specifies that {v}snprintf() returns the length of the > > data that *would have been* written if there were enough space. > > Not includ

nx_crypto on power8 lpar

2024-10-18 Thread Anatoly Pugachev
Hello! Is it possible to somehow debug crypto-nx errors and follow-up in cryptomgr_test? System info is debian sid, running in LPAR on IBM S822 machine. # uname -a Linux redpanda 6.12.0-rc3 #119 SMP Thu Oct 17 23:47:18 MSK 2024 ppc64 GNU/Linux # lscpu Architecture: ppc64 CPU op-

[PATCH v3 01/12] powerpc: mm/fault: Fix kfence page fault reporting

2024-10-18 Thread Ritesh Harjani (IBM)
copy_from_kernel_nofault() can be called when reading /proc/kcore. /proc/kcore can have some unmapped kfence objects which, when read via copy_from_kernel_nofault(), can cause page faults. Since *_nofault() functions define their own fixup table for handling faults, use that instead of asking kf

Re: [RFC v3 1/3] fadump: Refactor and prepare fadump_cma_init for late init

2024-10-18 Thread Ritesh Harjani (IBM)
Madhavan Srinivasan writes: > > Patchset looks fine to me. > > Reviewed-by: Madhavan Srinivasan for the series. > Thanks Maddy for the reviews! I will spin PATCH v4 with these minor suggested changes (No code changes) -ritesh

[PATCH v4 1/3] powerpc/fadump: Refactor and prepare fadump_cma_init for late init

2024-10-18 Thread Ritesh Harjani (IBM)
We anyway don't use any return values from fadump_cma_init(). Since fadump_reserve_mem(), from where fadump_cma_init() gets called today, already has the required checks, this patch makes the function's return type void. Let's also handle extra cases like return if fadump_supported is false or dum

[PATCH v4 2/3] powerpc/fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem

2024-10-18 Thread Ritesh Harjani (IBM)
This patch refactors all CMA related initialization and alignment code to within fadump_cma_init() which gets called in the end. This also means that we keep [reserve_dump_area_start, boot_memory_size] page aligned during fadump_reserve_mem(). Then later in fadump_cma_init() we extract the aligned

Re: [PATCH V2 1/2] tools/perf/pmu-events/powerpc: Add support for compat events in json

2024-10-18 Thread Namhyung Kim
On Thu, 10 Oct 2024 20:21:06 +0530, Athira Rajeev wrote: > perf list picks the events supported for specific platform > from pmu-events/arch/powerpc/. Example power10 events > are in pmu-events/arch/powerpc/power10, power9 events are part > of pmu-events/arch/powerpc/power9. The decision of which

[GIT PULL] Please pull powerpc/linux.git powerpc-6.12-5 tag

2024-10-18 Thread Madhavan Srinivasan
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hi Linus, Please pull my first pull request for the powerpc tree. My gpg key is available in pgpkeys.git and it is signed by Michael Ellerman and others. https://git.kernel.org/pub/scm/docs/kernel/pgpkeys.git/commit/?id=5931604633197aa5cdbf6c4c9de0f

Re: [PATCH v2 3/6] x86/uaccess: Rearrange putuser.S

2024-10-18 Thread Kirill A . Shutemov
On Thu, Oct 17, 2024 at 02:55:22PM -0700, Josh Poimboeuf wrote: > SYM_FUNC_START(__put_user_2) > check_range size=2 > ASM_STAC > -3: movw %ax,(%_ASM_CX) > +2: movw %ax,(%_ASM_CX) > xor %ecx,%ecx > ASM_CLAC > RET > SYM_FUNC_END(__put_user_2) > EXPORT_SYMBOL(__put

Re: (subset) [PATCH v3 00/16] timers: Cleanup delay/sleep related mess

2024-10-18 Thread Anna-Maria Behnsen
Hi Mark, Mark Brown writes: > On Mon, 14 Oct 2024 10:22:17 +0200, Anna-Maria Behnsen wrote: >> a question about which sleeping function should be used in acpi_os_sleep() >> started a discussion and examination about the existing documentation and >> implementation of functions which insert a sle

[PATCH] pmu_battery: Set power supply type to BATTERY

2024-10-18 Thread Ed Robbins
If the power supply type is not set it defaults to "Unknown" and upower does not recognise it. In turn battery monitor applications do not see a battery. Setting to POWER_SUPPLY_TYPE_BATTERY fixes this. Signed-off-by: Ed Robbins --- drivers/power/supply/pmu_battery.c | 1 + 1 file changed, 1 inse

[PATCH][next] powerpc/spufs: Replace snprintf() with the safer scnprintf() variant

2024-10-18 Thread Paulo Miguel Almeida
The C99 standard specifies that {v}snprintf() returns the length of the data that *would have been* written if there were enough space. In some cases, this misunderstanding led to buffer-overruns in the past. It's generally considered better/safer to use the {v}scnprintf() variants in their place.