On Fri, Oct 18, 2024 at 10:06:33AM +0200, Anna-Maria Behnsen wrote:
> Would it be ok for you, if the patch is routed through tip tree? kernel
> test robot triggers a warning for htmldoc that there is a reference to
> the no longer existing file 'timer-howto.rst':
> https://lore.kernel.org/r/202
The pull request you sent on Fri, 18 Oct 2024 13:10:13 +0530:
> https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git
> tags/powerpc-6.12-5
has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/ef444a0aba6d128e5ecd1c8df0f989c356f76b5d
Thank you!
--
Deet-doot-d
From: Naveen N Rao
Commit 61688a82e047 ("powerpc/bpf: enable kfunc call") enhanced
bpf_jit_emit_func_call_hlp() to handle calls out to module region, where
bpf progs are generated. The only difference now between
bpf_jit_emit_func_call_hlp() and bpf_jit_emit_func_call_rel() is in
handling of the
From: Naveen N Rao
Minor refactor for converting #ifdef to IS_ENABLED().
Reviewed-by: Nicholas Piggin
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/module_64.c | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/k
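A generic illustration of the conversion (hypothetical helper and config symbol, not the actual module_64.c hunk): IS_ENABLED() keeps both branches visible to the compiler, so the dead branch is still parsed and type-checked before being optimized away.

#include <linux/kernel.h>

/* Before: the preprocessor selects the code. */
static unsigned long stub_align(void)
{
#ifdef CONFIG_PPC64_ELF_ABI_V2
	return 8;
#else
	return 4;
#endif
}

/* After: an ordinary C conditional on the config symbol. */
static unsigned long stub_align_is_enabled(void)
{
	if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2))
		return 8;
	return 4;
}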
From: Naveen N Rao
Move the ftrace stub used to cover inittext before _einittext so that it
is within kernel text, as seen through core_kernel_text(). This is
required for a subsequent change to ftrace.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/vmlinux.lds.S | 3 +--
1 file changed,
From: Naveen N Rao
Rather than hard-coding the offset into a function that is used to
determine if a kprobe is at function entry, use ftrace_location() to
determine the ftrace location within the function and treat all
instructions up to that offset as function entry.
For functions that can
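A hedged sketch of the idea (helper name and exact semantics are assumptions, not the patch itself): treat any probe offset at or before the ftrace call site within the function as function entry.

#include <linux/ftrace.h>

static bool probe_within_function_entry(unsigned long func_addr,
					unsigned long probe_addr)
{
	unsigned long ftrace_ip = ftrace_location(func_addr);

	/* No ftrace site in this function: only offset 0 counts as entry. */
	if (!ftrace_ip)
		return probe_addr == func_addr;

	return probe_addr <= ftrace_ip;
}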
This is v6 of the series posted here:
https://lore.kernel.org/all/20240915205648.830121-1-hbath...@linux.ibm.com/
This series reworks core ftrace support on powerpc to have the function
profiling sequence moved out of line. This enables us to have a single
nop at kernel function entry virtually el
From: Naveen N Rao
On 32-bit powerpc, gcc generates a three instruction sequence for
function profiling:
mflr r0
stw r0, 4(r1)
bl _mcount
On kernel boot, the call to _mcount() is nop-ed out, to be patched back
in when ftrace is actually enabled. The 'stw' inst
From: Naveen N Rao
Gcc v5.x emits a 3-instruction sequence for -mprofile-kernel:
mflr r0
std r0, 16(r1)
bl _mcount
Gcc v6.x moved to a simpler 2-instruction sequence by removing the 'std'
instruction. The store saved the return address in the LR save area in
t
From: Naveen N Rao
So far, we have relied on the fact that gcc supports both
-mprofile-kernel, as well as -fpatchable-function-entry, and clang
supports neither. Our Makefile only checks for CONFIG_MPROFILE_KERNEL to
decide which files to build. Clang has a feature request out [*] to
implement -f
From: Naveen N Rao
Implement support for DYNAMIC_FTRACE_WITH_CALL_OPS similar to the
arm64 implementation.
This works by patching-in a pointer to an associated ftrace_ops
structure before each traceable function. If multiple ftrace_ops are
associated with a call site, then a special ftrace_list_
Add a hash_supports_debug_pagealloc() helper to detect whether
debug_pagealloc can be supported on hash or not. This checks both
whether the debug_pagealloc config is enabled and whether the linear
map fits within the rma_size/4 region size.
This can then be used early during htab_init_page_sizes() to de
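A hedged sketch of that helper (the exact implementation in hash_utils.c may differ; it assumes one u8 slot per page of the linear map, as used elsewhere in this series):

#include <linux/mm.h>
#include <linux/memblock.h>
#include <asm/mmu.h>

static bool hash_supports_debug_pagealloc(void)
{
	/* One u8 slot per page of the linear map ... */
	unsigned long slots_size = memblock_end_of_DRAM() >> PAGE_SHIFT;

	/* ... which must fit within a quarter of the RMA region. */
	if (!debug_pagealloc_enabled() || slots_size > ppc64_rma_size / 4)
		return false;
	return true;
}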
From: Naveen N Rao
Function profile sequence on powerpc includes two instructions at the
beginning of each function:
mflr r0
bl ftrace_caller
The call to ftrace_caller() gets nop'ed out during kernel boot and is
patched in when ftrace is enabled.
Given the sequence, we c
From: Naveen N Rao
Add powerpc 32-bit and 64-bit samples for ftrace direct. This serves to
show the sample instruction sequence to be used by ftrace direct calls
to adhere to the ftrace ABI.
On 64-bit powerpc, TOC setup requires some additional work.
Signed-off-by: Naveen N Rao
---
arch/power
From: Naveen N Rao
The pointer to struct module is only relevant for ftrace records belonging
to kernel modules. Having this field in dyn_arch_ftrace wastes memory
for all ftrace records belonging to the kernel. Remove it in
favour of looking up the module from the ftrace record address, simila
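A hedged sketch of the lookup described above (not the actual patch): resolve the owning module from the record's address when it is needed, instead of caching a pointer in dyn_arch_ftrace.

#include <linux/ftrace.h>
#include <linux/module.h>

static struct module *ftrace_lookup_module(struct dyn_ftrace *rec)
{
	struct module *mod;

	preempt_disable();
	mod = __module_text_address(rec->ip);
	preempt_enable();

	return mod;
}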
From: Naveen N Rao
On powerpc, we would like to be able to make a pass on vmlinux.o and
generate a new object file to be linked into vmlinux. Add a generic pass
in Makefile.vmlinux that architectures can use for this purpose.
Architectures need to select CONFIG_ARCH_WANTS_PRE_LINK_VMLINUX and mu
From: Naveen N Rao
Add support for DYNAMIC_FTRACE_WITH_DIRECT_CALLS similar to the arm64
implementation.
ftrace direct calls allow custom trampolines to be called into directly
from function ftrace call sites, bypassing the ftrace trampoline
completely. This functionality is currently utilized b
From: Naveen N Rao
We are restricted to a .text size of ~32MB when using the out-of-line
function profile sequence. Allow this to be extended up to the previous
limit of ~64MB by reserving space in the middle of .text.
A new config option CONFIG_PPC_FTRACE_OUT_OF_LINE_NUM_RESERVE is
introduced to sp
From: Naveen N Rao
Function tracer on powerpc can only work with vmlinux having a .text
size of up to ~64MB due to powerpc branch instruction having a limited
relative branch range of 32MB. Today, this is only detected on kernel
boot when ftrace is init'ed. Add a post-link script to check the siz
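For reference, the ~32MB limit comes from the 'bl' instruction encoding a 24-bit LI field shifted left by 2, i.e. a signed 26-bit byte displacement. A check equivalent to the kernel's is_offset_in_branch_range() (sketch for illustration):

#include <linux/types.h>

static bool offset_in_branch_range(long offset)
{
	/* +/- 32 MB reach, and the target must be word aligned. */
	return offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3);
}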
From: Naveen N Rao
Add support for bpf_arch_text_poke() and arch_prepare_bpf_trampoline()
for 64-bit powerpc. While the code is generic, BPF trampolines are only
enabled on 64-bit powerpc. 32-bit powerpc will need testing and some
updates.
BPF Trampolines adhere to the existing ftrace ABI utiliz
During early init, CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE,
since pageblock_order is still zero and only gets initialized
later during initmem_init(), e.g.
setup_arch() -> initmem_init() -> sparse_init() -> set_pageblock_order()
One use case where this causes an issue is:
early_setup() -> early_in
From: Naveen N Rao
To simplify upcoming changes to ftrace, add a check to skip actual
instruction patching if the old and new instructions are the same. We
still validate that the instruction is what we expect, but don't
actually patch the same instruction again.
Signed-off-by: Naveen N Rao
---
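A hedged sketch of the optimization (helper name is an assumption): verify the existing instruction, but skip the actual patch_instruction() call when it already matches the new one.

#include <linux/errno.h>
#include <asm/code-patching.h>
#include <asm/inst.h>

static int patch_if_changed(u32 *addr, ppc_inst_t old, ppc_inst_t new)
{
	ppc_inst_t cur = ppc_inst_read(addr);

	/* Validate that the instruction is what we expect. */
	if (!ppc_inst_equal(cur, old) && !ppc_inst_equal(cur, new))
		return -EINVAL;

	/* Already patched: nothing to do. */
	if (ppc_inst_equal(cur, new))
		return 0;

	return patch_instruction(addr, new);
}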
On 18/10/2024 at 19:29, Ritesh Harjani (IBM) wrote:
copy_from_kernel_nofault() can be called when doing read of /proc/kcore.
/proc/kcore can have some unmapped kfence objects which when read via
copy_from_kernel_nofault() can cause page faults. Since *_nofault()
functions define their own fi
From: Nirjhar Roy
Faults from copy_from_kernel_nofault() need to be handled by the fixup
table and should not be handled by kfence. Otherwise, while reading
/proc/kcore which uses copy_from_kernel_nofault(), kfence can generate
false negatives. This can happen when /proc/kcore ends up reading an
unma
Hi!
On Sat, Oct 19, 2024 at 12:50:43PM +1300, Paulo Miguel Almeida wrote:
> On Fri, Oct 18, 2024 at 10:38:43AM -0500, Segher Boessenkool wrote:
> > On Fri, Oct 18, 2024 at 09:28:19PM +1300, Paulo Miguel Almeida wrote:
> > > The C99 standard specifies that {v}snprintf() returns the length of the
>
On Fri, 18 Oct 2024 at 19:46, Ritesh Harjani (IBM) wrote:
>
> From: Nirjhar Roy
>
> Faults from copy_from_kernel_nofault() needs to be handled by fixup
> table and should not be handled by kfence. Otherwise while reading
> /proc/kcore which uses copy_from_kernel_nofault(), kfence can generate
> f
On Thu, 17 Oct 2024 16:15:07 +0900, Chancel Liu wrote:
> On i.MX8MM the register of volume control has positive and negative
> values. It is different from other platforms like i.MX8MP and i.MX93
> which only have positive values. Add a volume_sx flag to use SX_TLV
> volume control for this kind of
On 10/14/24 4:54 PM, Ritesh Harjani (IBM) wrote:
> Madhavan Srinivasan writes:
>
>> On 10/11/24 8:30 PM, Ritesh Harjani (IBM) wrote:
>>> We anyway don't use any return values from fadump_cma_init(). Since
>>> fadump_reserve_mem() from where fadump_cma_init() gets called today,
>>> already has
On Fri, Oct 18, 2024 at 11:51:06AM +0300, Kirill A . Shutemov wrote:
> On Thu, Oct 17, 2024 at 02:55:22PM -0700, Josh Poimboeuf wrote:
> > SYM_FUNC_START(__put_user_2)
> > check_range size=2
> > ASM_STAC
> > -3: movw %ax,(%_ASM_CX)
> > +2: movw %ax,(%_ASM_CX)
> > xor %ecx,%ecx
> >
On Fri, Oct 18, 2024 at 09:28:19PM +1300, Paulo Miguel Almeida wrote:
> The C99 standard specifies that {v}snprintf() returns the length of the
> data that *would have been* written if there were enough space.
Not including the trailing zero byte, and it can also return negative if
there was an en
Kfence on book3s Hash on pseries is broken anyway. It fails to boot
due to the RMA size limitation. That is because kfence with Hash uses
the debug_pagealloc infrastructure, and debug_pagealloc allocates a linear
map for the entire DRAM size instead of just the kfence-relevant objects.
This means for 16TB of DRAM it w
v2 -> v3:
1. Addressed review comments from Christophe in patch-1: To check for
is_kfence_address before doing search in exception tables.
(Thanks for the review!)
2. Separate out patch-1, which will need a separate tree for inclusion and
review from kfence/kasan folks since
This just brings all linear-map-related handling to one place instead of
having those functions scattered in the hash_utils file.
This makes review easier.
No functionality changes in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 164 +--
Now that linear map functionality of debug_pagealloc is made generic,
enable kfence to use this generic infrastructure.
1. Define kfence related linear map variables.
- u8 *linear_map_kf_hash_slots;
- unsigned long linear_map_kf_hash_count;
- DEFINE_RAW_SPINLOCK(linear_map_kf_hash_lock);
This adds a hash_debug_pagealloc_add_slot() function instead of open
coding that in htab_bolt_mapping(). This is required since we will be
separating the kfence functionality so that it does not depend upon debug_pagealloc.
No functionality change in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerp
This adds a hash_debug_pagealloc_alloc_slots() function instead of open
coding that in htab_initialize(). This is required since we will be
separating the kfence functionality so that it does not depend upon debug_pagealloc.
Now that everything required for debug_pagealloc is under an #ifdef
config, bring in line
Make the size of the linear map to be allocated in the RMA region be
ppc64_rma_size / 4. If debug_pagealloc requires more memory than that,
then do not allocate any memory and disable debug_pagealloc.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 15 ++-
This refactors the hash__kernel_map_pages() function to call
hash_debug_pagealloc_map_pages(). This will come in useful when we add
kfence support.
No functionality changes in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 9 -
1 file changed,
Currently the kernel_map_linear_page() function assumes it is working on
the linear_map_hash_slots array. But since in later patches we need a
separate linear map array for kfence, make
kernel_map_linear_page() take a linear map array and lock as its
function arguments.
This is needed to separate ou
Enable kfence on book3s64 hash only when early init is enabled.
This is because kfence could cause the kernel linear map to be mapped
at PAGE_SIZE level instead of 16M (which I guess we don't want).
Also currently there is no way to -
1. Make multiple page size entries for the SLB used for kernel
Both radix and hash on book3s need to detect whether kfence
early init is enabled or not. Hash needs to disable kfence
if early init is not enabled, because with kfence the linear map is
mapped using PAGE_SIZE rather than a 16M mapping.
We don't support multiple page sizes for the SLB entry used for kernel
The sysfs_emit() helper function should be used when formatting the value
to be returned to user space.
This patch replaces open-coded formatting in sysfs .show() callbacks with sysfs_emit().
Link: https://github.com/KSPP/linux/issues/105
Signed-off-by: Paulo Miguel Almeida
---
arch/powerpc/platforms/ps3/system-bus
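An illustrative .show() callback (hypothetical attribute, not the actual ps3 system-bus code): sysfs_emit() bounds the output to PAGE_SIZE, unlike an open-coded sprintf().

#include <linux/device.h>
#include <linux/sysfs.h>

static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
			     char *buf)
{
	/* before: return sprintf(buf, "%s\n", dev_name(dev)); */
	return sysfs_emit(buf, "%s\n", dev_name(dev));
}
static DEVICE_ATTR_RO(modalias);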
On Fri, Oct 18, 2024 at 10:38:43AM -0500, Segher Boessenkool wrote:
> On Fri, Oct 18, 2024 at 09:28:19PM +1300, Paulo Miguel Almeida wrote:
> > The C99 standard specifies that {v}snprintf() returns the length of the
> > data that *would have been* written if there were enough space.
>
> Not includ
Hello!
Is it possible to somehow debug crypto-nx errors and follow up in
cryptomgr_test?
System info is Debian sid, running in an LPAR on an IBM S822 machine.
# uname -a
Linux redpanda 6.12.0-rc3 #119 SMP Thu Oct 17 23:47:18 MSK 2024 ppc64 GNU/Linux
# lscpu
Architecture: ppc64
CPU op-
copy_from_kernel_nofault() can be called when doing read of /proc/kcore.
/proc/kcore can have some unmapped kfence objects which when read via
copy_from_kernel_nofault() can cause page faults. Since *_nofault()
functions define their own fixup table for handling faults, use that
instead of asking kf
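A hedged sketch of the idea (not the actual hash fault-path change): if a fault lands on a kfence address but the faulting instruction has an exception-table fixup, as copy_from_kernel_nofault() accesses do, defer to the fixup instead of reporting through kfence.

#include <linux/kfence.h>
#include <linux/extable.h>
#include <linux/ptrace.h>

static bool want_kfence_report(struct pt_regs *regs, unsigned long addr,
			       bool is_write)
{
	/* *_nofault() access: let the fixup entry handle it. */
	if (is_kfence_address((void *)addr) && search_exception_tables(regs->nip))
		return false;

	return kfence_handle_page_fault(addr, is_write, regs);
}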
Madhavan Srinivasan writes:
>
> Patchset looks fine to me.
>
> Reviewed-by: Madhavan Srinivasan for the series.
>
Thanks Maddy for the reviews!
I will spin PATCH v4 with these minor suggested changes (No code changes)
-ritesh
We don't use any return values from fadump_cma_init() anyway, since
fadump_reserve_mem(), from where fadump_cma_init() gets called today,
already has the required checks.
This patch makes this function's return type void. Let's also handle
extra cases, like returning if fadump_supported is false or dum
This patch refactors all CMA-related initialization and alignment code
into fadump_cma_init(), which gets called at the end. This also means
that we keep [reserve_dump_area_start, boot_memory_size] page aligned
during fadump_reserve_mem(). Then later, in fadump_cma_init(), we extract the
aligned
On Thu, 10 Oct 2024 20:21:06 +0530, Athira Rajeev wrote:
> perf list picks the events supported for specific platform
> from pmu-events/arch/powerpc/. Example power10 events
> are in pmu-events/arch/powerpc/power10, power9 events are part
> of pmu-events/arch/powerpc/power9. The decision of which
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Hi Linus,
Please pull my first pull request for the powerpc tree.
My gpg key is available in pgpkeys.git and it is signed by Michael Ellerman and
others.
https://git.kernel.org/pub/scm/docs/kernel/pgpkeys.git/commit/?id=5931604633197aa5cdbf6c4c9de0f
On Thu, Oct 17, 2024 at 02:55:22PM -0700, Josh Poimboeuf wrote:
> SYM_FUNC_START(__put_user_2)
> check_range size=2
> ASM_STAC
> -3: movw %ax,(%_ASM_CX)
> +2: movw %ax,(%_ASM_CX)
> xor %ecx,%ecx
> ASM_CLAC
> RET
> SYM_FUNC_END(__put_user_2)
> EXPORT_SYMBOL(__put
Hi Mark,
Mark Brown writes:
> On Mon, 14 Oct 2024 10:22:17 +0200, Anna-Maria Behnsen wrote:
>> a question about which sleeping function should be used in acpi_os_sleep()
>> started a discussion and examination about the existing documentation and
>> implementation of functions which insert a sle
If the power supply type is not set, it defaults to "Unknown" and upower
does not recognise it. In turn, battery monitor applications do not see a
battery. Setting it to POWER_SUPPLY_TYPE_BATTERY fixes this.
Signed-off-by: Ed Robbins
---
drivers/power/supply/pmu_battery.c | 1 +
1 file changed, 1 inse
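A sketch of the fix described above (the surrounding pmu_battery.c context and descriptor name are assumptions): declare the supply as a battery instead of leaving .type unset, which defaults to "Unknown".

#include <linux/power_supply.h>

static const struct power_supply_desc pmu_bat_desc = {
	.name = "pmu-battery",
	.type = POWER_SUPPLY_TYPE_BATTERY,	/* the one-line fix */
	/* .properties, .get_property, etc. as before */
};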
The C99 standard specifies that {v}snprintf() returns the length of the
data that *would have been* written if there were enough space. In some
cases, this misunderstanding led to buffer overruns in the past. It's
generally considered better/safer to use the {v}scnprintf() variants in
their place.
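A minimal illustration of the difference (hypothetical buffer and format string): snprintf() reports what would have been written, so using its return value as an offset can walk past the buffer, while scnprintf() reports what actually fit, excluding the trailing NUL.

#include <linux/kernel.h>

static void snprintf_vs_scnprintf(void)
{
	char buf[8];
	int n;

	n = snprintf(buf, sizeof(buf), "hello, world!");	/* n == 13 */
	n = scnprintf(buf, sizeof(buf), "hello, world!");	/* n == 7  */
	(void)n;
}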