Hi Vishal,
Vishal Chourasia wrote:
On Fri, Jun 21, 2024 at 12:24:03AM +0530, Naveen N Rao wrote:
This is v3 of the patches posted here:
http://lkml.kernel.org/r/cover.1718008093.git.nav...@kernel.org
Since v2, I have addressed review comments from Steven and Masahiro
along with a few fixes. P
This is v4 of the series posted here:
http://lkml.kernel.org/r/cover.1718908016.git.nav...@kernel.org
This series reworks core ftrace support on powerpc to have the function
profiling sequence moved out of line. This enables us to have a single
nop at kernel function entry, virtually eliminating
On powerpc, we would like to be able to make a pass on vmlinux.o and
generate a new object file to be linked into vmlinux. Add a generic pass
in Makefile.vmlinux that architectures can use for this purpose.
Architectures need to select CONFIG_ARCH_WANTS_PRE_LINK_VMLINUX and must
provide arch//tool
Function profile sequence on powerpc includes two instructions at the
beginning of each function:
mflr r0
bl ftrace_caller
The call to ftrace_caller() gets nop'ed out during kernel boot and is
patched in when ftrace is enabled.
Given the sequence, we cannot return from ftr
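For illustration, here is a minimal userspace sketch of what the boot-time
nop-out amounts to at the instruction level. The encoding follows the PowerPC
I-form bl (opcode 18, 26-bit signed byte offset, LK=1); the helper below is
illustrative only and not the kernel's patching API.

#include <stdint.h>
#include <stdio.h>

#define PPC_NOP 0x60000000u			/* ori r0,r0,0 */

/* I-form branch-and-link: opcode 18, word-aligned signed offset, LK=1 */
static uint32_t ppc_bl(int32_t offset)
{
	return (18u << 26) | ((uint32_t)offset & 0x03fffffcu) | 1u;
}

int main(void)
{
	uint32_t site = ppc_bl(0x100);		/* bl ftrace_caller */
	printf("call-site insn: 0x%08x\n", site);
	site = PPC_NOP;				/* nop'ed out at boot */
	printf("after nop-out : 0x%08x\n", site);
	return 0;
}

Enabling ftrace goes the other way: the nop is rewritten back into the bl.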
We are restricted to a .text size of ~32MB when using the out-of-line
function profile sequence. Allow this to be extended up to the previous
limit of ~64MB by reserving space in the middle of .text.
A new config option CONFIG_PPC_FTRACE_OUT_OF_LINE_NUM_RESERVE is
introduced to specify the number of f
Implement support for DYNAMIC_FTRACE_WITH_CALL_OPS similar to the
arm64 implementation.
This works by patching-in a pointer to an associated ftrace_ops
structure before each traceable function. If multiple ftrace_ops are
associated with a call site, then a special ftrace_list_ops is used to
enable
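As a rough, self-contained sketch of the CALL_OPS idea, assuming purely for
illustration that a pointer-sized literal sits immediately before the function
entry: the common trampoline loads that literal to find the ftrace_ops to
invoke. The structure and names below are made up, not the kernel's layout.

#include <stdint.h>
#include <stdio.h>

struct ops_like { const char *name; };

static struct ops_like my_ops = { "my_ops" };

/* simulated text layout: [ops pointer literal][function entry ...] */
struct call_site {
	struct ops_like *ops;		/* literal patched before entry */
	uint32_t insns[4];		/* function instructions follow */
};

static struct ops_like *ops_for(void *func_entry)
{
	/* the trampoline knows the literal sits at a fixed negative offset */
	return *(struct ops_like **)((char *)func_entry - sizeof(void *));
}

int main(void)
{
	struct call_site site = { .ops = &my_ops };
	printf("ops for this call site: %s\n", ops_for(site.insns)->name);
	return 0;
}

With several ftrace_ops attached to one site, the literal would instead point
at the catch-all ftrace_list_ops, as described above.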
Add support for DYNAMIC_FTRACE_WITH_DIRECT_CALLS similar to the arm64
implementation.
ftrace direct calls allow custom trampolines to be called into directly
from function ftrace call sites, bypassing the ftrace trampoline
completely. This functionality is currently utilized by BPF trampolines
to
Add powerpc 32-bit and 64-bit samples for ftrace direct. This serves to
show the sample instruction sequence to be used by ftrace direct calls
to adhere to the ftrace ABI.
On 64-bit powerpc, TOC setup requires some additional work.
Signed-off-by: Naveen N Rao
---
arch/powerpc/Kconfig
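Purely as a conceptual sketch of the direct-call idea (the names below are
illustrative, not the kernel API or the sample code): the patched branch at
the call site targets a custom trampoline directly instead of bouncing
through the generic ftrace trampoline.

#include <stdio.h>

typedef void (*entry_hook_t)(unsigned long ip, unsigned long parent_ip);

static void generic_trampoline(unsigned long ip, unsigned long parent_ip)
{
	printf("generic ftrace trampoline: ip=%#lx parent=%#lx\n", ip, parent_ip);
}

static void my_direct_trampoline(unsigned long ip, unsigned long parent_ip)
{
	printf("direct trampoline (e.g. BPF): ip=%#lx parent=%#lx\n", ip, parent_ip);
}

/* stand-in for the patched branch at a traced function's entry */
static entry_hook_t call_site_target = generic_trampoline;

int main(void)
{
	call_site_target(0x1000, 0x2000);		/* regular ftrace path   */
	call_site_target = my_direct_trampoline;	/* "text poke" the branch */
	call_site_target(0x1000, 0x2000);		/* trampoline hop bypassed */
	return 0;
}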
Add support for bpf_arch_text_poke() and arch_prepare_bpf_trampoline()
for 64-bit powerpc. While the code is generic, BPF trampolines are only
enabled on 64-bit powerpc. 32-bit powerpc will need testing and some
updates.
BPF trampolines adhere to the existing ftrace ABI, utilizing a
two-instruction
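A hedged usage sketch of the generic entry point the patch implements:
bpf_arch_text_poke() and BPF_MOD_CALL are the existing generic BPF interface,
while the wrapper and its arguments are made up for illustration.

#include <linux/bpf.h>

/*
 * Illustrative wrapper: retarget the call at a traced function's entry from
 * one trampoline to another (either address may be NULL to attach/detach).
 */
static int retarget_call_site(void *func_ip, void *old_tramp, void *new_tramp)
{
	return bpf_arch_text_poke(func_ip, BPF_MOD_CALL, old_tramp, new_tramp);
}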
Gcc v5.x emits a 3-instruction sequence for -mprofile-kernel:
mflr r0
std r0, 16(r1)
bl _mcount
Gcc v6.x moved to a simpler 2-instruction sequence by removing the 'std'
instruction. The store saved the return address in the LR save area in
the caller's stack frame
On 32-bit powerpc, gcc generates a three-instruction sequence for
function profiling:
mflr r0
stw r0, 4(r1)
bl _mcount
On kernel boot, the call to _mcount() is nop-ed out, to be patched back
in when ftrace is actually enabled. The 'stw' instruction therefore is
Minor refactor for converting #ifdef to IS_ENABLED().
Reviewed-by: Nicholas Piggin
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/module_64.c | 10 ++
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
ind
The pointer to struct module is only relevant for ftrace records belonging
to kernel modules. Having this field in dyn_arch_ftrace wastes memory
for all ftrace records belonging to the kernel. Remove it in
favour of looking up the module from the ftrace record address, similar
to other architectu
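A rough sketch of that approach, as an assumption rather than the actual
patch: derive the owning module from the record's address at lookup time,
using the existing __module_text_address() helper, instead of caching a
per-record pointer.

#include <linux/ftrace.h>
#include <linux/module.h>

static struct module *ftrace_lookup_module(struct dyn_ftrace *rec)
{
	struct module *mod;

	preempt_disable();
	mod = __module_text_address(rec->ip);	/* NULL for kernel text */
	preempt_enable();

	return mod;
}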
To simplify upcoming changes to ftrace, add a check to skip actual
instruction patching if the old and new instructions are the same. We
still validate that the instruction is what we expect, but don't
actually patch the same instruction again.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel
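A simplified sketch of that check using raw 32-bit instruction words; the
real code would go through the powerpc ppc_inst()/patch_instruction()
helpers, so treat the shape below as illustrative only.

#include <errno.h>
#include <stdint.h>

static int patch_if_changed(uint32_t *addr, uint32_t expected, uint32_t new_insn)
{
	if (*addr != expected)
		return -EINVAL;		/* still validate what is there      */
	if (expected == new_insn)
		return 0;		/* old == new: skip the actual patch */
	*addr = new_insn;		/* otherwise really rewrite the text */
	return 0;
}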
Move the ftrace stub used to cover inittext before _einittext so that it
is within kernel text, as seen through core_kernel_text(). This is
required for a subsequent change to ftrace.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/vmlinux.lds.S | 3 +--
1 file changed, 1 insertion(+), 2 del
Commit 61688a82e047 ("powerpc/bpf: enable kfunc call") enhanced
bpf_jit_emit_func_call_hlp() to handle calls out to module region, where
bpf progs are generated. The only difference now between
bpf_jit_emit_func_call_hlp() and bpf_jit_emit_func_call_rel() is in
handling of the initial pass where ta
The function tracer on powerpc can only work with a vmlinux whose .text
size is up to ~64MB, since the powerpc branch instruction has a limited
relative branch range of 32MB. Today, this is only detected at kernel
boot when ftrace is initialized. Add a post-link script to check the size of
.text so that we
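The arithmetic behind both numbers, as a tiny standalone check: an
unconditional PowerPC branch carries a 24-bit LI field shifted left by two,
i.e. a signed 26-bit byte offset, so it reaches +/-32MB; a trampoline placed
near the middle of .text can therefore cover roughly 64MB in total.

#include <stdio.h>

int main(void)
{
	long reach = 1L << 25;			/* 2^25 bytes = 32MB each way */

	printf("branch reach  : +/- %ld MB\n", reach >> 20);
	printf("covered .text : ~%ld MB with a mid-.text trampoline\n",
	       (2 * reach) >> 20);
	return 0;
}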
So far, we have relied on the fact that gcc supports both
-mprofile-kernel, as well as -fpatchable-function-entry, and clang
supports neither. Our Makefile only checks for CONFIG_MPROFILE_KERNEL to
decide which files to build. Clang has a feature request out [*] to
implement -fpatchable-function-en
Rather than hard-coding the offset into a function to be used to
determine if a kprobe is at function entry, use ftrace_location() to
determine the ftrace location within the function and treat all
instructions up to that offset as function entry.
For functions that cannot be traced, we fal
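A hedged sketch of the idea (the helper name is made up, while
kallsyms_lookup_size_offset() and ftrace_location() are existing interfaces):
everything from the symbol start up to the function's ftrace location is
treated as function entry, and untraceable functions fall back to offset
zero only.

#include <linux/ftrace.h>
#include <linux/kallsyms.h>

static bool addr_is_function_entry(unsigned long addr)
{
	unsigned long offset = 0, ftrace_ip;

	if (!kallsyms_lookup_size_offset(addr, NULL, &offset))
		return false;

	ftrace_ip = ftrace_location(addr);
	if (!ftrace_ip)
		return !offset;		/* untraceable: only offset 0 counts */

	/* function entry spans the symbol start through the ftrace site */
	return addr <= ftrace_ip;
}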
I have switched to using my @kernel.org id for my contributions. Update
MAINTAINERS and mailmap to reflect the same.
Cc: Naveen N. Rao
Signed-off-by: Naveen N Rao
---
.mailmap    | 2 ++
MAINTAINERS | 6 +++---
2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/.mailmap b/.mailmap
i
Hari Bathini has been updating and maintaining the powerpc BPF JIT for
a while now. Christophe Leroy has been doing the same for 32-bit
powerpc. Add them as maintainers for the powerpc BPF JIT.
I am no longer actively looking into the powerpc BPF JIT. Change my role
to that of a reviewer so that
In read_handle(), of_get_address() may return NULL, which is later
dereferenced. Fix this by adding a NULL check.
Using our customized static analysis tool, we extracted vulnerability
features [1] and matched similar vulnerability features in this function.
[1] https://git.kernel.org/pub/scm/linux/kern
On Sun, Jul 14, 2024 at 08:14:04PM +0800, Ma Ke wrote:
> In read_handle(), of_get_address() may return NULL which is later
> dereferenced. Fix this by adding NULL check.
>
> Based on our customized static analysis tool, extract vulnerability
> features[1], then match similar vulnerability features
In read_handle(), of_get_address() may return NULL if getting the address
and size of the node fails. When of_read_number() then uses prop to handle
conversions between different byte orders, this could lead to a NULL pointer
dereference. Add a NULL check to fix the potential issue.
Found by static analysis.
Cc:
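A sketch of the described fix under an assumed shape for read_handle() (the
body below is illustrative, not the driver's actual code); of_get_address(),
of_read_number() and of_n_addr_cells() are the existing OF helpers.

#include <linux/errno.h>
#include <linux/of.h>
#include <linux/of_address.h>

static int read_handle(struct device_node *dn, u64 *handle)
{
	const __be32 *prop;
	u64 size;

	prop = of_get_address(dn, 0, &size, NULL);
	if (!prop)			/* previously dereferenced unconditionally */
		return -ENODEV;

	*handle = of_read_number(prop, of_n_addr_cells(dn));
	return 0;
}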
Hello:
This series was applied to netdev/net-next.git (main)
by Jakub Kicinski :
On Sun, 14 Jul 2024 01:53:31 +0300 you wrote:
> Breno's previous attempt at enabling COMPILE_TEST for the fsl_qbman
> driver (now included here as patch 5/5) triggered compilation warnings
> for large CONFIG_NR_CPUS
On Mon, Jul 15, 2024 at 10:54:42AM +0800, Ma Ke wrote:
> In read_handle(), of_get_address() may return NULL if getting address and
> size of the node failed. When of_read_number() uses prop to handle
> conversions between different byte orders, it could lead to a null pointer
> dereference. Add NUL
Hello,
Please review this patch and let me know if any changes are needed.
Thanks,
Gautam
Ma Ke writes:
> In read_handle(), of_get_address() may return NULL if getting address and
> size of the node failed. When of_read_number() uses prop to handle
> conversions between different byte orders, it could lead to a null pointer
> dereference. Add NULL check to fix potential issue.
>
> Foun
On 14/07/2024 at 10:34, Naveen N Rao wrote:
Hari Bathini has been updating and maintaining the powerpc BPF JIT for
a while now. Christophe Leroy has been doing the same for 32-bit
powerpc. Add them as maintainers for the powerpc BPF JIT.
I am no longer actively looking into the powerpc BP
On Sun Jul 14, 2024 at 6:27 PM AEST, Naveen N Rao wrote:
> Move the ftrace stub used to cover inittext before _einittext so that it
> is within kernel text, as seen through core_kernel_text(). This is
> required for a subsequent change to ftrace.
Hmm, is there a reason it was outside einittext any