-entry, and is unlikely to support
-mprofile-kernel.
Update our Makefile checks so that we pick up the correct files to build
once clang picks up support for -fpatchable-function-entry.
[*] https://github.com/llvm/llvm-project/issues/57031
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace
that we can detect this at build time, and break the build if
necessary.
We add a dependency on !COMPILE_TEST for CONFIG_HAVE_FUNCTION_TRACER so
that allyesconfig and other test builds can continue to work without
enabling ftrace.
Signed-off-by: Naveen N Rao
---
arch/powerpc/Kconfig
f program since they all
work with the kernel TOC. We only need to do it if we have to call out
to a module function. So, guard TOC load/restore with appropriate
conditions.
Signed-off-by: Naveen N Rao
---
arch/powerpc/net/bpf_jit_comp64.c | 61 +--
1 file changed
Move the ftrace stub used to cover inittext before _einittext so that it
is within kernel text, as seen through core_kernel_text(). This is
required for a subsequent change to ftrace.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/vmlinux.lds.S | 3 +--
1 file changed, 1 insertion(+), 2
To simplify upcoming changes to ftrace, add a check to skip actual
instruction patching if the old and new instructions are the same. We
still validate that the instruction is what we expect, but don't
actually patch the same instruction again.
Signed-off-by: Naveen N Rao
---
arch/po
architectures.
Reviewed-by: Nicholas Piggin
Signed-off-by: Naveen N Rao
---
arch/powerpc/include/asm/ftrace.h | 1 -
arch/powerpc/kernel/trace/ftrace.c | 49 +
arch/powerpc/kernel/trace/ftrace_64_pg.c | 69 ++--
3 files changed, 56 insertions(+), 63
Minor refactor for converting #ifdef to IS_ENABLED().
Reviewed-by: Nicholas Piggin
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/module_64.c | 10 ++
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
erefore is
not necessary unless ftrace is enabled. Nop it out during ftrace init.
When ftrace is enabled, we want the 'stw' so that stack unwinding works
properly. Perform the same within the ftrace handler, similar to 64-bit
powerpc.
Reviewed-by: Nicholas Piggin
Signed-off-by: Nav
nel boot.
With this change, we now use the same 2-instruction profiling sequence
with both -mprofile-kernel, as well as -fpatchable-function-entry on
64-bit powerpc.
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/trace/ftrace.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff
fall back to using a fixed
offset of 8 (two instructions) to categorize a probe as being at
function entry for 64-bit elfv2, unless we are using pcrel.
Acked-by: Masami Hiramatsu (Google)
Signed-off-by: Naveen N Rao
---
arch/powerpc/kernel/kprobes.c | 18 --
1 file changed, 8
that other cpus
execute isync (or some CSI) so that they don't go back into the
trampoline again.
Signed-off-by: Naveen N Rao
---
arch/powerpc/include/asm/ppc-opcode.h | 14 +
arch/powerpc/net/bpf_jit.h | 12 +
arch/powerpc/net/bpf_jit_comp.c | 842 +
Add powerpc 32-bit and 64-bit samples for ftrace direct. This serves to
show the sample instruction sequence to be used by ftrace direct calls
to adhere to the ftrace ABI.
On 64-bit powerpc, TOC setup requires some additional work.
Signed-off-by: Naveen N Rao
---
arch/powerpc/Kconfig
pr3 that can then be tested on the
return path from the ftrace trampoline to branch into the direct caller.
Signed-off-by: Naveen N Rao
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/ftrace.h | 15 +++
arch/powerpc/kernel/asm-offsets.c | 3 +
a
entry, we load from
this location and call into ftrace_ops->func().
For 64-bit powerpc, we ensure that the out-of-line stub area is
doubleword aligned so that ftrace_ops address can be updated atomically.
Signed-off-by: Naveen N Rao
---
arch/powerpc/Kconfig | 1 +
arch
pport vmlinux .text
size up to ~64MB.
Signed-off-by: Naveen N Rao
---
arch/powerpc/Kconfig | 12
arch/powerpc/include/asm/ftrace.h | 6 --
arch/powerpc/kernel/trace/ftrace.c | 21 +
arch/powerpc/kernel/trace/ftrace_en
r2,r2,-16028
b ftrace_ool_stub_text_end+0x11b28
mfocrf r11,8
...
The associated stub:
:
mflr r0
bl ftrace_caller
mtlr r0
b kernel_clone+0xc
...
Signed-off-by: Naveen N Rao
---
arch/powerpc/Kconfig
//tools/Makefile with .arch.vmlinux.o target, which
will be invoked prior to the final vmlinux link step.
Signed-off-by: Naveen N Rao
---
arch/Kconfig | 6 ++
scripts/Makefile.vmlinux | 8
scripts/link-vmlinux.sh | 11 ---
3 files changed, 22 insertions(+), 3
structions during trampoline
attach/detach.
- Naveen
Naveen N Rao (17):
powerpc/trace: Account for -fpatchable-function-entry support by
toolchain
powerpc/kprobes: Use ftrace to determine if a probe is at function
entry
powerpc64/ftrace: Nop out additional 'std' instru
message in the kernel log:
trace_kprobe: Could not probe notrace function
update_sd_lb_stats.constprop.0
Signed-off-by: Naveen N Rao
---
v4: Use printk format specifier %ps with probe address to lookup the
symbol, as suggested by Masami.
kernel/trace/trace_kprobe.c | 4 ++--
1 file changed, 2 i
On Thu, Dec 14, 2023 at 08:02:10AM +0900, Masami Hiramatsu wrote:
> On Wed, 13 Dec 2023 20:09:14 +0530
> Naveen N Rao wrote:
>
> > Trying to probe update_sd_lb_stats() using perf results in the below
> > message in the kernel log:
> > trace_kprobe: Could not p
trace_kprobe: Could not probe notrace function update_sd_lb_stats.constprop.0
Signed-off-by: Naveen N Rao
---
v3: Remove tk parameter from within_notrace_func() as suggested by
Masami
kernel/trace/trace_kprobe.c | 11 ++-
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/ke
Christophe Leroy wrote:
For that, create a 32 bits version of patch_imm64_load_insns()
and create a patch_imm_load_insns() which calls
patch_imm32_load_insns() on PPC32 and patch_imm64_load_insns()
on PPC64.
Adapt optprobes_head.S for PPC32. Use PPC_LL/PPC_STL macros instead
of raw ld/std, opt o
user maps a
> > relevant page via mmap(), instruction is replaced via mmap() code
> > path. But because Uprobe is invalid, entire mmap() operation can
> > not be stopped. In this case just print an error and continue.
> >
> > Signed-off-by: Ravi Bangoria
> > Acked-
CONFIG_PPC64, and I don't think we need to confirm if we're running on a
ISA V3.1 for the below check.
With that:
Acked-by: Naveen N. Rao
> +
> + if (ppc_inst_prefixed(auprobe->insn) && (addr & 0x3F) == 0x3C) {
> + pr_info_ratelimited("Cannot register a uprobe on 64 byte unaligned prefixed instruction\n");
> + return -EINVAL;
> + }
> +
- Naveen
On 2021/02/04 06:38PM, Naveen N. Rao wrote:
> On 2021/02/04 04:17PM, Ravi Bangoria wrote:
> > Don't allow Uprobe on 2nd word of a prefixed instruction. As per
> > ISA 3.1, prefixed instruction should not cross 64-byte boundary.
> > So don't allow Uprobe on su
On 2021/02/04 04:19PM, Ravi Bangoria wrote:
>
>
> On 2/4/21 4:17 PM, Ravi Bangoria wrote:
> > Don't allow Uprobe on 2nd word of a prefixed instruction. As per
> > ISA 3.1, prefixed instruction should not cross 64-byte boundary.
> > So don't allow Uprobe on such prefixed instruction as well.
> >
On 2021/02/04 04:17PM, Ravi Bangoria wrote:
> Don't allow Uprobe on 2nd word of a prefixed instruction. As per
> ISA 3.1, prefixed instruction should not cross 64-byte boundary.
> So don't allow Uprobe on such prefixed instruction as well.
>
> There are two ways probed instruction is changed in ma
++-
> 1 file changed, 8 insertions(+), 5 deletions(-)
Suggested-by: Naveen N. Rao
Acked-by: Naveen N. Rao
Thanks,
Naveen
Masami Hiramatsu wrote:
On Tue, 5 Jan 2021 19:01:56 +0900
Masami Hiramatsu wrote:
On Tue, 5 Jan 2021 12:27:30 +0530
"Naveen N. Rao" wrote:
> Not all symbols are blacklisted on powerpc. Disable multiple_kprobes
> test until that is sorted, so that rest of ftrace and kprobe
Masami Hiramatsu wrote:
On Tue, 5 Jan 2021 12:27:30 +0530
"Naveen N. Rao" wrote:
Not all symbols are blacklisted on powerpc. Disable multiple_kprobes
test until that is sorted, so that rest of ftrace and kprobe selftests
can be run.
This looks good to me, but could you try t
Not all symbols are blacklisted on powerpc. Disable multiple_kprobes
test until that is sorted, so that rest of ftrace and kprobe selftests
can be run.
Signed-off-by: Naveen N. Rao
---
.../testing/selftests/ftrace/test.d/kprobe/multiple_kprobes.tc | 2 +-
1 file changed, 1 insertion(+), 1
Arnaldo Carvalho de Melo wrote:
Em Fri, Dec 18, 2020 at 08:08:56PM +0530, Naveen N. Rao escreveu:
Hi Arnaldo,
Arnaldo Carvalho de Melo wrote:
> Em Fri, Dec 18, 2020 at 08:26:59AM -0300, Arnaldo Carvalho de Melo escreveu:
> > Em Fri, Dec 18, 2020 at 03:59:23PM +0800, Tiezhu Yang
we get
oops s/s390/powerpc/g :-)
notified when the copy drifts, so that we can see if it still continues
working and we can get new syscalls to be supported in things like 'perf
trace'?
Yes, this looks good to me:
Reviewed-by: Naveen N. Rao
FWIW, I had posted a similar patch back in Apr
Steven Rostedt wrote:
On Thu, 26 Nov 2020 23:38:38 +0530
"Naveen N. Rao" wrote:
On powerpc, kprobe-direct.tc triggered FTRACE_WARN_ON() in
ftrace_get_addr_new() followed by the below message:
Bad trampoline accounting at: 4222522f (wake_up_process+0xc/0x20)
(f001)
text.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 75 +-
1 file changed, 33 insertions(+), 42 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace.c
b/arch/powerpc/kernel/trace/ftrace.c
index 14b39f7797d455..7ddb6e4b527c39 100644
--- a
, and it isn't evident that the graph caller has too
deep a call stack to cause issues.
Signed-off-by: Naveen N. Rao
---
.../powerpc/kernel/trace/ftrace_64_mprofile.S | 28 +--
1 file changed, 7 insertions(+), 21 deletions(-)
diff --git a/arch/powerpc/kernel/
Add a simple powerpc trampoline to demonstrate use of ftrace direct on
powerpc.
Signed-off-by: Naveen N. Rao
---
samples/Kconfig | 2 +-
samples/ftrace/ftrace-direct-modify.c | 58 +++
samples/ftrace/ftrace-direct-too.c | 48
We currently assume that ftrace locations are patched to go to either
ftrace_caller or ftrace_regs_caller. Drop this assumption in preparation
for supporting ftrace direct calls.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 107 +++--
1 file
.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/ftrace.h | 14 ++
arch/powerpc/kernel/trace/ftrace.c | 140 +-
.../powerpc/kernel/trace/ftrace_64_mprofile.S | 40 -
4 files changed, 182
, this is not required. Drop it.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index bbe871b47ade58..c5602e9b07faa3
Use FTRACE_REGS_ADDR instead of keying off
CONFIG_DYNAMIC_FTRACE_WITH_REGS to identify the proper ftrace trampoline
address to use.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 12 ++--
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc
ftrace_plt_tramps[] was intended to speed up skipping plt branches, but
the code was never completed. It is also not significantly better than
reading and decoding the instruction. Remove it.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 8
1 file chang
Add regs_get_kernel_argument() for a rudimentary way to access
kernel function arguments.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/ptrace.h | 31 +++
2 files changed, 32 insertions(+)
diff --git a/arch
We need to remove hash entry if register_ftrace_function() fails.
Consolidate the cleanup to be done after register_ftrace_function() at
the end.
Signed-off-by: Naveen N. Rao
---
kernel/trace/ftrace.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/trace/ftrace.c b
DYNAMIC_FTRACE_WITH_DIRECT_CALLS should depend on
DYNAMIC_FTRACE_WITH_REGS since we need ftrace_regs_caller().
Fixes: 763e34e74bb7d5c ("ftrace: Add register_ftrace_direct()")
Signed-off-by: Naveen N. Rao
---
kernel/trace/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
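The resulting dependency reads roughly like this in Kconfig terms (paraphrased from the description above; kernel/trace/Kconfig is the authoritative source):

```kconfig
config DYNAMIC_FTRACE_WITH_DIRECT_CALLS
	def_bool y
	# Direct calls return through ftrace_regs_caller(), so the
	# WITH_REGS variant (not plain DYNAMIC_FTRACE) is required.
	depends on DYNAMIC_FTRACE_WITH_REGS
	depends on HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
```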
Architectures may want to do some validation (such as to ensure that the
trampoline code is reachable from the provided ftrace location) before
accepting ftrace direct registration. Add helpers for the same.
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 2 ++
kernel/trace/ftrace.c
t
capture all trampolines.
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 5 ---
kernel/trace/ftrace.c | 84 ++
2 files changed, 4 insertions(+), 85 deletions(-)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 1bd3a0356ae4
ect module is going away. This
happens because we are checking if any ftrace_ops has the
FTRACE_FL_TRAMP flag set _before_ updating the filter hash.
The fix for this is to look for any _other_ ftrace_ops that also needs
FTRACE_FL_TRAMP.
Signed-off-by: Naveen N. Rao
---
kernel/trace/ftrace.c |
upstream issue since I am able to reproduce the lockup without these
patches. I will be looking into that to see if I can figure out the
cause of those lockups.
In the meantime, I would appreciate a review of these patches.
- Naveen
Naveen N. Rao (14):
ftrace: Fix updating FTRACE_FL_TRAMP
Hi Steven,
Steven Rostedt wrote:
From: "Steven Rostedt (VMware)"
In preparation to have arguments of a function passed to callbacks attached
to functions as default, change the default callback prototype to receive a
struct ftrace_regs as the forth parameter instead of a pt_regs.
For callback
[+ Maddy]
Leo Yan wrote:
If user specifies event type "ldst", PowerPC's perf_mem_events__name()
will wrongly return the store event name "cpu/mem-stores/".
This patch changes to return NULL for the event "ldst" on PowerPC.
Signed-off-by: Leo Yan
---
tools/perf/arch/powerpc/util/mem-events.c
.
Acked-by: Naveen N. Rao
- Naveen
probes.rst
Adjust the entry to the new file location.
Signed-off-by: Lukas Bulwahn
---
Naveen, Masami-san, please ack.
Jonathan, please pick this minor non-urgent patch into docs-next.
applies cleanly on next-20200724
Ah, sorry. Hadn't noticed this change from Mauro.
Acked-by: Naveen
Kprobes references are currently listed right after kretprobes example,
and appears to be part of the same section. Move this out to a separate
appendix for clarity.
Signed-off-by: Naveen N. Rao
---
Documentation/staging/kprobes.rst | 14 +-
1 file changed, 9 insertions(+), 5
Kprobes constitutes a dynamic tracing technology and as such can be
moved alongside documentation of other tracing technologies.
Signed-off-by: Naveen N. Rao
---
Documentation/staging/index.rst | 1 -
Documentation/trace/index.rst | 1 +
Documentation/{staging
This series updates some of the URLs in the kprobes document and moves
the same under trace/ directory.
- Naveen
Naveen N. Rao (3):
docs: staging/kprobes.rst: Update some of the references
docs: staging/kprobes.rst: Move references to a separate appendix
docs: Move kprobes.rst from
Some of the kprobes references are not valid anymore. Update the URLs to
point to their changed locations, where appropriate. Drop two URLs which
do not exist anymore.
Reported-by: Masami Hiramatsu
Signed-off-by: Naveen N. Rao
---
Documentation/staging/kprobes.rst | 6 ++
1 file changed, 2
Masami Hiramatsu wrote:
On Tue, 14 Jul 2020 00:02:49 +0200
"Alexander A. Klimov" wrote:
Am 13.07.20 um 16:20 schrieb Masami Hiramatsu:
> Hi Naveen and Alexander,
>
> On Fri, 10 Jul 2020 19:14:47 +0530
> "Naveen N. Rao" wrote:
>
>> Masami Hirama
Masami Hiramatsu wrote:
On Tue, 7 Jul 2020 21:49:59 +0200
"Alexander A. Klimov" wrote:
Rationale:
Reduces attack surface on kernel devs opening the links for MITM
as HTTPS traffic is much harder to manipulate.
Deterministic algorithm:
For each file:
If not .svg:
For each line:
If
ed-by: Sandipan Das
Leo, Naveen, can you comment on this?
Shoot -- this is a bad miss, I should have caught it. FWIW:
Reviewed-by: Naveen N. Rao
Thanks,
Naveen
Masami Hiramatsu wrote:
On Fri, 1 May 2020 17:37:56 +0200
Mauro Carvalho Chehab wrote:
There are several files that I was unable to find a proper place
for them, and 3 ones that are still in plain old text format.
Let's place those stuff behind the carpet, as we'd like to keep the
root direc
Michael Ellerman wrote:
"Naveen N. Rao" writes:
Michael Ellerman wrote:
"Gautham R. Shenoy" writes:
From: "Gautham R. Shenoy"
Currently on Pseries Linux Guests, the offlined CPU can be put to one
of the following two states:
- Long term processor c
Michael Ellerman wrote:
"Gautham R. Shenoy" writes:
From: "Gautham R. Shenoy"
Currently on Pseries Linux Guests, the offlined CPU can be put to one
of the following two states:
- Long term processor cede (also called extended cede)
- Returned to the Hypervisor via RTAS "stop-self" call.
telimited("Breakpoint hit on instruction that can't be emulated. "
	  "Breakpoint at 0x%lx will be disabled.\n", addr);
Otherwise:
Acked-by: Naveen N. Rao
- Naveen
+ goto disable;
+ }
/* Do not emulate user-space instructions, instead single-step them */
] return_to_handler+0x0/0x40
(vfs_read+0xb8/0x1b0)
[c000d1e33dd0] [c006ab58] return_to_handler+0x0/0x40
(ksys_read+0x7c/0x140)
[c000d1e33e20] [c006ab58] return_to_handler+0x0/0x40
(system_call+0x5c/0x68)
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/process.c
This associates entries in the ftrace_ret_stack with corresponding stack
frames, enabling more robust stack unwinding. Also update the only user
of ftrace_graph_ret_addr() to pass the stack pointer.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/asm-prototypes.h | 3 ++-
arch
Enable HAVE_FUNCTION_GRAPH_RET_ADDR_PTR for more robust stack unwinding
when function graph tracer is in use. Convert powerpc show_stack() to
use ftrace_graph_ret_addr() for better stack unwinding.
- Naveen
Naveen N. Rao (3):
ftrace: Look up the address of return_to_handler() using helpers
This ensures that we use the right address on architectures that use
function descriptors.
Signed-off-by: Naveen N. Rao
---
kernel/trace/fgraph.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 8dfd5021b933
Ravi Bangoria wrote:
On Powerpc64, watchpoint match range is double-word granular. On
a watchpoint hit, DAR is set to the first byte of overlap between
actual access and watched range. And thus it's quite possible that
DAR does not point inside user specified range. Ex, say user creates
a watchpo
g.h | 5 +
kernel/kprobes.c | 3 ++-
2 files changed, 7 insertions(+), 1 deletion(-)
Acked-by: Naveen N. Rao
- Naveen
Steven Rostedt wrote:
On Thu, 4 Jul 2019 20:04:41 +0530
"Naveen N. Rao" wrote:
kernel/trace/ftrace.c | 4
1 file changed, 4 insertions(+)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 7b037295a1f1..0791eafb693d 100644
--- a/kernel/trace/ftrace.c
+++ b/ke
The following commit has been merged into the perf/core branch of tip:
Commit-ID: 0a56e0603fa13af08816d673f6f71b68cda2fb2e
Gitweb:
https://git.kernel.org/tip/0a56e0603fa13af08816d673f6f71b68cda2fb2e
Author: Naveen N. Rao
AuthorDate: Tue, 27 Aug 2019 12:44:58 +05:30
cccd0 ("y2038: rename old time and utime syscalls")
commit 00bf25d693e7 ("y2038: use time32 syscall names on 32-bit")
commit 8dabe7245bbc ("y2038: syscalls: rename y2038 compat syscalls")
commit 0d6040d46817 ("arch: add split IPC system calls where needed"
Jiong Wang wrote:
Naveen N. Rao writes:
Since BPF constant blinding is performed after the verifier pass, the
ALU32 instructions inserted for doubleword immediate loads don't have a
corresponding zext instruction. This is causing a kernel oops on powerpc
and can be reproduced by ru
_harden=2.
Fix this by emitting BPF_ZEXT during constant blinding if
prog->aux->verifier_zext is set.
Fixes: a4b1d3c1ddf6cb ("bpf: verifier: insert zero extension according to
analysis result")
Reported-by: Michael Ellerman
Signed-off-by: Naveen N. Rao
---
Changes since RFC:
Jiong Wang wrote:
Michael Ellerman writes:
"Naveen N. Rao" writes:
Since BPF constant blinding is performed after the verifier pass, there
are certain ALU32 instructions inserted which don't have a corresponding
zext instruction inserted after. This is causing a kernel oops
Naveen N. Rao wrote:
Since BPF constant blinding is performed after the verifier pass, there
are certain ALU32 instructions inserted which don't have a corresponding
zext instruction inserted after. This is causing a kernel oops on
powerpc and can be reproduced by running 'test_cgro
Nick Desaulniers wrote:
Reported-by: Sedat Dilek
Suggested-by: Josh Poimboeuf
Signed-off-by: Nick Desaulniers
---
Acked-by: Naveen N. Rao
- Naveen
Jisheng Zhang wrote:
This patch implements KPROBES_ON_FTRACE for arm64.
~ # mount -t debugfs debugfs /sys/kernel/debug/
~ # cd /sys/kernel/debug/
/sys/kernel/debug # echo 'p _do_fork' > tracing/kprobe_events
before the patch:
/sys/kernel/debug # cat kprobes/list
ff801009ff7c k _do_fork+0
Jisheng Zhang wrote:
For KPROBES_ON_FTRACE case, we need to adjust the kprobe's addr
correspondingly.
Signed-off-by: Jisheng Zhang
---
kernel/kprobes.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 9873fc627d61..f8400753a8a9 100644
--- a/kerne
pf_jit_harden=2.
Fix this by emitting BPF_ZEXT during constant blinding if
prog->aux->verifier_zext is set.
Fixes: a4b1d3c1ddf6cb ("bpf: verifier: insert zero extension according to
analysis result")
Reported-by: Michael Ellerman
Signed-off-by: Naveen N. Rao
---
This approach (t
Naveen N. Rao wrote:
Two patches addressing bugs in ftrace function probe handling. The first
patch addresses a NULL pointer dereference reported by LTP tests, while
the second one is a trivial patch to address a missing check for return
value, found by code inspection.
Steven,
Can you
In register_ftrace_function_probe(), we are not checking the return
value of alloc_and_copy_ftrace_hash(). The subsequent call to
ftrace_match_records() may end up dereferencing the same. Add a check to
ensure this doesn't happen.
Signed-off-by: Naveen N. Rao
---
kernel/trace/ftrace.
up seeing a NULL
filter_hash.
Fix this by just checking for a NULL filter_hash in t_probe_next(). If
the filter_hash is NULL, then this probe is just being added and we can
simply return from here.
Signed-off-by: Naveen N. Rao
---
kernel/trace/ftrace.c | 4
1 file changed, 4 insertions(+
Two patches addressing bugs in ftrace function probe handling. The first
patch addresses a NULL pointer dereference reported by LTP tests, while
the second one is a trivial patch to address a missing check for return
value, found by code inspection.
- Naveen
Naveen N. Rao (2):
ftrace: Fix
Steven Rostedt wrote:
On Thu, 27 Jun 2019 20:58:20 +0530
"Naveen N. Rao" wrote:
> But interesting, I don't see a synchronize_rcu_tasks() call
> there.
We felt we don't need it in this case. We patch the branch to ftrace
with a nop first. Other cpus should see t
Hi Steven,
Thanks for the review!
Steven Rostedt wrote:
On Thu, 27 Jun 2019 16:53:52 +0530
"Naveen N. Rao" wrote:
With -mprofile-kernel, gcc emits 'mflr r0', followed by 'bl _mcount' to
enable function tracing and profiling. So far, with dynamic ftrace, we
use
Naveen N. Rao wrote:
With -mprofile-kernel, gcc emits 'mflr r0', followed by 'bl _mcount' to
enable function tracing and profiling. So far, with dynamic ftrace, we
used to only patch out the branch to _mcount(). However, mflr is
executed by the branch unit that can only exec
Steven Rostedt wrote:
On Thu, 27 Jun 2019 16:53:50 +0530
"Naveen N. Rao" wrote:
In commit a0572f687fb3c ("ftrace: Allow ftrace_replace_code() to be
schedulable), the generic ftrace_replace_code() function was modified to
accept a flags argument in place of a single 'enable
Naveen N. Rao wrote:
In commit a0572f687fb3c ("ftrace: Allow ftrace_replace_code() to be
schedulable), the generic ftrace_replace_code() function was modified to
accept a flags argument in place of a single 'enable' flag. However, the
x86 version of this function was not update
up ftrace filter IP. This won't work if the address points to any
instruction apart from the one that has a branch to _mcount(). To
resolve this, have [dis]arm_kprobe_ftrace() use ftrace_location() to
identify the filter IP.
Acked-by: Masami Hiramatsu
Signed-off-by: Naveen N. Rao
---
k
uction is indeed the
'mflr r0'. Earlier -mprofile-kernel ABI included a 'std r0,stack'
instruction between the 'mflr r0' and the 'bl _mcount'. This is harmless
as the 'std r0,stack' instruction is inconsequential and is not relied
upon.
Suggeste
: a0572f687fb3c ("ftrace: Allow ftrace_replace_code() to be schedulable")
Signed-off-by: Naveen N. Rao
---
arch/x86/kernel/ftrace.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 0927bb158ffc..f34005a17051 100
nop
out the preceding mflr.
- Naveen
Naveen N. Rao (7):
ftrace: Expose flags used for ftrace_replace_code()
x86/ftrace: Fix use of flags in ftrace_replace_code()
ftrace: Expose __ftrace_replace_code()
powerpc/ftrace: Additionally nop out the preceding mflr with
-mprofile-kernel
ftra
Since ftrace_replace_code() is a __weak function and can be overridden,
we need to expose the flags that can be set. So, move the flags enum to
the header file.
Reviewed-by: Steven Rostedt (VMware)
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 5 +
kernel/trace/ftrace.c | 5
nip to the pre and post probe handlers.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes-ftrace.c | 32 +++-
1 file changed, 31 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/kprobes-ftrace.c
b/arch/powerpc/kernel/kprobes-ftrace.c
index 97
then patch in the branch to _mcount(). We override
ftrace_replace_code() with a powerpc64 variant for this purpose.
Suggested-by: Nicholas Piggin
Reviewed-by: Nicholas Piggin
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 258 ++---
1 file chang
While over-riding ftrace_replace_code(), we still want to reuse the
existing __ftrace_replace_code() function. Rename the function and
make it available for other kernel code.
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 1 +
kernel/trace/ftrace.c | 8
2 files changed, 5
Fixes: c7d64b560ce80 ("powerpc/ftrace: Enable C Version of recordmcount")
Signed-off-by: Naveen N. Rao
---
scripts/recordmcount.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/scripts/recordmcount.h b/scripts/recordmcount.h
index 13c5e6c8829c..47fca2c69a73 100644
--- a/script
Masami Hiramatsu wrote:
On Tue, 18 Jun 2019 20:17:06 +0530
"Naveen N. Rao" wrote:
With KPROBES_ON_FTRACE, kprobe is allowed to be inserted on instructions
that branch to _mcount (referred to as ftrace location). With
-mprofile-kernel, we now include the preceding 'mflr r0