Based on Matthew Wilcox's patches for other architectures.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/string.h | 24
arch/powerpc/lib/mem_64.S | 19 ++-
2 files changed, 42 insertions(+), 1 deletion(-)
diff --git a/arch/po
Use the newly introduced memset32() to pre-fill BPF page(s) with trap
instructions.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp64.c | 6 +-
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c
b/arch/powerpc/net/bpf_jit_comp64.c
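A minimal sketch of the change described above, assuming memset32() from
Matthew Wilcox's series (void *memset32(uint32_t *s, uint32_t v, size_t n));
the constant name below is illustrative, not necessarily the patch's:

#include <linux/string.h>	/* memset32() */
#include <linux/types.h>

#define PPC_TRAP_INSN	0x7fe00008u	/* unconditional "trap" on powerpc */

/* Fill holes in the JIT image with traps so stray execution faults. */
static void bpf_jit_fill_ill_insns(void *area, unsigned int size)
{
	memset32(area, PPC_TRAP_INSN, size / sizeof(u32));
}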
On 2017/03/28 11:44AM, Michael Ellerman wrote:
> "Naveen N. Rao" writes:
>
> > diff --git a/arch/powerpc/lib/mem_64.S b/arch/powerpc/lib/mem_64.S
> > index 85fa9869aec5..ec531de6 100644
> > --- a/arch/powerpc/lib/mem_64.S
> > +++ b/arch/
On 2017/03/29 10:36PM, Michael Ellerman wrote:
> "Naveen N. Rao" writes:
> > I also tested zram today with the command shared by Wilcox:
> >
> > without patch: 1.493782568 seconds time elapsed ( +- 0.08% )
> > with patch: 1.4084575
/* Do real store operation to complete stwu */
Can you also update the above comment to refer to 'stdu'?
Apart from that, for this patch:
Reviewed-by: Naveen N. Rao
- Naveen
- lwz r5,GPR1(r1)
+ ld r5,GPR1(r1)
std r8,0(r5)
/* Clear _TIF_EMULATE_STACK_STORE flag */
--
1.9.3
v1:
https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1334843.html
For v2, this series has been re-ordered and rebased on top of
powerpc/next so as to make it easier to resolve conflicts with -tip. No
other changes.
- Naveen
Naveen N. Rao (5):
kprobes: convert kprobe_lookup_name
ine+0x0[OPTIMIZED]
Acked-by: Ananth N Mavinakayanahalli
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 4 ++--
arch/powerpc/kernel/optprobes.c | 4 ++--
include/linux/kprobes.h | 2 +-
kernel/kprobes.c | 7 ---
4 files changed, 9 insertions(+), 8 deletions(-)
The macro is now pretty long and ugly on powerpc. In light of
further changes needed here, convert it to a __weak variant that can be
overridden by a nicer-looking function.
Suggested-by: Masami Hiramatsu
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/kprobes.h | 53
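As a sketch, the generic __weak fallback could look like this (assuming it
wraps kallsyms_lookup_name() just as the old macro did):

#include <linux/kallsyms.h>
#include <linux/kprobes.h>

/* Generic version; powerpc overrides this to handle ABIv1 dot-symbols etc. */
kprobe_opcode_t * __weak kprobe_lookup_name(const char *name)
{
	return (kprobe_opcode_t *)kallsyms_lookup_name(name);
}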
On kprobe handler re-entry, try to emulate the instruction rather than
single stepping always.
As a related change, remove the duplicate saving of msr, as that is
already done in set_current_kprobe().
Acked-by: Ananth N Mavinakayanahalli
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel
This helper will be used in a subsequent patch to emulate instructions
on re-entering the kprobe handler. No functional change.
Acked-by: Ananth N Mavinakayanahalli
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 52 ++-
1 file changed
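A rough sketch of the helper and its use on re-entry (names here follow the
description above but are illustrative; emulate_step() is the existing
powerpc instruction emulator and updates regs->nip itself on success):

#include <linux/kprobes.h>
#include <asm/sstep.h>		/* emulate_step() */

/* Returns > 0 if the instruction was emulated, 0 if it must be single
 * stepped, < 0 on error. */
static int try_to_emulate(struct kprobe *p, struct pt_regs *regs)
{
	unsigned int insn = *p->ainsn.insn;

	return emulate_step(regs, insn);
}

On re-entry, the handler can then call try_to_emulate() and only fall back
to single stepping when it returns 0.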
kprobe_lookup_name() is specific to the kprobe subsystem and may not
always return the function entry point (in a subsequent patch for
KPROBES_ON_FTRACE). For looking up function entry points, introduce a
separate helper and use it in optprobes.c.
Signed-off-by: Naveen N. Rao
---
arch
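A sketch of such an entry-point lookup helper (the helper name is
illustrative; ppc_function_entry() from code-patching.h already returns the
local entry point on ABIv2, or dereferences the function descriptor on
ABIv1):

#include <linux/kallsyms.h>
#include <asm/code-patching.h>	/* ppc_function_entry() */

static unsigned long ppc_lookup_function_entry(const char *name)
{
	unsigned long addr = kallsyms_lookup_name(name);

	/* resolve to the actual entry point rather than the symbol address */
	if (addr)
		addr = ppc_function_entry((void *)addr);
	return addr;
}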
Neither livepatch_handler()
nor ftrace_graph_caller() returns here.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/entry_64.S | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 6432d4bf08c8..8fd8718
without that patch.
- Naveen
Masami Hiramatsu (1):
kprobes: Skip preparing optprobe if the probe is ftrace-based
Naveen N. Rao (4):
powerpc: ftrace: minor cleanup
powerpc: ftrace: restore LR from pt_regs
powerpc: kprobes: add support for KPROBES_ON_FTRACE
powerpc: kprobes: prefer ftrace
044fc0 k kretprobe_trampoline+0x0[OPTIMIZED]
and after patch:
# cat ../kprobes/list
c00d074c k _do_fork+0xc[DISABLED][FTRACE]
c00412b0 k kretprobe_trampoline+0x0[OPTIMIZED]
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 17
Live patch and function graph continue to work fine with this change.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/entry_64.S | 13 +++--
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 8fd8718722a1
From: Masami Hiramatsu
Skip preparing an optprobe if the probe is ftrace-based, since it must
not be optimized anyway (or is already optimized by ftrace).
Tested-by: Naveen N. Rao
Signed-off-by: Masami Hiramatsu
---
Though this patch is generic, it is needed for KPROBES_ON_FTRACE to work
on
on the x86 code by Masami.
Signed-off-by: Naveen N. Rao
---
.../debug/kprobes-on-ftrace/arch-support.txt | 2 +-
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/kprobes.h | 10 ++
arch/powerpc/kernel/Makefile | 3
Split ftrace_64.S further, retaining the core 64-bit ftrace aspects
in ftrace_64.S and moving ftrace_caller() and ftrace_graph_caller() into
separate files based on -mprofile-kernel. The livepatch routines are now
all contained within the mprofile file.
Signed-off-by: Naveen N. Rao
---
arch
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/Makefile | 9 +-
arch/powerpc/kernel/entry_32.S | 107 ---
arch/powerpc/kernel/entry_64.S | 379 -
arch/powerpc/kernel/trace/Makefile| 24 ++
arch/powerpc/k
v3:
https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg114669.html
For v4, this has been rebased on top of powerpc/next as well as the
KPROBES_ON_FTRACE series. No other changes.
- Naveen
Naveen N. Rao (2):
powerpc: split ftrace bits into a separate file
powerpc: ftrace_64: split
/perf$ sudo cat /sys/kernel/debug/kprobes/list
c05f3b48 k read_mem+0x8[DISABLED]
Acked-by: Masami Hiramatsu
Signed-off-by: Naveen N. Rao
---
v2:
- rebased on top of powerpc/next along with related kprobes patches
- removed incorrect blacklist of kretprobe_trampoline.
arch/powe
Excerpts from PrasannaKumar Muralidharan's message of April 5, 2017 11:21:
On 30 March 2017 at 12:46, Naveen N. Rao
wrote:
Also, with a simple module to memset64() a 1GB vmalloc'ed buffer, here
are the results:
generic: 0.245315533 seconds time elapsed ( +- 1.83% )
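A minimal sketch of such a test module, assuming memset64() from the same
series (void *memset64(uint64_t *s, uint64_t v, size_t n)); the timing
would come from perf stat on the module load:

#include <linux/module.h>
#include <linux/string.h>	/* memset64() */
#include <linux/vmalloc.h>

static int __init memset64_bench_init(void)
{
	const size_t size = 1UL << 30;	/* 1GB */
	u64 *buf = vmalloc(size);

	if (!buf)
		return -ENOMEM;
	memset64(buf, 0x0123456789abcdefULL, size / sizeof(u64));
	vfree(buf);
	return 0;
}
module_init(memset64_bench_init);
MODULE_LICENSE("GPL");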
On 2017/04/13 12:02PM, Masami Hiramatsu wrote:
> Hi Naveen,
Hi Masami,
>
> BTW, I saw you sent 3 different series, are there any
> conflict each other? or can we pick those independently?
Yes, all three of these patch series are based on powerpc/next and they do
depend on each other, as they are
On 2017/04/13 01:32PM, Masami Hiramatsu wrote:
> On Wed, 12 Apr 2017 16:28:26 +0530
> "Naveen N. Rao" wrote:
>
> > kprobe_lookup_name() is specific to the kprobe subsystem and may not
> > always return the function entry point (in a subsequent patch for
> > K
On 2017/04/13 01:34PM, Masami Hiramatsu wrote:
> On Wed, 12 Apr 2017 16:28:27 +0530
> "Naveen N. Rao" wrote:
>
> > This helper will be used in a subsequent patch to emulate instructions
> > on re-entering the kprobe handler. No functional change.
>
> In th
On 2017/04/13 01:37PM, Masami Hiramatsu wrote:
> On Wed, 12 Apr 2017 16:28:28 +0530
> "Naveen N. Rao" wrote:
>
> > On kprobe handler re-entry, try to emulate the instruction rather than
> > single stepping always.
> >
>
> > As a related change,
Excerpts from Masami Hiramatsu's message of April 13, 2017 10:04:
On Wed, 12 Apr 2017 16:28:27 +0530
"Naveen N. Rao" wrote:
This helper will be used in a subsequent patch to emulate instructions
on re-entering the kprobe handler. No functional change.
In this case, please m
CER=y
+CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_SCHED_TRACER=y
+CONFIG_FTRACE_SYSCALLS=y
Any reason to not enable this for ppc64 and pseries defconfigs?
Apart from that, for this patch:
Acked-by: Naveen N. Rao
- Naveen
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_UPROBE_EVENT=y
CONFIG_CODE_PATCHING_SELFTEST=y
diff --
Excerpts from David Laight's message of April 18, 2017 18:22:
From: Naveen N. Rao
Sent: 12 April 2017 11:58
...
+kprobe_opcode_t *kprobe_lookup_name(const char *name)
+{
...
+ char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];
+ const char *modsym;
+ bool dot_app
On 2017/04/19 08:48AM, David Laight wrote:
> From: Naveen N. Rao
> > Sent: 19 April 2017 09:09
> > To: David Laight; Michael Ellerman
> > Cc: linux-ker...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; Masami
> > Hiramatsu; Ingo Molnar
> > Subject: RE
address review comments
from David Laight.
- Naveen
Naveen N. Rao (7):
kprobes: convert kprobe_lookup_name() to a function
powerpc: kprobes: fix handling of function offsets on ABIv2
kprobes: validate the symbol name length
powerpc: kprobes: use safer string functions in kprobe_lookup_name(
ine+0x0[OPTIMIZED]
Acked-by: Ananth N Mavinakayanahalli
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 4 ++--
arch/powerpc/kernel/optprobes.c | 4 ++--
include/linux/kprobes.h | 2 +-
kernel/kprobes.c | 7 ---
4 files changed, 9 insertions(+), 8 deletions(-)
When a kprobe is being registered, we use the symbol_name field to
lookup the address where the probe should be placed. Since this is a
user-provided field, let's ensure that the length of the string is
within expected limits.
Signed-off-by: Naveen N. Rao
---
include/linux/kprobes.h
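A sketch of the check during registration (placement and helper name are
illustrative; KSYM_NAME_LEN bounds any valid kallsyms entry):

#include <linux/kallsyms.h>	/* KSYM_NAME_LEN */
#include <linux/kprobes.h>

static int kprobe_symbol_name_check(struct kprobe *p)
{
	if (!p->symbol_name)
		return 0;
	/* a name this long cannot match any symbol */
	if (strnlen(p->symbol_name, KSYM_NAME_LEN) == KSYM_NAME_LEN)
		return -EINVAL;
	return 0;
}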
The macro is now pretty long and ugly on powerpc. In light of
further changes needed here, convert it to a __weak variant that can be
overridden by a nicer-looking function.
Suggested-by: Masami Hiramatsu
Acked-by: Masami Hiramatsu
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm
On kprobe handler re-entry, try to emulate the instruction rather than
single stepping always.
Acked-by: Ananth N Mavinakayanahalli
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/kernel/kprobes.c b/arch
Convert usage of strncpy()/strncat() to memcpy()/strlcat() for simpler
and safer string manipulation.
Reported-by: David Laight
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 11 +--
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kernel
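An illustrative before/after of the pattern (not the exact hunk): with a
known length, memcpy() avoids strncpy()'s padding semantics, and strlcat()
bounds the total size and always NUL-terminates, which strncat() does not:

#include <linux/string.h>

static void build_dot_name(char *dot_name, size_t size, const char *name)
{
	/* was: strncpy(dot_name, ".", ...) */
	memcpy(dot_name, ".", 2);	/* copies the NUL terminator too */
	/* was: strncat(dot_name, name, ...) */
	strlcat(dot_name, name, size);
}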
set_current_kprobe() already saves regs->msr into kprobe_saved_msr. Remove
the redundant save.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 067e9863b
No functional changes.
Acked-by: Ananth N Mavinakayanahalli
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 52 ++-
1 file changed, 31 insertions(+), 21 deletions(-)
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel
Neither livepatch_handler()
nor ftrace_graph_caller() returns here.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/entry_64.S | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 6432d4bf08c8..8fd8718
on the x86 code by Masami.
Signed-off-by: Naveen N. Rao
---
.../debug/kprobes-on-ftrace/arch-support.txt | 2 +-
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/kprobes.h | 10 ++
arch/powerpc/kernel/Makefile | 3
as we crash on powerpc without that patch.
- Naveen
Masami Hiramatsu (1):
kprobes: Skip preparing optprobe if the probe is ftrace-based
Naveen N. Rao (5):
powerpc: ftrace: minor cleanup
powerpc: ftrace: restore LR from pt_regs
powerpc: kprobes: add support for KPROBES_ON_FTRACE
powerpc
From: Masami Hiramatsu
Skip preparing an optprobe if the probe is ftrace-based, since it must
not be optimized anyway (or is already optimized by ftrace).
Tested-by: Naveen N. Rao
Signed-off-by: Masami Hiramatsu
---
kernel/kprobes.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
Live patch and function graph continue to work fine with this change.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/entry_64.S | 13 +++--
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 8fd8718722a1
kprobe_lookup_name() is specific to the kprobe subsystem and may not
always return the function entry point (in a subsequent patch for
KPROBES_ON_FTRACE). For looking up function entry points, introduce a
separate helper and use it in optprobes.c.
Signed-off-by: Naveen N. Rao
---
arch
044fc0 k kretprobe_trampoline+0x0[OPTIMIZED]
and after patch:
# cat ../kprobes/list
c00d074c k _do_fork+0xc[DISABLED][FTRACE]
c00412b0 k kretprobe_trampoline+0x0[OPTIMIZED]
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 17
, I'm posting these right away.
I'd especially appreciate a review of the first patch and feedback on
whether it does the right thing with/without relocation. My tests
didn't reveal any issues.
Thanks,
Naveen
Naveen N. Rao (2):
powerpc: kprobes: blacklist exception handlers
p
Introduce __head_end to mark the end of the early fixed sections and use
it to blacklist all exception handlers from kprobes.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/sections.h | 1 +
arch/powerpc/kernel/kprobes.c | 9 +
arch/powerpc/kernel/vmlinux.lds.S | 2
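A sketch of how the new marker feeds the blacklist, via the
arch_within_kprobe_blacklist() hook that architectures may override (exact
placement in kprobes.c may differ):

#include <linux/kprobes.h>
#include <asm/sections.h>	/* _stext, and the new __head_end */

bool arch_within_kprobe_blacklist(unsigned long addr)
{
	/* the usual .kprobes.text check */
	if (addr >= (unsigned long)__kprobes_text_start &&
	    addr < (unsigned long)__kprobes_text_end)
		return true;
	/* everything in the early fixed sections is off limits */
	return addr >= (unsigned long)_stext &&
	       addr < (unsigned long)__head_end;
}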
Blacklist all the common and OOL exception handlers, as the kernel stack is
not yet set up, which means we can't take a trap at this point.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/head-64.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/include/asm/head-64
Excerpts from Masami Hiramatsu's message of April 19, 2017 20:07:
On Wed, 19 Apr 2017 18:21:02 +0530
"Naveen N. Rao" wrote:
When a kprobe is being registered, we use the symbol_name field to
lookup the address where the probe should be placed. Since this is a
user-provided fiel
powerpc tree, I followed this since I
felt that Michael Ellerman prefers to keep functional changes separate
from refactoring. I'm fine with either approach.
Michael?
Thanks!
- Naveen
Thank you,
On Wed, 19 Apr 2017 18:21:05 +0530
"Naveen N. Rao" wrote:
On kprobe handler re-
Excerpts from Michael Ellerman's message of April 20, 2017 12:03:
"Naveen N. Rao" writes:
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 71286dfd76a0..59159337a097 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kpro
Excerpts from Michael Ellerman's message of April 20, 2017 11:38:
"Naveen N. Rao" writes:
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 6a128f3a7ed1..bb86681c8a10 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1382,6 +1382,28 @@ bool within_kprobe_blacklis
When a kprobe is being registered, we use the symbol_name field to
lookup the address where the probe should be placed. Since this is a
user-provided field, let's ensure that the length of the string is
within expected limits.
Signed-off-by: Naveen N. Rao
---
Masami, Michael,
Here's a
Convert usage of strchr()/strncpy()/strncat() to
strnchr()/memcpy()/strlcat() for simpler and safer string manipulation.
Reported-by: David Laight
Signed-off-by: Naveen N. Rao
---
Changes: Additionally convert the strchr().
arch/powerpc/kernel/kprobes.c | 13 ++---
1 file changed, 6
Excerpts from Paul Clarke's message of April 21, 2017 18:41:
a nit or two, below...
On 04/21/2017 07:32 AM, Naveen N. Rao wrote:
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 6a128f3a7ed1..ff9b1ac72a38 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1383,6 +1383,34 @@
Excerpts from Christophe Leroy's message of April 21, 2017 18:32:
This patch implements CONFIG_DEBUG_RODATA on PPC32.
As for CONFIG_DEBUG_PAGEALLOC, it deactivates BAT and LTLB mappings
in order to allow page protection setup at the level of each page.
As BAT/LTLB mappings are deactivated, thei
Excerpts from Masami Hiramatsu's message of April 21, 2017 19:12:
On Wed, 19 Apr 2017 16:38:22 +0000
"Naveen N. Rao" wrote:
Excerpts from Masami Hiramatsu's message of April 19, 2017 20:07:
> On Wed, 19 Apr 2017 18:21:02 +0530
> "Naveen N. Rao" wrote:
>
On 04/21/2017 07:33 AM, Naveen N. Rao wrote:
Convert usage of strchr()/strncpy()/strncat() to
strnchr()/memcpy()/strlcat() for simpler and safer string manipulation.
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 97b5eed1f76d..c73fb6e3b43f 100644
--- a/arch/powe
Excerpts from Michael Ellerman's message of April 22, 2017 11:25:
"Naveen N. Rao" writes:
When a kprobe is being registered, we use the symbol_name field to
lookup the address where the probe should be placed. Since this is a
user-provided field, let's ensure that the len
Split ftrace_64.S further, retaining the core 64-bit ftrace aspects
in ftrace_64.S and moving ftrace_caller() and ftrace_graph_caller() into
separate files based on -mprofile-kernel. The livepatch routines are now
all contained within the mprofile file.
Signed-off-by: Naveen N. Rao
---
arch
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/Makefile | 9 +-
arch/powerpc/kernel/entry_32.S | 107 ---
arch/powerpc/kernel/entry_64.S | 378 -
arch/powerpc/kernel/trace/Makefile| 24 ++
arch/powerpc/k
Use safer string manipulation functions when dealing with a
user-provided string in kprobe_lookup_name().
Reported-by: David Laight
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 47 ++-
1 file changed, 20 insertions(+), 27 deletions
1. Fail early for invalid/zero length symbols.
2. Detect names of the form <module:symbol> and skip checking for kernel
symbols in that case.
Signed-off-by: Naveen N. Rao
---
Masami, Michael,
I have added two very simple checks here, which I felt is good to have,
rather than the elaborate checks in the
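A sketch of those two checks (helper name illustrative; the <module:symbol>
detection just looks for a ':' within a module-name-sized prefix):

#include <linux/module.h>	/* MODULE_NAME_LEN */
#include <linux/string.h>

static bool kprobe_symbol_name_plausible(const char *name, size_t len)
{
	if (!len)
		return false;	/* 1. fail early on empty names */
	if (strnchr(name, MODULE_NAME_LEN, ':'))
		return true;	/* 2. <module:symbol>: skip kernel-symbol checks */
	/* further checks against kernel symbol conventions would go here */
	return true;
}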
once I expand my tests.
I have converted many labels into private ones -- these are labels that I
felt are not necessary for reading stack traces. If any of those are
important to have, please let me know.
- Naveen
Naveen N. Rao (4):
powerpc/kprobes: cleanup system_call_common and blacklist it from
Convert some of the labels into private labels and blacklist
system_call_common() and system_call() from kprobes. We can't take a
trap at parts of these functions as either MSR_RI is unset or the
kernel stack pointer is not yet set up.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/k
It is actually safe to probe system_call() in entry_64.S, but only till
.Lsyscall_exit. To allow this, convert .Lsyscall_exit to a non-local
symbol __system_call() and blacklist that symbol, rather than
system_call().
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/entry_64.S | 24
Blacklist all functions invoked when we get a trap, through to the time
we invoke the kprobe handler.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/entry_64.S | 1 +
arch/powerpc/kernel/exceptions-64s.S | 1 +
arch/powerpc/kernel/time.c | 3 +++
arch/powerpc/kernel
Blacklist all functions involved when we return from a trap. We:
- convert some of the labels into private labels,
- remove the duplicate 'restore' label, and
- blacklist most functions involved while returning from a trap.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/entry
Excerpts from David Laight's message of April 25, 2017 22:06:
From: Naveen N. Rao
Sent: 25 April 2017 17:18
1. Fail early for invalid/zero length symbols.
2. Detect names of the form <module:symbol> and skip checking for kernel
symbols in that case.
Signed-off-by: Naveen N. Rao
---
Masami, Michael,
I
Excerpts from Masami Hiramatsu's message of April 26, 2017 10:11:
On Tue, 25 Apr 2017 21:37:11 +0530
"Naveen N. Rao" wrote:
Use safer string manipulation functions when dealing with a
user-provided string in kprobe_lookup_name().
Reported-by: David Laight
Signed-off-by
Michael Ellerman wrote:
> "Naveen N. Rao" writes:
>> diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
>> index 6a3b249a2ae1..d134b060564f 100644
>> --- a/kernel/kallsyms.c
>> +++ b/kernel/kallsyms.c
>> @@ -205,6 +205,12 @@ unsigned long kallsyms_loo
On 2017/04/27 11:24AM, Masami Hiramatsu wrote:
> Hello Naveen,
>
> On Tue, 25 Apr 2017 22:04:05 +0530
> "Naveen N. Rao" wrote:
>
> > This is the second in the series of patches to build out an appropriate
> > kprobes blacklist. This series blacklists sys
private -- these are labels that I
felt are not necessary for reading stack traces. If any of those are
important to have, please let me know.
- Naveen
Naveen N. Rao (3):
powerpc/kprobes: cleanup system_call_common and blacklist it from
kprobes
powerpc/kprobes: un-blacklist system_call() from
Convert some of the labels into private labels and blacklist
system_call_common() and system_call() from kprobes. We can't take a
trap at parts of these functions as either MSR_RI is unset or the
kernel stack pointer is not yet set up.
Reviewed-by: Masami Hiramatsu
Signed-off-by: Naveen N. Rao
Blacklist all functions involved while handling a trap. We:
- convert some of the labels into private labels,
- remove the duplicate 'restore' label, and
- blacklist most functions involved while handling a trap.
Reviewed-by: Masami Hiramatsu
Signed-off-by: Naveen N. Rao
---
ar
It is actually safe to probe system_call() in entry_64.S, but only till
.Lsyscall_exit. To allow this, convert .Lsyscall_exit to a non-local
symbol __system_call() and blacklist that symbol, rather than
system_call().
Reviewed-by: Masami Hiramatsu
Signed-off-by: Naveen N. Rao
---
arch/powerpc
On 2017/04/27 08:19PM, Michael Ellerman wrote:
> "Naveen N. Rao" writes:
>
> > It is actually safe to probe system_call() in entry_64.S, but only till
> > .Lsyscall_exit. To allow this, convert .Lsyscall_exit to a non-local
> > symbol __system_call() and
[Copying linuxppc-dev list which I missed cc'ing initially]
On 2017/05/03 03:58PM, Steven Rostedt wrote:
> On Wed, 3 May 2017 23:43:41 +0530
> "Naveen N. Rao" wrote:
>
> > This fixes a crash when function_graph and jprobes are used together.
> > This is esse
will be coding up and sending
across in a day or two.
This series has been run through ftrace selftests.
- Naveen
Naveen N. Rao (8):
powerpc/kprobes: Pause function_graph tracing during jprobes handling
powerpc/ftrace: Pass the correct stack pointer for
DYNAMIC_FTRACE_WITH_REGS
powerpc
jprobe_return(), which never returns to the hook, but instead to
the original jprobe'd function. The solution is to momentarily pause
function_graph tracing before invoking the jprobe hook and re-enable it
when returning to the original jprobe'd function.
Signed-off-by: Naveen N. Rao
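A sketch of the fix, using the existing pause_graph_tracing() /
unpause_graph_tracing() helpers from include/linux/ftrace.h (the elided
arch-specific context handling is unchanged):

#include <linux/ftrace.h>
#include <linux/kprobes.h>

int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	/* ... save the context and point NIP at the jprobe hook ... */
	pause_graph_tracing();	/* the hook's "return" is not a real return */
	return 1;
}

int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
{
	unpause_graph_tracing();
	/* ... restore the saved context of the jprobe'd function ... */
	return 1;
}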
Also, use SAVE_10GPRS() to simplify the code.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 20
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
b/arch/powerpc/kernel/trace
So, if NIP == R12, we know we came here due to jprobes and we just
branch to the new IP. Otherwise, we continue with livepatch processing
as usual.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 10 ++
1 file changed, 10 insertions(+)
diff --git
remove the redundant saving of LR in
ftrace_graph_caller() for similar reasons. It is sufficient to ensure
LR and r0 point to the new return address.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 4
1 file changed, 4 deletions(-)
diff --git a/arch/powerpc
first _20_ bytes of
a function.
However, ftrace_location_range() does an inclusive search and hence
passing (addr + 16) is still accurate.
Clarify this by updating the surrounding comments.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/livepatch.h | 4 ++--
arch/powerpc/kernel
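To make the inclusive-range arithmetic concrete (a hedged illustration;
ftrace_location_range(start, end) returns the first ftrace call site in
[start, end], inclusive of both bounds):

#include <linux/ftrace.h>

/* Is there an ftrace call site within the first 20 bytes (five 4-byte
 * instructions at offsets 0..16) of a function?  Because the search is
 * inclusive, addr + 16 covers the 5th instruction and hence all 20 bytes. */
static bool has_prologue_ftrace_site(unsigned long addr)
{
	return ftrace_location_range(addr, addr + 16) != 0;
}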
Use r14 for saving the original NIP and r15 for storing the
possibly modified NIP. r15 is later used to determine if the function
has been livepatched.
3. To re-use the same stack frame setup/teardown code, we have
ftrace_graph_caller() save the modified LR in pt_regs.
Signed-off-by: Naveen N. Rao
This is very handy for catching potential crashes due to unexpected
interactions of the function_graph tracer with weird things like
jprobes.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/asm-prototypes.h | 3 ++-
arch/powerpc/include/asm/ftrace.h | 3 +++
arch/powerpc
HAVE_FUNCTION_GRAPH_FP_TEST reveals another area (apart from jprobes)
that conflicts with the function_graph tracer: xmon. This is due to the
use of longjmp() in various places in xmon.
To address this, pause function_graph tracing while in xmon.
Signed-off-by: Naveen N. Rao
---
arch/powerpc
On 2017/04/27 02:06PM, Naveen N. Rao wrote:
> v2 changes:
> - Patches 3 and 4 from the previous series have been merged.
> - Updated to no longer blacklist functions involved with stolen time
> accounting.
>
> v1:
> https://www.mail-archive.com/linuxppc-dev@lists.ozla
On 2017/05/04 04:03PM, Michael Ellerman wrote:
> "Naveen N. Rao" writes:
>
> > On 2017/04/27 08:19PM, Michael Ellerman wrote:
> >> "Naveen N. Rao" writes:
> >>
> >> > It is actually safe to probe system_call() in entry_64.
ing on rfi and mtmsr instructions (checked for in arch_prepare_kprobe).
Suggested-by: Michael Ellerman
Signed-off-by: Naveen N. Rao
---
Michael,
I have named the new label system_call_exit so as to follow the
existing labels (system_call and system_call_common) and to not
conflict with the sy
Use safer string manipulation functions when dealing with a
user-provided string in kprobe_lookup_name().
Reported-by: David Laight
Signed-off-by: Naveen N. Rao
---
Changed to ignore return value of 0 from strscpy(), as suggested by
Masami.
- Naveen
arch/powerpc/kernel/kprobes.c | 47
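For context on that point, a hedged illustration of strscpy()'s return
convention: it returns the number of characters copied (excluding the NUL),
or -E2BIG on truncation, so 0 just means an empty source and is not an
error here (function shape illustrative):

#include <linux/string.h>

static char *copy_symbol_name(char *dot_name, size_t size, const char *name)
{
	ssize_t ret = strscpy(dot_name, name, size);

	if (ret < 0)
		return NULL;	/* -E2BIG: truncated, can't be a valid symbol */
	/* ret == 0 (empty name) falls through; the lookup will simply fail */
	return dot_name;
}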
On 2017/05/04 12:45PM, David Laight wrote:
> From: Naveen N. Rao [mailto:naveen.n@linux.vnet.ibm.com]
> > Sent: 04 May 2017 11:25
> > Use safer string manipulation functions when dealing with a
> > user-provided string in kprobe_lookup_name().
> >
> > Reported
frame header.
We introduce STACK_FRAME_PARM_SAVE to encode the offset of the parameter
save area from the stack frame pointer. Remove the similarly named
PARAMETER_SAVE_AREA_OFFSET in ptrace.c, as it is currently not used
anywhere.
Signed-off-by: Naveen N. Rao
---
Michael,
I've set the lim
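A sketch of what such a define encodes, using the standard ELF ABI frame
header sizes (the patch's exact spelling and guards may differ):

/* Offset of the parameter save area from the stack frame pointer:
 * ELFv1: 48-byte header (back chain, CR, LR, two reserved dwords, TOC)
 * ELFv2: 32-byte header (back chain, CR, LR, TOC)
 */
#ifdef PPC64_ELF_ABI_v2
#define STACK_FRAME_PARM_SAVE	32
#else
#define STACK_FRAME_PARM_SAVE	48
#endif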
Fix a circa 2005 FIXME by implementing a check to ensure that we
actually got into the jprobe break_handler() due to the trap in
jprobe_return().
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 20 +---
1 file changed, 9 insertions(+), 11 deletions(-)
diff
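A sketch of the kind of check described (the begin/end symbols here are
hypothetical): the break handler should only treat the trap as ours when
NIP points into jprobe_return():

/* hypothetical bounds around the trap emitted by jprobe_return() */
extern void jprobe_return_begin(void), jprobe_return_end(void);

int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
{
	if (regs->nip < (unsigned long)jprobe_return_begin ||
	    regs->nip >= (unsigned long)jprobe_return_end)
		return 0;	/* not the trap from jprobe_return() */

	/* ... restore the saved context and resume ... */
	return 1;
}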
missed
re-enabling preemption if the instruction emulation was successful. Fix
those issues.
Fixes: 22d8b3dec214c ("powerpc/kprobes: Emulate instructions on kprobe handler re-entry")
Signed-off-by: Naveen N. Rao
---
Michael,
Sorry for letting this slip through. Between when I first wrote t
On 2017/05/16 01:49PM, Balbir Singh wrote:
> arch_arm/disarm_probe use direct assignment for copying
> instructions, replace them with patch_instruction
Thanks for doing this!
We will also have to convert optprobes and ftrace to use
patch_instruction, but that can be done once the basic infrastr
On 2017/05/16 10:56AM, Anshuman Khandual wrote:
> On 05/16/2017 09:19 AM, Balbir Singh wrote:
> > patch_instruction is enhanced in this RFC to support
> > patching via a different virtual address (text_poke_area).
>
> Why writing instruction directly into the address is not
> sufficient and need t
Paolo Bonzini wrote:
The ARM and x86 architectures already use libdw, and it is useful to
have as much common code for the unwinder as possible. Porting PPC
to libdw only needs an architecture-specific hook to move the register
state from perf to libdw.
Thanks. Ravi has had a similar patch loc
On 2017/05/17 11:40AM, Balbir Singh wrote:
> On Tue, 2017-05-16 at 19:05 +0530, Naveen N. Rao wrote:
> > On 2017/05/16 01:49PM, Balbir Singh wrote:
> > > arch_arm/disarm_probe use direct assignment for copying
> > > instructions, replace them with patch_instruction
>
On 2017/05/25 04:57PM, Balbir Singh wrote:
> On Thu, 25 May 2017 13:36:42 +1000
> Balbir Singh wrote:
>
> > Enable STRICT_KERNEL_RWX for PPC64/BOOK3S
> >
> > These patches enable RX mappings of kernel text.
> > rodata is mapped RX as well as a trade-off, there
> > are more details in the patch d