the powerpc BPF JIT over to
PPC_BCC_SHORT() where we know the branch range.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp32.c | 8
arch/powerpc/net/bpf_jit_comp64.c | 8
2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/net
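A minimal standalone sketch of the constraint behind PPC_BCC_SHORT() (not the kernel's exact helper): a powerpc conditional branch (bc) encodes a 14-bit word displacement, so the target must be word aligned and within +/-32KB of the branch.

#include <stdbool.h>

/* Sketch only: PPC_BCC_SHORT() is safe where the JIT already knows the
 * offset satisfies the conditional-branch encoding limits. */
static bool fits_cond_branch_range(long offset)
{
        return offset >= -0x8000 && offset <= 0x7fff && !(offset & 0x3);
}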
paths.
Reported-by: Jordan Niethe
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit.h| 2 ++
arch/powerpc/net/bpf_jit_comp.c | 22 +-
arch/powerpc/net/bpf_jit_comp32.c | 7 +--
arch/powerpc/net/bpf_jit_comp64.c | 7 +--
4 files changed, 33 insertions
y introduced in ISA v2.06. Guard use of
the same and implement an alternative approach for older processors.
Acked-by: Johan Almbladh
Tested-by: Johan Almbladh
Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
Reported-by: Johan Almbladh
Signed-off-b
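A sketch of the guard pattern being described, with placeholder emit helpers and context type; cpu_has_feature()/CPU_FTR_ARCH_206 are the kernel's usual runtime test for ISA v2.06 support:

/* Illustrative only, not the JIT's real code paths. */
static void emit_op(struct jit_ctx *ctx)
{
        if (cpu_has_feature(CPU_FTR_ARCH_206))
                emit_new_isa_form(ctx);         /* instruction added in ISA v2.06 */
        else
                emit_fallback_sequence(ctx);    /* equivalent sequence for older CPUs */
}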
Instead of saving and restoring LR before each invocation of
bpf_stf_barrier(), set SEEN_FUNC flag so that we save/restore LR in
prologue/epilogue.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp64.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch
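A minimal sketch of the flag-based approach, assuming the JIT's existing SEEN_FUNC bit; the function and field names are illustrative:

static void emit_stf_barrier_fallback(struct codegen_context *ctx)
{
        ctx->seen |= SEEN_FUNC; /* prologue saves LR, epilogue restores it */
        /* ... emit the call to bpf_stf_barrier() without a local LR save ... */
}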
In preparation for preserving kernel toc in r2, switch BPF_REG_AX from
r2 to r12. r12 is not used by bpf JIT except during external helper/bpf
calls, or with BPF_NOSPEC. These sequences aren't emitted when
BPF_REG_AX is used for constant blinding and other purposes.
Signed-off-by: Naveen N
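An illustrative fragment of the mapping change (not the actual b2p[] table definition; BPF_REG_AX comes from the kernel's BPF headers):

static const int b2p_example[] = {
        /* ... other BPF registers unchanged ... */
        [BPF_REG_AX] = 12,      /* previously 2; r12 is only clobbered around calls */
};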
In preparation for using kernel TOC, load the same in r2 on entry. With
elfv1, the kernel TOC is already set up by our caller so we just emit a
nop. We adjust the number of instructions to skip on a tail call
accordingly.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp64.c | 8
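A sketch of the prologue slot being described, with placeholder emit helpers:

static void emit_kernel_toc(struct codegen_context *ctx, bool elf_v2)
{
        if (elf_v2)
                emit_load_kernel_toc(ctx);      /* e.g. load r2 from the PACA */
        else
                emit_nop(ctx);                  /* ELFv1: caller already set r2 */
}

The count of prologue instructions skipped on a tail call is then adjusted to account for this slot.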
ff-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit.h| 2 +-
arch/powerpc/net/bpf_jit_comp.c | 4 +++-
arch/powerpc/net/bpf_jit_comp32.c | 8 +--
arch/powerpc/net/bpf_jit_comp64.c | 39 ---
4 files changed, 30 insertions(+), 23 deletions(-)
diff --git a
the initial instruction setting up r2 in case of BPF function
calls, since we already have the kernel TOC setup in r2.
Reported-by: Anton Blanchard
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp64.c | 30 +-
1 file changed, 13 insertions(+), 17
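A sketch of the call-target adjustment (the offset is illustrative): with the kernel TOC already live in r2, calls to other BPF functions can branch past the callee's initial r2 setup, i.e. to its local entry point.

static unsigned long bpf_call_target(unsigned long func_addr)
{
        return func_addr + 4;   /* skip the single r2-setup instruction */
}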
Hi Christophe,
Sorry for the delay, catching up with some of the earlier emails now..
Christophe Leroy wrote:
Hi Naveen,
On 16/10/2018 at 22:25, Naveen N. Rao wrote:
...
+/*
+ * If this is a compiler generated long_branch trampoline (essentially, a
+ * trampoline that has a branch to
Christophe Leroy wrote:
On 04/10/2021 at 20:24, Naveen N. Rao wrote:
Christophe Leroy wrote:
On 01/10/2021 at 23:14, Naveen N. Rao wrote:
In some scenarios, it is possible that the program epilogue is outside
the branch range for a BPF_EXIT instruction. Instead of rejecting such
Christophe Leroy wrote:
When the BPF routine doesn't call any function, the non-volatile
registers can be reallocated to volatile registers in order to
avoid having to save/restore them on the stack.
Before this patch, the test #359 ADD default X is:
0: 7c 64 1b 78 mr r4,r3
4:
this:
Acked-by: Naveen N. Rao
Thanks,
Naveen
Jordan Niethe wrote:
Prepare for doing commit 40272035e1d0 ("powerpc/bpf: Reallocate BPF
registers to volatile registers when possible on PPC32") on PPC64 in a
later patch. Instead of directly accessing the const b2p[] array for
mapping bpf to ppc registers, use bpf_to_ppc(), which allows per struc
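A sketch of the accessor pattern (types and sizes illustrative): register lookups go through a helper that takes the codegen context instead of indexing a global const array, so the mapping can later vary per program.

struct ctx_example {
        int b2p[16];                    /* per-context bpf-to-ppc register map */
};

static int bpf_to_ppc_example(struct ctx_example *ctx, int bpf_reg)
{
        return ctx->b2p[bpf_reg];
}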
Christophe Leroy wrote:
On 27/07/2021 at 08:55, Jordan Niethe wrote:
Implement commit 40272035e1d0 ("powerpc/bpf: Reallocate BPF registers to
volatile registers when possible on PPC32") for PPC64.
When the BPF routine doesn't call any function, the non-volatile
registers can be reallocated
s for security_ftr_enabled() and related
helpers when the config option is not enabled.
Reported-by: kernel test robot
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/security_features.h | 15 +++
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/in
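The usual shape of such stubs, sketched here with a placeholder config symbol: when the option is off, a no-op static inline keeps callers compiling unchanged.

#ifdef CONFIG_SECURITY_FEATURE_OPTION           /* placeholder name */
bool security_ftr_enabled(u64 feature);
#else
static inline bool security_ftr_enabled(u64 feature) { return false; }
#endif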
Christophe Leroy wrote:
On 06/01/2022 at 12:45, Naveen N. Rao wrote:
task_pt_regs() can return NULL on powerpc for kernel threads. This is
then used in __bpf_get_stack() to check for user mode, resulting in a
kernel oops. Guard against this by checking return value of
task_pt_regs() before
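A sketch of the guard (error value illustrative, not the exact upstream hunk):

struct pt_regs *regs = task_pt_regs(task);

if (!regs)
        return -EFAULT;         /* kernel thread: no user register frame to inspect */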
Christophe Leroy wrote:
On 06/01/2022 at 12:45, Naveen N. Rao wrote:
Pad instructions emitted for BPF_CALL so that the number of instructions
generated does not change for different function addresses. This is
especially important for calls to other bpf functions, whose address
will only be
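A sketch of the padding idea (PPC_RAW_NOP() is an existing powerpc macro; the emit helper and constant are illustrative): materialising a 64-bit address can take a varying number of instructions, so padding to the worst case keeps the program size identical across passes.

#define FUNC_ADDR_MAX_INSNS     5

static int emit_func_addr_padded(u32 *image, int idx, u64 addr)
{
        int start = idx;

        idx = emit_load_imm64(image, idx, addr);        /* 1..5 instructions */
        while (idx - start < FUNC_ADDR_MAX_INSNS)
                image[idx++] = PPC_RAW_NOP();           /* pad to a fixed size */
        return idx;
}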
Christophe Leroy wrote:
On 06/01/2022 at 12:45, Naveen N. Rao wrote:
These instructions are updated after the initial JIT, so redo codegen
during the extra pass. Rename bpf_jit_fixup_subprog_calls() to clarify
that this is more than just subprog calls.
Fixes: 69c087ba6225b5 ("bpf
Christophe Leroy wrote:
On 06/01/2022 at 12:45, Naveen N. Rao wrote:
In preparation for using kernel TOC, load the same in r2 on entry. With
elfv1, the kernel TOC is already set up by our caller so we just emit a
nop. We adjust the number of instructions to skip on a tail call
accordingly
cated regardless of whether SEEN_FUNC is set or not.
Suggested-by: Naveen N. Rao
Signed-off-by: Christophe Leroy
---
arch/powerpc/net/bpf_jit.h| 3 ---
arch/powerpc/net/bpf_jit_comp32.c | 14 +++---
2 files changed, 11 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/net/bpf_
Christophe Leroy wrote:
On 14/01/2022 at 08:58, Naveen N. Rao wrote:
Christophe Leroy wrote:
BPF_REG_5, BPF_REG_AX and TMP_REG are mapped to non-volatile registers
because there are not enough volatile registers, but they don't need
to be preserved across function calls.
So when some vol
Christophe Leroy wrote:
On 11/01/2022 at 15:35, Christophe Leroy wrote:
On 11/01/2022 at 11:31, Naveen N. Rao wrote:
Christophe Leroy wrote:
On 06/01/2022 at 12:45, Naveen N. Rao wrote:
In preparation for using kernel TOC, load the same in r2 on entry. With
elfv1, the kernel TOC
endif
-
Given that we should not be probing syscall instructions, I think it is
better to return -1 for these two, similar to the RFI below. With that
change, for this patch:
Acked-by: Naveen N. Rao
case RFI:
return -1;
#endif
Thanks,
Naveen
--
[PATCH] powerpc/upro
[Sorry if you receive this in duplicate. Resending since this message
didn't hit the list]
On 2022-01-25 11:23, Christophe Leroy wrote:
On 25/01/2022 at 04:04, Nicholas Piggin wrote:
+Naveen (sorry missed cc'ing you at first)
Excerpts from Christophe Leroy's message of January 24, 2022 4:3
On 2022-01-27 13:09, Nicholas Piggin wrote:
Excerpts from naverao1's message of January 25, 2022 8:48 pm:
On 2022-01-25 11:23, Christophe Leroy wrote:
On 25/01/2022 at 04:04, Nicholas Piggin wrote:
+Naveen (sorry missed cc'ing you at first)
Excerpts from Christophe Leroy's message of Januar
Yes, let me come up with a better, more complete patch for this.
Signed-off-by: Naveen N. Rao
[np: Switch to pr_info_ratelimited]
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kernel/uprobes.c | 6 ++
2 files changed, 7 inserti
ds on some of the other BPF JIT fixes and enhancements
posted previously, as well as on ftrace direct enablement on powerpc
which has also been posted in the past.
- Naveen
Naveen N. Rao (3):
ftrace: Add ftrace_location_lookup() to lookup address of ftrace
location
powerpc/ftrace: Ove
location and to return the exact address of the
same.
Convert some uses of ftrace_location() in BPF infrastructure to the new
function.
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 5 +
kernel/bpf/trampoline.c | 27 +--
kernel/trace/ftrace.c | 14
With CONFIG_MPROFILE_KERNEL, ftrace location is within the first 5
instructions of a function. Override ftrace_location_lookup() to search
within this range for the ftrace location.
Also convert kprobe_lookup_name() to utilize this function.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel
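A sketch of what the override amounts to, using the existing ftrace_location_range() API; the exact window follows the description above:

static unsigned long ppc_ftrace_location_lookup(unsigned long addr)
{
        /* the mcount call lies in the first 5 instructions of the function */
        return ftrace_location_range(addr, addr + 4 * MCOUNT_INSN_SIZE);
}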
the powerpc64 -mprofile-kernel ABI since these
need to attach to ftrace locations using ftrace direct attach. Due to
this, bpf_arch_text_poke() patches two instructions: 'mflr r0' and 'bl'
for BPF_MOD_CALL. The trampoline code itself closely follows the x86
implementation.
Sig
Steven Rostedt wrote:
On Mon, 7 Feb 2022 12:37:21 +0530
"Naveen N. Rao" wrote:
--- a/arch/powerpc/kernel/trace/ftrace.c
+++ b/arch/powerpc/kernel/trace/ftrace.c
@@ -1137,3 +1137,14 @@ char *arch_ftrace_match_adjust(char *str, const char *search)
return str;
Steven Rostedt wrote:
On Wed, 09 Feb 2022 17:50:09 +
"Naveen N. Rao" wrote:
However, I think we will not be able to use a fixed range. I would like
to reserve instructions from function entry till the branch to
_mcount(), and it can be two or four instructions depending on
Steven Rostedt wrote:
On Thu, 10 Feb 2022 13:58:29 +
"Naveen N. Rao" wrote:
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index f9feb197b2daaf..68f20cf34b0c47 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1510,6 +1510,7 @@ ftrace_ops_t
Steven Rostedt wrote:
On Thu, 10 Feb 2022 16:40:28 +
"Naveen N. Rao" wrote:
The other option is to mark ftrace_cmp_recs() as a __weak function, but
I have a vague recollection of you suggesting #ifdef rather than a
__weak function in the past. I might be mis-remembering, so if
Alexey Kardashevskiy wrote:
Disables CONFIG_FTRACE_MCOUNT_USE_RECORDMCOUNT as CONFIG_HAS_LTO_CLANG
depends on it being disabled. In order to avoid disabling way too many
options (like DYNAMIC_FTRACE/FUNCTION_TRACER), this converts
FTRACE_MCOUNT_USE_RECORDMCOUNT from def_bool to bool.
+CONFI
the toc load for elf v2.
Patches 10-17 are new to this series and are largely some cleanups to
the bpf code on powerpc.
- Naveen
Jordan Niethe (1):
powerpc64/bpf: Store temp registers' bpf to ppc mapping
Naveen N. Rao (16):
powerpc/bpf: Skip branch range validation during first pass
pass after addrs[] is setup properly.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
index b20a2a83a6e75b..9cdd33d6be4cc0 100644
--- a/arch/powerpc/net
the powerpc BPF JIT over to
PPC_BCC_SHORT() where we know the branch range.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp32.c | 8
arch/powerpc/net/bpf_jit_comp64.c | 8
2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/net
Instead of saving and restoring LR before each invocation of
bpf_stf_barrier(), set SEEN_FUNC flag so that we save/restore LR in
prologue/epilogue.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp64.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch
In preparation for preserving kernel toc in r2, switch BPF_REG_AX from
r2 to r12. r12 is not used by bpf JIT except during external helper/bpf
calls, or with BPF_NOSPEC. These sequences aren't emitted when
BPF_REG_AX is used for constant blinding and other purposes.
Signed-off-by: Naveen N
Set macros to 1 so that they can be used with __is_defined().
Suggested-by: Christophe Leroy
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/types.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/types.h b/arch/powerpc/include/asm
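The reason the value matters, sketched with one of the asm/types.h macros as an example: __is_defined() (from <linux/kconfig.h>) only evaluates to 1 when the macro expands to exactly 1, so a bare define cannot be tested with it.

#define PPC64_ELF_ABI_v2 1              /* was: #define PPC64_ELF_ABI_v2 */

/* usage: if (__is_defined(PPC64_ELF_ABI_v2)) ... instead of #ifdef */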
#ifdef.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp64.c | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c
b/arch/powerpc/net/bpf_jit_comp64.c
index 27ac2fc7670298..44314ee60155e4 100644
--- a/arch/powerpc/net
ff-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit.h| 2 +-
arch/powerpc/net/bpf_jit_comp.c | 4 +++-
arch/powerpc/net/bpf_jit_comp32.c | 8 +--
arch/powerpc/net/bpf_jit_comp64.c | 39 ---
4 files changed, 30 insertions(+), 23 deletions(-)
diff --git a
-by: Anton Blanchard
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp64.c | 30 +-
1 file changed, 13 insertions(+), 17 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c
b/arch/powerpc/net/bpf_jit_comp64.c
index e9fd4694226fe0..bff200723e7282
PPC_BL_ABS() is just doing a relative branch with link. The name
suggests that it is for branching to an absolute address, which is
incorrect. Rename the macro to a more appropriate PPC_BL().
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit.h| 6 +++---
arch/powerpc/net
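An illustrative encoding of such a macro (the upstream definition may differ): 'bl' takes a PC-relative 26-bit displacement with LK=1, i.e. it is a relative branch with link rather than a branch to an absolute address.

#define EX_PPC_BL(dest, pc)     (0x48000001 | (((dest) - (pc)) & 0x03fffffc))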
PPC_BPF_[LL|STL] are macros meant for scenarios where we may have to
deal with a non-word aligned offset. Limit their usage to only those
scenarios by converting the rest to just use PPC_BPF_[LD|STD].
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp64.c | 22
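A standalone sketch of the constraint behind the split: ld/std are DS-form instructions whose 16-bit displacement must have the low two bits clear, so only offsets satisfying this can use the plain PPC_BPF_[LD|STD] forms.

#include <stdbool.h>

static bool fits_ds_form(long off)
{
        return !(off & 0x3) && off >= -0x8000 && off <= 0x7ffc;
}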
paths.
Reported-by: Jordan Niethe
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit.h| 2 ++
arch/powerpc/net/bpf_jit_comp.c | 22 +-
arch/powerpc/net/bpf_jit_comp32.c | 7 +--
arch/powerpc/net/bpf_jit_comp64.c | 7 +--
4 files changed, 33 insertions
All these macros now have a single user. Expand their usage in place.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit64.h | 22 --
arch/powerpc/net/bpf_jit_comp64.c | 21 +++--
2 files changed, 15 insertions(+), 28 deletions(-)
diff --git a
- PPC_EX32() is only used by ppc32 JIT. Move it to bpf_jit_comp32.c
- PPC_LI64() is only valid in ppc64. #ifdef it
- PPC_FUNC_ADDR() is not used anymore. Remove it.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit.h| 10 +-
arch/powerpc/net/bpf_jit_comp32.c | 2 ++
2
Use _Rn macros to specify register names to make their usage clear.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp32.c | 30 +++---
arch/powerpc/net/bpf_jit_comp64.c | 68 +++
2 files changed, 49 insertions(+), 49 deletions(-)
diff --git a
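An illustration of the readability change (PPC_RAW_MR() and the _Rn register macros are existing powerpc definitions; EMIT() is the JIT's emit helper):

EMIT(PPC_RAW_MR(_R4, _R3));     /* was: EMIT(PPC_RAW_MR(4, 3)); */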
There is no need for a separate header anymore. Move the contents of
bpf_jit64.h into bpf_jit_comp64.c
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit64.h | 69 ---
arch/powerpc/net/bpf_jit_comp64.c | 54 +++-
2 files changed, 53
usage sites]
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp64.c | 197 +-
1 file changed, 86 insertions(+), 111 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c
b/arch/powerpc/net/bpf_jit_comp64.c
index ac06efa7022379..b4de0c35c8a4ab 100644
Convert bpf_to_ppc() to a macro to help simplify its usage since
codegen_context is available in all places it is used. Adopt it also for
powerpc64 for uniformity and get rid of the global b2p structure.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit.h| 11 ++--
arch/powerpc
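A sketch of the macro form, assuming 'ctx' is in scope at every call site (illustrative, not the exact upstream definition):

#define bpf_to_ppc(r)   (ctx->b2p[r])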
Christophe Leroy wrote:
On 07/02/2022 at 08:07, Naveen N. Rao wrote:
This is an early RFC series that adds support for BPF Trampolines on
powerpc64. Some of the selftests are passing for me, but this needs more
testing and I've likely missed a few things as well. A review of the
patche
Hi Christophe,
Thanks for your work enabling DYNAMIC_FTRACE_WITH_ARGS on powerpc. Sorry
for the late review on this series, but I have a few comments below.
Christophe Leroy wrote:
In order to implement CONFIG_DYNAMIC_FTRACE_WITH_ARGS, change ftrace_caller()
to handle LIVEPATCH the same way a
Christophe Leroy wrote:
Implement CONFIG_DYNAMIC_FTRACE_WITH_ARGS. It accelerates calls into
livepatching.
Also note that, since powerpc is the last architecture to convert to
CONFIG_DYNAMIC_FTRACE_WITH_ARGS, it will now be possible to remove
klp_arch_set_pc() on all architectures.
Signed-off-by: Christoph
Christophe Leroy wrote:
Modify function graph tracer to be handled directly by the standard
ftrace caller.
This is made possible as powerpc now supports
CONFIG_DYNAMIC_FTRACE_WITH_ARGS.
This change simplifies the call of function graph ftrace.
Signed-off-by: Christophe Leroy
---
arch/powerpc
Christophe Leroy wrote:
The PPC64 mprofile version and the PPC32 version are very similar.
Modify the PPC64 version so that it can be reused for PPC32.
Signed-off-by: Christophe Leroy
---
.../powerpc/kernel/trace/ftrace_64_mprofile.S | 73 +--
1 file changed, 51 insertions(+), 22 deletions(-)
Michael Ellerman wrote:
Christophe Leroy writes:
On 14/02/2022 at 16:25, Naveen N. Rao wrote:
Christophe Leroy wrote:
Implement CONFIG_DYNAMIC_FTRACE_WITH_ARGS. It accelerates calls into
livepatching.
Also note that, since powerpc is the last architecture to convert to
CONFIG_DYNAMIC_FTRACE_WITH_ARGS
Christophe Leroy wrote:
+ S390 people
On 15/02/2022 at 15:28, Christophe Leroy wrote:
On 15/02/2022 at 14:36, Naveen N. Rao wrote:
Michael Ellerman wrote:
Christophe Leroy writes:
On 14/02/2022 at 16:25, Naveen N. Rao wrote:
Christophe Leroy wrote:
Implement
Steven Rostedt wrote:
On Tue, 15 Feb 2022 19:06:48 +0530
"Naveen N. Rao" wrote:
As I understand it, the reason ftrace_get_regs() was introduced was to
be able to only return the pt_regs, if _all_ registers were saved into
it, which we don't do when coming in through ftrace_ca
Christophe Leroy wrote:
Add some line breaks to better match the file's style, add
spaces after commas and fix a couple of misplaced blanks.
Suggested-by: Naveen N. Rao
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/trace/ftrace_mprofile.S | 12
1 file chang
his information is used in ftrace_cmp_recs() to
reserve instructions from the global entry point.
Suggested-by: Steven Rostedt
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/ftrace.h | 15
arch/powerpc/kernel/trace/ftrace.c | 110 ++---
kernel/trace/ftrac
On some architectures, ftrace location can include multiple
instructions, and does not necessarily match the function entry address
returned by kallsyms_lookup(). Drop the check in is_ftrace_location() to
accommodate the same.
Signed-off-by: Naveen N. Rao
---
kernel/bpf/trampoline.c | 2 --
1
: Naveen N. Rao
---
kernel/kprobes.c | 12
1 file changed, 12 insertions(+)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 94cab8c9ce56cc..0a797ede3fdf37 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1497,6 +1497,10 @@ bool within_kprobe_blacklist(unsigned long addr
Previously discussed here:
https://lore.kernel.org/20220207102454.41b1d...@gandalf.local.home
- Naveen
Naveen N. Rao (3):
powerpc/ftrace: Reserve instructions from function entry for ftrace
bpf/trampoline: Allow ftrace location to differ from trampoline attach
address
kprobes: Allow
Naveen N. Rao wrote:
On certain architectures, ftrace can reserve multiple instructions at
function entry. Rather than rejecting kprobe on addresses other than the
exact ftrace call instruction, use the address returned by ftrace to
probe at the correct address when CONFIG_KPROBES_ON_FTRACE is
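A sketch of the adjustment (ftrace_location() is the real API; the surrounding logic is simplified):

unsigned long faddr = ftrace_location(addr);

if (faddr && faddr != addr)
        addr = faddr;           /* place the probe on the actual ftrace call site */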
On 2017/03/04 09:49AM, Masami Hiramatsu wrote:
> On Thu, 2 Mar 2017 23:25:06 +0530
> "Naveen N. Rao" wrote:
>
> > We indicate support for accepting sym+offset with kretprobes through a
> > line in ftrace README. Parse the same to identify support and choose
On 2017/03/04 01:34PM, Masami Hiramatsu wrote:
> On Sat, 4 Mar 2017 11:35:51 +0900
> Masami Hiramatsu wrote:
>
> > On Sat, 4 Mar 2017 09:49:11 +0900
> > Masami Hiramatsu wrote:
> >
> > > On Thu, 2 Mar 2017 23:25:06 +0530
> > > "Naveen N. R
kernel/debug/kprobes/list
c0041370 k kretprobe_trampoline+0x0[OPTIMIZED]
c04ba0b8 r do_open+0x8[DISABLED]
c0443430 r do_open+0x0[DISABLED]
Signed-off-by: Naveen N. Rao
---
include/linux/kprobes.h | 1 +
kernel/kprob
On 2017/02/08 01:24AM, Naveen N Rao wrote:
> ... as the weak variant will do.
>
> Signed-off-by: Naveen N. Rao
> ---
> arch/arm/probes/kprobes/core.c | 10 --
> arch/arm64/kernel/probes/kprobes.c | 6 --
> 2 files changed, 16 deletions(-)
With the ge
On 2017/03/06 10:06PM, Masami Hiramatsu wrote:
> On Mon, 6 Mar 2017 23:19:09 +0530
> "Naveen N. Rao" wrote:
>
> > Masami,
> > Your patch works, thanks! However, I felt we could refactor and reuse
> > some of the code across kprobes.c for this purpose. Can
]
c04433d0 r do_open+0x0[DISABLED]
c04ba058 r do_open+0x8[DISABLED]
Acked-by: Masami Hiramatsu
Signed-off-by: Naveen N. Rao
---
tools/perf/util/probe-event.c | 12 +---
tools/perf/util/probe-file.c | 7 +++
tools/perf/util/probe-file.h | 1 +
asami Hiramatsu
Signed-off-by: Naveen N. Rao
---
tools/perf/arch/powerpc/util/sym-handling.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/tools/perf/arch/powerpc/util/sym-handling.c
b/tools/perf/arch/powerpc/util/sym-handling.c
index 1030a6e504bb..e93b3db25012 1
probe-file.c needs libelf, but scanning ftrace README does not require
that. As such, move the ftrace README scanning logic out of probe-file.c
and into trace-event-parse.c.
Signed-off-by: Naveen N. Rao
---
tools/perf/util/probe-file.c| 87 +++-
tools
With ABIv2, we offset 8 bytes into a function to get at the local entry
point.
Acked-by: Ananth N Mavinakayanahalli
Acked-by: Michael Ellerman
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 9 +
1 file changed, 9 insertions(+)
diff --git a/arch/powerpc/kernel
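Roughly what the override amounts to (treat the details as illustrative of the patch's intent): with ELF ABIv2, an offset of up to 8 bytes from the global entry still counts as function entry, since that is where the local entry point sits.

bool arch_kprobe_on_func_entry(unsigned long offset)
{
#ifdef PPC64_ELF_ABI_v2
        return offset <= 8;     /* global or local entry point */
#else
        return offset == 0;
#endif
}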
Signed-off-by: Naveen N. Rao
---
tools/perf/util/probe-file.c | 70 +++-
1 file changed, 37 insertions(+), 33 deletions(-)
diff --git a/tools/perf/util/probe-file.c b/tools/perf/util/probe-file.c
index 1a62daceb028..8a219cd831b7 100644
--- a/tools/perf
nux/tools/perf$ sudo cat /sys/kernel/debug/kprobes/list
c0041370 k kretprobe_trampoline+0x0[OPTIMIZED]
c04ba0b8 r do_open+0x8[DISABLED]
c0443430 r do_open+0x0[DISABLED]
Acked-by: Masami Hiramatsu
Signed-off-by: Naveen N. Rao
---
include/linu
On 2017/03/07 03:03PM, Masami Hiramatsu wrote:
> On Tue, 7 Mar 2017 16:17:40 +0530
> "Naveen N. Rao" wrote:
>
> > probe-file.c needs libelf, but scanning ftrace README does not require
> > that. As such, move the ftrace README scanning logic out of probe-file.c
&
On 2017/03/07 04:51PM, Masami Hiramatsu wrote:
> On Tue, 7 Mar 2017 16:17:40 +0530
> "Naveen N. Rao" wrote:
>
> > probe-file.c needs libelf, but scanning ftrace README does not require
> > that. As such, move the ftrace README scanning logic out of probe-file.c
&
aveen N. Rao
---
Changes:
- updated to address build issues due to dropping patch 5/6.
tools/perf/arch/powerpc/util/sym-handling.c | 14 ++
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/tools/perf/arch/powerpc/util/sym-handling.c
b/tools/perf/arch/powerpc/uti
_mem
naveen@ubuntu:~/linux/tools/perf$ sudo cat /sys/kernel/debug/kprobes/list
c05f3b48 k read_mem+0x8[DISABLED]
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c| 56 +--
arch/powerpc/lib/code-patching.c | 4 +-
arch/powerpc/lib/
On 2017/03/07 03:47PM, Steven Rostedt wrote:
>
> Please start a new thread. When sending patches as replies to other
> patch threads, especially this deep into the thread, they will most
> likely get ignored.
Sorry, got carried off. I will re-post in a new series.
- Naveen
linux-kernel@vger.kernel.org/msg1347013.html
--
Naveen N. Rao (5):
trace/kprobes: fix check for kretprobe offset within function entry
powerpc: kretprobes: override default function entry offset
perf: probe: factor out the ftrace README scanning
perf: kretprobes: offset from reloc_sym if k
nux/tools/perf$ sudo cat /sys/kernel/debug/kprobes/list
c0041370 k kretprobe_trampoline+0x0[OPTIMIZED]
c04ba0b8 r do_open+0x8[DISABLED]
c0443430 r do_open+0x0[DISABLED]
Acked-by: Masami Hiramatsu
Signed-off-by: Naveen N. Rao
---
include/linu
With ABIv2, we offset 8 bytes into a function to get at the local entry
point.
Acked-by: Ananth N Mavinakayanahalli
Acked-by: Michael Ellerman
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes.c | 9 +
1 file changed, 9 insertions(+)
diff --git a/arch/powerpc/kernel
Signed-off-by: Naveen N. Rao
---
tools/perf/util/probe-file.c | 70 +++-
1 file changed, 37 insertions(+), 33 deletions(-)
diff --git a/tools/perf/util/probe-file.c b/tools/perf/util/probe-file.c
index 1a62daceb028..8a219cd831b7 100644
--- a/tools/perf
aveen N. Rao
---
tools/perf/arch/powerpc/util/sym-handling.c | 14 ++
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/tools/perf/arch/powerpc/util/sym-handling.c
b/tools/perf/arch/powerpc/util/sym-handling.c
index 1030a6e504bb..39dbe512b9fc 100644
--- a/tools/perf
]
c04433d0 r do_open+0x0[DISABLED]
c04ba058 r do_open+0x8[DISABLED]
Acked-by: Masami Hiramatsu
Signed-off-by: Naveen N. Rao
---
tools/perf/util/probe-event.c | 12 +---
tools/perf/util/probe-file.c | 7 +++
tools/perf/util/probe-file.h | 1 +
On 2017/03/08 09:42AM, Masami Hiramatsu wrote:
> On Wed, 8 Mar 2017 02:09:29 +0530
> "Naveen N. Rao" wrote:
>
> > Along similar lines as commit 9326638cbee2 ("kprobes, x86: Use
> > NOKPROBE_SYMBOL() instead of __kprobes annotation"), convert __kprobes
On 2017/03/08 11:31AM, Masami Hiramatsu wrote:
> On Wed, 8 Mar 2017 13:56:10 +0530
> "Naveen N. Rao" wrote:
>
> > perf now uses an offset from _text/_stext for kretprobes if the kernel
> > supports it, rather than the actual function name. As such, let's cho
Hi Michael,
On 2017/03/08 09:43PM, Michael Ellerman wrote:
> "Naveen N. Rao" writes:
>
> > With ABIv2, we offset 8 bytes into a function to get at the local entry
> > point.
> >
> > Acked-by: Ananth N Mavinakayanahalli
> > Acked-by: Mich
On 2017/03/08 11:29AM, Arnaldo Carvalho de Melo wrote:
> Em Wed, Mar 08, 2017 at 07:54:12PM +0530, Naveen N. Rao escreveu:
> > Hi Michael,
> >
> > On 2017/03/08 09:43PM, Michael Ellerman wrote:
> > > "Naveen N. Rao" writes:
> > >
> > >
On 2017/03/09 05:37PM, Michael Ellerman wrote:
> "Naveen N. Rao" writes:
> > On 2017/03/08 11:29AM, Arnaldo Carvalho de Melo wrote:
> >> > I wasn't sure if you were planning on picking up KPROBES_ON_FTRACE for
> >> > v4.11. If so, it woul
On 2017/03/02 08:38PM, Michael Ellerman wrote:
> Steven Rostedt writes:
>
> > On Tue, 28 Feb 2017 15:04:15 +1100
> > Michael Ellerman wrote:
> >
> > kernel/trace/ftrace.c more obvious.
> >>
> >> I don't know if it's really worth keeping the names the same across
> >> arches, especially as we al
On 2017/03/10 10:45AM, Steven Rostedt wrote:
> On Thu, 02 Mar 2017 20:38:53 +1100
> Michael Ellerman wrote:
>
> > Steven Rostedt writes:
> >
> > > On Tue, 28 Feb 2017 15:04:15 +1100
> > > Michael Ellerman wrote:
> > >
> > > kernel/trace/ftrace.c more obvious.
> > >>
> > >> I don't know if i
On 2017/03/10 11:54AM, Steven Rostedt wrote:
> On Fri, 10 Mar 2017 21:38:53 +0530
> "Naveen N. Rao" wrote:
>
> > On 2017/03/10 10:45AM, Steven Rostedt wrote:
> > > On Thu, 02 Mar 2017 20:38:53 +1100
> > > Michael Ellerman wrote:
>
> > >
On 2017/03/14 10:18AM, Arnaldo Carvalho de Melo wrote:
> Em Thu, Mar 09, 2017 at 05:37:38PM +1100, Michael Ellerman escreveu:
> > "Naveen N. Rao" writes:
> > > On 2017/03/08 11:29AM, Arnaldo Carvalho de Melo wrote:
> > >> > I wasn't sure if you w
On 2017/03/15 09:11AM, Steven Rostedt wrote:
> On Wed, 15 Mar 2017 14:35:16 +0530
> "Naveen N. Rao" wrote:
>
> > I don't have a strong opinion about this, but I feel that x86 can simply
> > use ftrace_64.S, seeing as the current name is mcount_64.S.
&
h one
other user in bpf. It is obviously non-critical, but given that we have
64K pages on powerpc64, it does help to speed up the BPF JIT.
- Naveen
Naveen N. Rao (2):
powerpc: string: implement optimized memset variants
powerpc: bpf: use memset32() to pre-fill traps in BPF page(s)
arch/powerpc/
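A sketch of the fill helper (memset32() and BREAKPOINT_INSTRUCTION are existing kernel definitions; the function shape is illustrative): the JIT image is pre-filled with 4-byte trap instructions one word at a time rather than byte-wise, which is noticeable with 64K pages.

static void bpf_fill_ill_insns_example(void *area, unsigned int size)
{
        memset32(area, BREAKPOINT_INSTRUCTION, size / 4);
}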