We aren't handling subtraction involving an immediate value of
0x8000 properly. Fix the same.
Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp64.c | 16
1
y introduced in ISA v2.06. Guard use of
the same and implement an alternative approach for older processors.
Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
Reported-by: Johan Almbladh
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/pp
s based on current settings,
just like in x86. Due to this, we don't need to take any action if
mitigations are enabled or disabled at runtime.
Signed-off-by: Naveen N. Rao
---
Thanks to Daniel Borkmann and Nick Piggin for their help in putting
together this patch!
arch/powerpc/net/bpf_ji
Add a helper to return the stf_barrier type for the current processor.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/security_features.h | 5 +
arch/powerpc/kernel/security.c | 5 +
2 files changed, 10 insertions(+)
diff --git a/arch/powerpc/include/asm
Daniel Borkmann wrote:
On 9/29/21 1:18 PM, Hari Bathini wrote:
Patch #1 & #2 are simple cleanup patches. Patch #3 refactors JIT
compiler code with the aim to simplify adding BPF_PROBE_MEM support.
Patch #4 introduces PPC_RAW_BRANCH() macro instead of open coding
branch instruction. Patch #5 & #7
Hi Song,
Thanks for the reviews.
Song Liu wrote:
On Fri, Oct 1, 2021 at 2:16 PM Naveen N. Rao
wrote:
Add a helper to check if a given offset is within the branch range for a
powerpc conditional branch instruction, and update some sites to use the
new helper.
Signed-off-by: Naveen N. Rao
Hi Christophe,
Thanks for the reviews.
Christophe Leroy wrote:
On 01/10/2021 at 23:14, Naveen N. Rao wrote:
Add a helper to check if a given offset is within the branch range for a
powerpc conditional branch instruction, and update some sites to use the
new helper.
Signed-off-by: Naveen
Christophe Leroy wrote:
On 01/10/2021 at 23:14, Naveen N. Rao wrote:
Add checks to ensure that we never emit branch instructions with
truncated branch offsets.
Suggested-by: Michael Ellerman
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit.h | 26
Christophe Leroy wrote:
On 01/10/2021 at 23:14, Naveen N. Rao wrote:
From: Ravi Bangoria
SEEN_STACK is unused on PowerPC. Remove it. Also, have
SEEN_TAILCALL use 0x4000.
Why change SEEN_TAILCALL ? Would it be a problem to leave it as is ?
Signed-off-by: Ravi Bangoria
Reviewed-by
Christophe Leroy wrote:
On 01/10/2021 at 23:14, Naveen N. Rao wrote:
We aren't handling subtraction involving an immediate value of
0x8000 properly. Fix the same.
Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
Signed-off-by
Hi Johan,
Johan Almbladh wrote:
On Fri, Oct 1, 2021 at 11:15 PM Naveen N. Rao
wrote:
Various fixes to the eBPF JIT for powerpc, thanks to some new tests
added by Johan. This series fixes all failures in test_bpf on powerpc64.
There are still some failures on powerpc32 to be looked into
Christophe Leroy wrote:
On 01/10/2021 at 23:14, Naveen N. Rao wrote:
In some scenarios, it is possible that the program epilogue is outside
the branch range for a BPF_EXIT instruction. Instead of rejecting such
programs, emit an indirect branch. We track the size of the bpf program
emitted
Christophe Leroy wrote:
On 04/10/2021 at 20:11, Naveen N. Rao wrote:
Christophe Leroy wrote:
On 01/10/2021 at 23:14, Naveen N. Rao wrote:
From: Ravi Bangoria
SEEN_STACK is unused on PowerPC. Remove it. Also, have
SEEN_TAILCALL use 0x4000.
Why change SEEN_TAILCALL ? Would it be
fix issues in ppc32.
- Naveen
Naveen N. Rao (10):
powerpc/lib: Add helper to check if offset is within conditional
branch range
powerpc/bpf: Validate branch ranges
powerpc/bpf: Fix BPF_MOD when imm == 1
powerpc/bpf: Fix BPF_SUB when imm == 0x8000
powerpc/security: Add a helper
Add a helper to check if a given offset is within the branch range for a
powerpc conditional branch instruction, and update some sites to use the
new helper.
Acked-by: Song Liu
Signed-off-by: Naveen N. Rao
---
Changelog:
- Change 0x7FFF to 0x7fff, per Christophe
arch/powerpc/include/asm/code
Add checks to ensure that we never emit branch instructions with
truncated branch offsets.
Acked-by: Song Liu
Acked-by: Johan Almbladh
Tested-by: Johan Almbladh
Suggested-by: Michael Ellerman
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit.h | 26
Only ignore the operation if dividing by 1.
Acked-by: Song Liu
Acked-by: Johan Almbladh
Tested-by: Johan Almbladh
Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp64.c | 10
We aren't handling subtraction involving an immediate value of
0x8000 properly. Fix the same.
Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
Signed-off-by: Naveen N. Rao
---
Changelog:
- Split up BPF_ADD and BPF_SUB cases per Christophe
Add a helper to return the stf_barrier type for the current processor.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/security_features.h | 5 +
arch/powerpc/kernel/security.c | 5 +
2 files changed, 10 insertions(+)
diff --git a/arch/powerpc/include/asm
s based on current settings,
just like in x86. Due to this, we don't need to take any action if
mitigations are enabled or disabled at runtime.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit64.h | 8 ++---
arch/powerpc/net/bpf_jit_comp64.c | 55 ---
Correct the destination register used for ALU32 BPF_ARSH operation.
Fixes: 51c66ad849a703 ("powerpc/bpf: Implement extended BPF on PPC32")
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp32.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/p
'andi' only takes an unsigned 16-bit value. Correct the imm range used
when emitting andi.
Fixes: 51c66ad849a703 ("powerpc/bpf: Implement extended BPF on PPC32")
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp32.c | 2 +-
1 file changed, 1 insertion(+), 1 del
Special case handling of the smallest 32-bit negative number for BPF_SUB.
Fixes: 51c66ad849a703 ("powerpc/bpf: Implement extended BPF on PPC32")
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp32.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/ar
Suppress emitting zero extend instruction for 64-bit BPF_END_FROM_[L|B]E
operation.
Fixes: 51c66ad849a703 ("powerpc/bpf: Implement extended BPF on PPC32")
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp32.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --
Christophe Leroy wrote:
On 05/10/2021 at 22:25, Naveen N. Rao wrote:
We aren't handling subtraction involving an immediate value of
0x8000 properly. Fix the same.
Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
Signed-off-by
+-
1 file changed, 1 insertion(+), 1 deletion(-)
Thanks for the fix!
Reviewed-by: Naveen N. Rao
Michael Ellerman wrote:
Daniel Borkmann writes:
On 10/25/21 8:15 AM, Naveen N. Rao wrote:
Hari Bathini wrote:
Running program with bpf-to-bpf function calls results in data access
exception (0x300) with the below call trace:
[c0113f28] bpf_int_jit_compile+0x238/0x750 (unreliable
Hi Christophe,
Christophe Leroy wrote:
Hi Naveen,
Few years ago, you implemented eBPF on PPC64.
Is there any reason for implementing it for PPC64 only ?
I focused on ppc64 since eBPF is a 64-bit VM and it was more
straight-forward to target.
Is there something that makes it impossible to
Christophe Leroy wrote:
On 24/11/2020 at 17:35, Naveen N. Rao wrote:
Hi Christophe,
Christophe Leroy wrote:
Hi Naveen,
Few years ago, you implemented eBPF on PPC64.
Is there any reason for implementing it for PPC64 only ?
I focused on ppc64 since eBPF is a 64-bit VM and it was more
upstream issue since I am able to reproduce the lockup without these
patches. I will be looking into that to see if I can figure out the
cause of those lockups.
In the meantime, I would appreciate a review of these patches.
- Naveen
Naveen N. Rao (14):
ftrace: Fix updating FTRACE_FL_TRAMP
ect module is going away. This
happens because we are checking if any ftrace_ops has the
FTRACE_FL_TRAMP flag set _before_ updating the filter hash.
The fix for this is to look for any _other_ ftrace_ops that also needs
FTRACE_FL_TRAMP.
Signed-off-by: Naveen N. Rao
---
kernel/trace/ftrace.c |
DYNAMIC_FTRACE_WITH_DIRECT_CALLS should depend on
DYNAMIC_FTRACE_WITH_REGS since we need ftrace_regs_caller().
Fixes: 763e34e74bb7d5c ("ftrace: Add register_ftrace_direct()")
Signed-off-by: Naveen N. Rao
---
kernel/trace/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
We need to remove hash entry if register_ftrace_function() fails.
Consolidate the cleanup to be done after register_ftrace_function() at
the end.
Signed-off-by: Naveen N. Rao
---
kernel/trace/ftrace.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/trace/ftrace.c b
t
capture all trampolines.
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 5 ---
kernel/trace/ftrace.c | 84 ++
2 files changed, 4 insertions(+), 85 deletions(-)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 1bd3a0356ae4
Architectures may want to do some validation (such as to ensure that the
trampoline code is reachable from the provided ftrace location) before
accepting ftrace direct registration. Add helpers for the same.
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 2 ++
kernel/trace/ftrace.c
Add register_get_kernel_argument() for a rudimentary way to access
kernel function arguments.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/ptrace.h | 31 +++
2 files changed, 32 insertions(+)
diff --git a/arch
ftrace_plt_tramps[] was intended to speed up skipping plt branches, but
the code wasn't completed. It is also not significantly better than
reading and decoding the instruction. Remove the same.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 8
1 file chang
Use FTRACE_REGS_ADDR instead of keying off
CONFIG_DYNAMIC_FTRACE_WITH_REGS to identify the proper ftrace trampoline
address to use.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 12 ++--
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc
, this is not required. Drop it.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index bbe871b47ade58..c5602e9b07faa3
We currently assume that ftrace locations are patched to go to either
ftrace_caller or ftrace_regs_caller. Drop this assumption in preparation
for supporting ftrace direct calls.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 107 +++--
1 file
.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/ftrace.h | 14 ++
arch/powerpc/kernel/trace/ftrace.c| 140 +-
.../powerpc/kernel/trace/ftrace_64_mprofile.S | 40 -
4 files changed, 182
text.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 75 +-
1 file changed, 33 insertions(+), 42 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace.c
b/arch/powerpc/kernel/trace/ftrace.c
index 14b39f7797d455..7ddb6e4b527c39 100644
--- a
, and it isn't evident that the graph caller has too
deep a call stack to cause issues.
Signed-off-by: Naveen N. Rao
---
.../powerpc/kernel/trace/ftrace_64_mprofile.S | 28 +--
1 file changed, 7 insertions(+), 21 deletions(-)
diff --git a/arch/powerpc/kernel/
Add a simple powerpc trampoline to demonstrate use of ftrace direct on
powerpc.
Signed-off-by: Naveen N. Rao
---
samples/Kconfig | 2 +-
samples/ftrace/ftrace-direct-modify.c | 58 +++
samples/ftrace/ftrace-direct-too.c| 48
Steven Rostedt wrote:
On Thu, 26 Nov 2020 23:38:38 +0530
"Naveen N. Rao" wrote:
On powerpc, kprobe-direct.tc triggered FTRACE_WARN_ON() in
ftrace_get_addr_new() followed by the below message:
Bad trampoline accounting at: 4222522f (wake_up_process+0xc/0x20)
(f001)
On 2021/01/20 03:16PM, Ananth N Mavinakayanahalli wrote:
> We currently unconditionally try to newer emulate instructions on older
^ never?
Or: "emulate newer"?
> Power versions that could cause issues. Gate it.
>
> S
s on scv and addpcis
> [v2] Fixed description
>
> arch/powerpc/lib/sstep.c | 46
> --
> 1 file changed, 44 insertions(+), 2 deletions(-)
Reviewed-by: Naveen N. Rao
- Naveen
ed, we are returning 'should not be
> single-stepped' while we should have returned 0 which says
> 'did not emulate, may have to single-step'.
>
> Signed-off-by: Ananth N Mavinakayanahalli
> Tested-by: Naveen N. Rao
> ---
> arch/powerpc/lib/sstep.c | 4
On 2021/01/15 11:46AM, Ravi Bangoria wrote:
> Compiling kernel with -Warray-bounds throws below warning:
>
> In function 'emulate_vsx_store':
> warning: array subscript is above array bounds [-Warray-bounds]
> buf.d[2] = byterev_8(reg->d[1]);
> ~^~~
> buf.d[3] = byterev_8(reg->d[0]);
tead of pointer
> in the same code block.
>
> Fixes: af99da74333b ("powerpc/sstep: Support VSX vector paired storage access instructions")
> Suggested-by: Naveen N. Rao
> Signed-off-by: Ravi Bangoria
> ---
> v1:
> http://lore.kernel.org/r/20210115061620.692500
On 2021/02/03 12:08PM, Sandipan Das wrote:
> The Power ISA says that the fixed-point load and update
> instructions must neither use R0 for the base address (RA)
> nor have the destination (RT) and the base address (RA) as
> the same register. In these cases, the instruction is
> invalid. This appl
Hi Jordan,
On 2021/02/04 10:59AM, Jordan Niethe wrote:
> When adding a pte a ptesync is needed to order the update of the pte
> with subsequent accesses otherwise a spurious fault may be raised.
>
> radix__set_pte_at() does not do this for performance gains. For
> non-kernel memory this is not an
On 2021/02/04 12:44PM, Sandipan Das wrote:
> The Power ISA says that the fixed-point load and update
> instructions must neither use R0 for the base address (RA)
> nor have the destination (RT) and the base address (RA) as
> the same register. Similarly, for fixed-point stores and
> floating-point
On 2021/02/03 03:17PM, Segher Boessenkool wrote:
> On Wed, Feb 03, 2021 at 03:19:09PM +0530, Naveen N. Rao wrote:
> > On 2021/02/03 12:08PM, Sandipan Das wrote:
> > > The Power ISA says that the fixed-point load and update
> > > instructions must neither use R0 for the
t;
> Changes in v4:
> - Fixed grammar and switch-case alignment.
>
> Changes in v3:
> - Consolidated the checks as suggested by Naveen.
> - Consolidated load/store changes into a single patch.
> - Included floating-point load/store and update instructions.
>
> Changes in v2:
> - Jump to unknown_opcode instead of returning -1 for invalid
> instruction forms.
>
> ---
> arch/powerpc/lib/sstep.c | 14 ++
> 1 file changed, 14 insertions(+)
For the series:
Reviewed-by: Naveen N. Rao
- Naveen
Christophe Leroy wrote:
From: Naveen N. Rao
Trying to use a kprobe on ppc32 results in the below splat:
BUG: Unable to handle kernel data access on read at 0x7c0802a6
Faulting instruction address: 0xc002e9f0
Oops: Kernel access of bad area, sig: 11 [#1]
BE PAGE_SIZE=4K PowerPC
The first patch fixes an issue that causes a soft lockup on ppc64 with
the BPF_ATOMIC bounds propagation verifier test. The second one updates
ppc32 JIT to reject atomic operations properly.
- Naveen
Naveen N. Rao (2):
powerpc/bpf: Fix detecting BPF atomic instructions
powerpc/bpf: Reject
mic
bounds test. Fix this by looking at the correct immediate value.
Fixes: 91c960b0056672 ("bpf: Rename BPF_XADD and prepare to encode other atomics in .imm")
Reported-by: Jiri Olsa
Tested-by: Jiri Olsa
Signed-off-by: Naveen N. Rao
---
Hi Jiri,
FYI: I made a small change in this patc
d
the same time and didn't include the same change. Update the ppc32 JIT
accordingly.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp32.c | 14 +++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp32.c
b/arch/powerpc/
Alexei Starovoitov wrote:
On Thu, Jul 1, 2021 at 8:09 AM Naveen N. Rao
wrote:
Commit 91c960b0056672 ("bpf: Rename BPF_XADD and prepare to encode other
atomics in .imm") converted BPF_XADD to BPF_ATOMIC and added a way to
distinguish instructions based on the immediate field. Ex
Christophe Leroy wrote:
On 01/07/2021 at 17:08, Naveen N. Rao wrote:
Commit 91c960b0056672 ("bpf: Rename BPF_XADD and prepare to encode other
atomics in .imm") converted BPF_XADD to BPF_ATOMIC and updated all JIT
implementations to reject JIT'ing instructions with an
Introduce macros to encode the DTL enable mask fields and use those
instead of hardcoding numbers.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/lppaca.h | 11 +++
arch/powerpc/platforms/pseries/dtl.c | 8 +---
arch/powerpc/platforms/pseries/lpar.c | 2 +-
arch
there have been 6821 dispatches in the vcpu home node,
while 18 dispatches were in a different chip.
TODO:
- Consider need for adding cond_resched() in some places.
- More testing, especially on larger machines.
- Naveen
Naveen N. Rao (6):
powerpc/pseries: Use macros for referring to the DTL
don't need to save and restore the earlier mask value if
CONFIG_VIRT_CPU_ACCOUNTING_NATIVE is not enabled. So, remove the field
from the structure as well.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/platforms/pseries/dtl.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a
Introduce new helpers for DTL buffer allocation and registration and
have the existing code use those.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/plpar_wrappers.h | 2 +
arch/powerpc/platforms/pseries/lpar.c | 66 ---
arch/powerpc/platforms/pseries
: Naveen N. Rao
---
arch/powerpc/mm/book3s64/vphn.h | 8
arch/powerpc/mm/numa.c | 27 +--
2 files changed, 21 insertions(+), 14 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/vphn.h b/arch/powerpc/mm/book3s64/vphn.h
index f0b93c2dd578..f7ff1e0c3801
/accessing DTLB for all online cpus. These
helpers allow any number of per-cpu users, or a single global user
exclusively.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/plpar_wrappers.h | 2 ++
arch/powerpc/platforms/pseries/dtl.c | 10 ++-
arch/powerpc/platforms/pseries
/vcpudispatch_stats_freq.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/topology.h | 4 +
arch/powerpc/mm/numa.c| 112 +++
arch/powerpc/platforms/pseries/lpar.c | 445 +-
3 files changed, 559 insertions(+), 2 deletions(-)
diff --git a/arc
ch to ensure we don't take too much time while
enabling/disabling statistics on large systems with heavy workload.
- Patch 8/8: new patch adding a document describing the fields in the
procfs file.
- Naveen
Naveen N. Rao (8):
powerpc/pseries: Use macros for referring to the DTL e
Introduce macros to encode the DTL enable mask fields and use those
instead of hardcoding numbers.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/lppaca.h | 11 +++
arch/powerpc/platforms/pseries/dtl.c | 8 +---
arch/powerpc/platforms/pseries/lpar.c | 2 +-
arch
Introduce new helpers for DTL buffer allocation and registration and
have the existing code use those.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/plpar_wrappers.h | 2 +
arch/powerpc/platforms/pseries/lpar.c | 66 ---
arch/powerpc/platforms/pseries
don't need to save and restore the earlier mask value if
CONFIG_VIRT_CPU_ACCOUNTING_NATIVE is not enabled. So, remove the field
from the structure as well.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/platforms/pseries/dtl.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/plpar_wrappers.h | 2 +-
arch/powerpc/platforms/pseries/lpar.c | 29 ---
arch/powerpc/platforms/pseries/setup.c| 2 +-
3 files changed, 22 insertions(+), 11 deletions(-)
diff --git a/arch/powerpc/includ
/accessing DTLB for all online cpus. These
helpers allow any number of per-cpu users, or a single global user
exclusively.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/plpar_wrappers.h | 2 ++
arch/powerpc/platforms/pseries/dtl.c | 10 ++-
arch/powerpc/platforms/pseries
: Naveen N. Rao
---
arch/powerpc/mm/book3s64/vphn.h | 8
arch/powerpc/mm/numa.c | 27 +--
2 files changed, 21 insertions(+), 14 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/vphn.h b/arch/powerpc/mm/book3s64/vphn.h
index f0b93c2dd578..f7ff1e0c3801
/vcpudispatch_stats_freq.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/topology.h | 4 +
arch/powerpc/mm/numa.c| 107 +++
arch/powerpc/platforms/pseries/lpar.c | 441 +-
3 files changed, 550 insertions(+), 2 deletions(-)
diff --git a/arc
Add a document describing the fields provided by
/proc/powerpc/vcpudispatch_stats.
Signed-off-by: Naveen N. Rao
---
Documentation/powerpc/vcpudispatch_stats.txt | 68
1 file changed, 68 insertions(+)
create mode 100644 Documentation/powerpc/vcpudispatch_stats.txt
diff
Michael Ellerman wrote:
Nicholas Piggin writes:
The new mprofile-kernel mcount sequence is
mflr r0
bl _mcount
Dynamic ftrace patches the branch instruction with a noop, but leaves
the mflr. mflr is executed by the branch unit that can only execute one
per cycle on POWER9 and shared wi
Michael Ellerman wrote:
"Naveen N. Rao" writes:
Michael Ellerman wrote:
Nicholas Piggin writes:
The new mprofile-kernel mcount sequence is
mflr r0
bl _mcount
Dynamic ftrace patches the branch instruction with a noop, but leaves
the mflr. mflr is executed by the branch
Nicholas Piggin wrote:
Naveen N. Rao's on May 14, 2019 6:32 pm:
Michael Ellerman wrote:
"Naveen N. Rao" writes:
Michael Ellerman wrote:
Nicholas Piggin writes:
The new mprofile-kernel mcount sequence is
mflr r0
bl _mcount
Dynamic ftrace patches the branch instruct
Nicholas Piggin wrote:
Naveen N. Rao's on May 17, 2019 4:22 am:
While enabling ftrace, we will first need to patch the preceding 'mflr
r0' (which would now be a 'nop') with 'b +8', then use
synchronize_rcu_tasks() and finally patch in 'bl _mcount()' followed by
'mflr r0'.
I think that's wha
h 2 is a fix for x86, but has not
been tested. Patch 4 implements the changes for powerpc64.
- Naveen
Naveen N. Rao (4):
ftrace: Expose flags used for ftrace_replace_code()
x86/ftrace: Fix use of flags in ftrace_replace_code()
ftrace: Expose __ftrace_replace_code()
powerpc/ftrace:
Since ftrace_replace_code() is a __weak function and can be overridden,
we need to expose the flags that can be set. So, move the flags enum to
the header file.
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 5 +
kernel/trace/ftrace.c | 5 -
2 files changed, 5 insertions
While over-riding ftrace_replace_code(), we still want to reuse the
existing __ftrace_replace_code() function. Rename the function and
make it available for other kernel code.
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 1 +
kernel/trace/ftrace.c | 8
2 files changed, 5
existing
threads make progress, and then patch in the branch to _mcount(). We
override ftrace_replace_code() with a powerpc64 variant for this
purpose.
Signed-off-by: Nicholas Piggin
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 188 +
1 file chang
: a0572f687fb3c ("ftrace: Allow ftrace_replace_code() to be schedulable")
Signed-off-by: Naveen N. Rao
---
I haven't yet tested this patch on x86, but this looked wrong so sending
this as a RFC.
- Naveen
arch/x86/kernel/ftrace.c | 3 ++-
1 file changed, 2 insertions(+), 1 dele
teps: patch in the
mflr instruction, use synchronize_rcu_tasks() to ensure all existing
threads make progress, and then patch in the branch to _mcount(). We
override ftrace_replace_code() with a powerpc64 variant for this
purpose.
Signed-off-by: Nicholas Piggin
Signed-off-by: Naveen N. Rao
Nice! Thanks fo
Hi Steven,
Steven Rostedt wrote:
On Mon, 20 May 2019 09:13:20 -0400
Steven Rostedt wrote:
> I haven't yet tested this patch on x86, but this looked wrong so sending
> this as a RFC.
This code has been through a bit of updates, and I need to go through
and clean it up. I'll have to take a
370802 ("powerpc, hw_breakpoints: Implement hw_breakpoints for 64-bit server processors")
Reviewed-by: Naveen N. Rao
- Naveen
Paul Clarke wrote:
What are the circumstances in which raw_syscalls:sys_exit reports "-1" for the
syscall ID?
perf 5375 [007] 59632.478528: raw_syscalls:sys_enter: NR 1 (3, 9fb888, 8, 2d83740, 1, 7)
perf 5375 [007] 59632.478532: raw_syscalls:sys_exit: NR 1 = 8
perf 5375
The first patch updates DIV64 overflow tests to properly detect error
conditions. The second patch fixes powerpc64 JIT to generate the proper
unsigned division instruction for BPF_ALU64.
- Naveen
Naveen N. Rao (2):
bpf: fix div64 overflow tests to properly detect errors
powerpc/bpf: use
If the result of the division is LLONG_MIN, current tests do not detect
the error since the return value is truncated to a 32-bit value and ends
up being 0.
Signed-off-by: Naveen N. Rao
---
.../testing/selftests/bpf/verifier/div_overflow.c | 14 ++
1 file changed, 10 insertions
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/net/bpf_jit.h| 2 +-
arch/powerpc/net/bpf_jit_comp64.c | 8
3 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h
b/arch/powerpc/inclu
Hi Jiri,
Jiri Olsa wrote:
hi,
when running 'test_progs -t for_each' on powerpc we are getting
the fault below
This looks to be the same issue reported by Yauheni:
http://lkml.kernel.org/r/xunylf0o872l@redhat.com
Can you please check if the patch I posted there fixes it for you?
Thanks,
function calls are handled in ppc64.
Patches 7 and 8 were previously posted, and while patch 7 has no
changes, patch 8 has been reworked to handle BPF_EXIT differently.
- Naveen
Naveen N. Rao (13):
bpf: Guard against accessing NULL pt_regs in bpf_get_task_stack()
powerpc32/bpf: Fix codegen
oduce helper bpf_get_task_stack()")
Cc: sta...@vger.kernel.org # v5.9+
Signed-off-by: Naveen N. Rao
---
kernel/bpf/stackmap.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 6e75bbee39f0b5..0dcaed4d3f4cec 100644
--- a/
ement extended BPF on PPC32")
Cc: sta...@vger.kernel.org # v5.13+
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp32.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/powerpc/net/bpf_jit_comp32.c
b/arch/powerpc/net/bpf_jit_comp32.c
index d3a52cd42f5346..997a47fa615
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit_comp.c | 29 +++--
arch/powerpc/net/bpf_jit_comp32.c | 6 ++
arch/powerpc/net/bpf_jit_comp64.c | 7 ++-
3 files changed, 35 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/ar
:8:
note: previous definition is here
struct event {
^
This happens since 'struct event' is defined in
drivers/net/ethernet/alteon/acenic.h . Rename the one in runqslower to a
more appropriate 'runq_event' to avoid the naming conflict.
Signed-off-by: Naveen N.
pass after addrs[] is setup properly.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/net/bpf_jit.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
index b20a2a83a6e75b..9cdd33d6be4cc0 100644
--- a/arch/powerpc/net