Alexei Starovoitov writes:
> On Tue, Jul 16, 2019 at 09:50:25AM +0100, Jiong Wang wrote:
>>
>> Let me digest a little bit and do some coding, then I will come back. Some
>> issues can only show up during in-depth coding. I kind of feel handling
>> aux reference in
Andrii Nakryiko writes:
> On Mon, Jul 15, 2019 at 2:21 AM Jiong Wang wrote:
>>
>>
>> Andrii Nakryiko writes:
>>
>> > On Thu, Jul 11, 2019 at 4:22 AM Jiong Wang
>> > wrote:
>> >>
>> >>
>> >> Andrii Nakryik
Andrii Nakryiko writes:
> On Mon, Jul 15, 2019 at 3:02 AM Jiong Wang wrote:
>>
>>
>> Andrii Nakryiko writes:
>>
>> > On Thu, Jul 11, 2019 at 5:20 AM Jiong Wang
>> > wrote:
>> >>
>> >>
>> >> Jiong Wang writes:
>>
Andrii Nakryiko writes:
> On Thu, Jul 11, 2019 at 5:20 AM Jiong Wang wrote:
>>
>>
>> Jiong Wang writes:
>>
>> > Andrii Nakryiko writes:
>> >
>> >> On Thu, Jul 4, 2019 at 2:32 PM Jiong Wang
>> >> wrote:
>>
Andrii Nakryiko writes:
> On Thu, Jul 11, 2019 at 4:53 AM Jiong Wang wrote:
>>
>>
>> Andrii Nakryiko writes:
>>
>> > On Thu, Jul 4, 2019 at 2:32 PM Jiong Wang wrote:
>> >>
>> >> This patch introduces list based bpf insn patching in
Andrii Nakryiko writes:
> On Thu, Jul 11, 2019 at 4:22 AM Jiong Wang wrote:
>>
>>
>> Andrii Nakryiko writes:
>>
>> > On Thu, Jul 4, 2019 at 2:31 PM Jiong Wang wrote:
>> >>
>> >> This is an RFC based on latest bpf-next about accler
Jiong Wang writes:
> Andrii Nakryiko writes:
>
>> On Thu, Jul 4, 2019 at 2:32 PM Jiong Wang wrote:
>>>
>>> The verification layer also needs to handle auxiliary info as well as adjust
>>> subprog starts.
>>>
>>> At this layer, insns inside
Andrii Nakryiko writes:
> On Thu, Jul 4, 2019 at 2:32 PM Jiong Wang wrote:
>>
>> The verification layer also needs to handle auxiliary info as well as adjust
>> subprog starts.
>>
>> At this layer, insns inside the patch buffer could be jumps, but they should
>>
Andrii Nakryiko writes:
> On Thu, Jul 4, 2019 at 2:32 PM Jiong Wang wrote:
>>
>> This patch introduces list-based bpf insn patching infra to the bpf core layer,
>> which is lower than the verification layer.
>>
>> This layer has the bpf insn sequence as the sole input, ther
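The core idea of the list-based infra described in this RFC is to stop shifting a flat insn array on every patch: insns live on a list, a patch buffer is spliced in as extra nodes, and only a final linearization pass flattens everything back into an array. A small standalone model of that idea (structure and function names are illustrative, not the RFC's actual API):

/*
 * Standalone model of list-based insn patching: insns live on a singly
 * linked list, so splicing a patch in never shifts the indices of the
 * existing nodes; a final linearization pass writes the list back into
 * a flat array.  Illustrative names only, not the RFC's real API.
 */
#include <stdio.h>
#include <stdlib.h>

struct list_insn {
	int code;			/* stand-in for a struct bpf_insn */
	struct list_insn *next;
};

/* Insert @cnt new insns right after @pos; nothing else is touched. */
static void patch_after(struct list_insn *pos, const int *codes, int cnt)
{
	int i;

	for (i = cnt - 1; i >= 0; i--) {
		struct list_insn *n = malloc(sizeof(*n));

		n->code = codes[i];
		n->next = pos->next;
		pos->next = n;
	}
}

/* Final pass: copy the list into a flat array, return the new length. */
static int linearize(struct list_insn *head, int *out)
{
	int idx = 0;

	for (; head; head = head->next)
		out[idx++] = head->code;
	return idx;
}

int main(void)
{
	struct list_insn i2 = { 2, NULL }, i1 = { 1, &i2 }, i0 = { 0, &i1 };
	int buf[] = { 100, 101 }, out[8], n, i;

	patch_after(&i1, buf, 2);	/* patch after "insn 1" */
	n = linearize(&i0, out);
	for (i = 0; i < n; i++)
		printf("%d ", out[i]);	/* prints: 0 1 100 101 2 */
	printf("\n");
	return 0;
}

Because an insertion never renumbers existing nodes, jump targets and aux data can keep referring to nodes until the final linearization, which is where the speed-up over repeated array reshuffling comes from.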
Andrii Nakryiko writes:
> On Thu, Jul 4, 2019 at 2:31 PM Jiong Wang wrote:
>>
>> This is an RFC based on latest bpf-next about accelerating insn patching
>> speed. It is now near the shape of the final PATCH set, and we can see the
>> changes migrating to list patchin
This patch migrates convert_ctx_accesses to the new list patching
infrastructure. Pre-patch is used for generating the prologue, because what we
really want to do is insert the prologue before the program start without touching
the first insn.
Signed-off-by: Jiong Wang
---
kernel/bpf/verifier.c | 98
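The "pre-patch" mentioned in this commit message inserts new insns in front of an anchor insn rather than after it, which is what a prologue needs: the original first insn keeps its identity while new code runs before it. A hypothetical sketch of that operation in the same standalone list model (a dummy head node supplies the predecessor of insn 0):

/*
 * "Pre-patch" sketch: splice a prologue chain first..last in *front of*
 * an anchor insn instead of after it.  The anchor (the program's
 * original first insn) is not modified, so anything that refers to it
 * stays valid.  Illustrative only, not the RFC's real API.
 */
struct list_insn {
	int code;
	struct list_insn *next;
};

static void pre_patch(struct list_insn *prev_of_anchor,
		      struct list_insn *first, struct list_insn *last)
{
	last->next = prev_of_anchor->next;	/* anchor now follows the prologue */
	prev_of_anchor->next = first;		/* prologue becomes the entry point */
}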
This patch migrates fixup_bpf_calls to the new list patching
infrastructure.
Signed-off-by: Jiong Wang
---
kernel/bpf/verifier.c | 94 +++
1 file changed, 49 insertions(+), 45 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
This patch deletes all code around the old insn patching infrastructure.
Signed-off-by: Jiong Wang
---
include/linux/bpf_verifier.h | 1 -
include/linux/filter.h | 4 -
kernel/bpf/core.c| 169 -
kernel/bpf/verifier.c| 221
This patch migrates 32-bit zero extension insertion to the new list patching
infrastructure.
Signed-off-by: Jiong Wang
---
kernel/bpf/verifier.c | 45 +
1 file changed, 25 insertions(+), 20 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf
The list linearization function will figure out the new jump destinations of
patched/blinded jumps, so there is no need for destination adjustment inside
bpf_jit_blind_insn any more.
Signed-off-by: Jiong Wang
---
kernel/bpf/core.c | 76 ++-
1 file changed, 36
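Deferring jump fixup to linearization works because only then are final instruction indices known: each jump keeps a pointer to the node it targets, and the relative offset is recomputed from the final positions. A standalone sketch of that two-pass flattening (illustrative structures, not the RFC's code):

/*
 * Two-pass linearization sketch: pass 1 assigns every node its final
 * index, pass 2 recomputes each jump's relative offset from those final
 * positions (BPF jumps are relative to the insn after the jump, hence
 * the -1).
 */
struct list_insn {
	int is_jump;
	struct list_insn *target;	/* destination node, valid when is_jump */
	struct list_insn *next;
	int final_idx;			/* filled in by pass 1 */
	int off;			/* filled in by pass 2 */
};

static void linearize_fixup(struct list_insn *head)
{
	struct list_insn *p;
	int idx = 0;

	for (p = head; p; p = p->next)
		p->final_idx = idx++;

	for (p = head; p; p = p->next)
		if (p->is_jump)
			p->off = p->target->final_idx - p->final_idx - 1;
}

Since blinded insns also go through the list, their jumps are fixed up by the same pass, which is why bpf_jit_blind_insn no longer needs its own destination adjustment.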
't touch insns inside the
patch buffer.
Adjusting subprogs is finished along with adjusting jump targets when the
input covers bpf-to-bpf call insns; re-registering subprog starts is cheap.
But adjustment when there is insn deletion is not considered yet.
Signed-off-by: Jiong Wang
---
kernel/bpf/verifi
This patch migrates the dead code removal pass to the new list patching
infrastructure.
Signed-off-by: Jiong Wang
---
kernel/bpf/verifier.c | 59 +--
1 file changed, 19 insertions(+), 40 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf
e retained; it is
purely insn insertion, so the pre-patch API needs to be used.
I plan to send out a PATCH set once I finish insn deletion line info adjustment
support. Please have a look at this RFC; feedback is appreciated.
Jiong Wang (8):
bpf: introducing list based insn patching infra to core
.
Suggested-by: Alexei Starovoitov
Suggested-by: Edward Cree
Signed-off-by: Jiong Wang
---
include/linux/filter.h | 25 +
kernel/bpf/core.c | 268 +
2 files changed, 293 insertions(+)
diff --git a/include/linux/filter.h b/include/linux/filter.h
serve
the low 32-bit as a signed integer, which is all we want.
Fixes: 2dc6b100f928 ("bpf: interpreter support BPF_ALU | BPF_ARSH")
Reported-by: Yauheni Kaliuta
Reviewed-by: Jakub Kicinski
Reviewed-by: Quentin Monnet
Signed-off-by: Jiong Wang
---
kernel/bpf/core.c | 4 ++--
1 file change
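The semantics this fix is after: for ALU32 BPF_ARSH, only the low 32 bits of the destination are treated as a signed integer, shifted arithmetically, and the 32-bit result is zero-extended back into the 64-bit register. A standalone illustration of that behaviour (not the kernel diff itself):

/*
 * ALU32 BPF_ARSH semantics: treat only the low 32 bits of the
 * destination as a signed integer, shift arithmetically, then
 * zero-extend the 32-bit result into the 64-bit register.
 */
#include <stdio.h>
#include <stdint.h>

static uint64_t alu32_arsh(uint64_t dst, unsigned int shift)
{
	return (uint64_t)(uint32_t)((int32_t)dst >> shift);
}

int main(void)
{
	/* low 32 bits are 0x80000000, i.e. negative, so the sign must extend */
	printf("%#llx\n", (unsigned long long)alu32_arsh(0x80000000ULL, 1));
	/* prints 0xc0000000; a 64-bit arithmetic shift followed by
	 * truncation would have produced 0x40000000 instead */
	return 0;
}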
Edward Cree writes:
> On 17/06/2019 21:40, Jiong Wang wrote:
>> Now if we don't split the patch when patching an insn inside a patch, and instead
>> replace the patched insn using what you suggested, then the logic looks to
>> me to become even more complex, something like
Alexei Starovoitov writes:
> On Mon, Jun 17, 2019 at 1:40 PM Jiong Wang wrote:
>>
>> After digesting Alexei's and Andrii's replies, I still don't see the need to turn
>> the branch target into a list, and I am not sure whether a pool-based list sounds
>> good? It sa
Edward Cree writes:
> On 17/06/2019 20:59, Jiong Wang wrote:
>> Edward Cree writes:
>>
>>> On 14/06/2019 16:13, Jiong Wang wrote:
>>>> Just an update to keep people posted.
>>>>
>>>> Working on linked list based approach, the imple
Edward Cree writes:
> On 14/06/2019 16:13, Jiong Wang wrote:
>> Just an update to keep people posted.
>>
>> Working on linked list based approach, the implementation looks like the
>> following, mostly a combination of the discussions that happened and Naveen's patch
Alexei Starovoitov writes:
> On Wed, Jun 12, 2019 at 8:25 AM Jiong Wang wrote:
>>
>>
>> Jiong Wang writes:
>>
>> > Alexei Starovoitov writes:
>> >
>> >> On Wed, Jun 12, 2019 at 4:32 AM Naveen N. Rao
>> >> wrote:
>> >
Jiong Wang writes:
> Alexei Starovoitov writes:
>
>> On Wed, Jun 12, 2019 at 4:32 AM Naveen N. Rao
>> wrote:
>>>
>>> Currently, for constant blinding, we re-allocate the bpf program to
>>> account for its new size and adjust all branches to accom
Alexei Starovoitov writes:
> On Wed, Jun 12, 2019 at 4:32 AM Naveen N. Rao
> wrote:
>>
>> Currently, for constant blinding, we re-allocate the bpf program to
>> account for its new size and adjust all branches to accommodate the
>> same, for each BPF instruction that needs constant blinding. Th
)
v1:
- Integrated rephrase from Quentin and Jakub
Reviewed-by: Quentin Monnet
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
Documentation/bpf/bpf_design_QA.rst | 30 +-
1 file changed, 25 insertions(+), 5 deletions(-)
diff --git a/Documentation/bpf
Song Liu writes:
> On Thu, May 30, 2019 at 12:46 AM Jiong Wang wrote:
>>
>> There has been quite a bit of progress around the two steps mentioned in the
>> answer to the following question:
>>
>> Q: BPF 32-bit subregister requirements
>>
>> This patc
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
Documentation/bpf/bpf_design_QA.rst | 30 +-
1 file changed, 25 insertions(+), 5 deletions(-)
diff --git a/Documentation/bpf/bpf_design_QA.rst
b/Documentation/bpf/bpf_design_QA.rst
index cb402c5..5092a2a 100644
-ends fail to
guarantee the mentioned semantics, these unit tests will fail.
Acked-by: Jakub Kicinski
Reviewed-by: Quentin Monnet
Signed-off-by: Jiong Wang
---
tools/testing/selftests/bpf/verifier/subreg.c | 516 +-
1 file changed, 505 insertions(+), 11 deletions
It is better to centralize all sub-register zero extension checks into an
independent file.
This patch takes the first step to move existing sub-register zero
extension checks into subreg.c.
Acked-by: Jakub Kicinski
Reviewed-by: Quentin Monnet
Signed-off-by: Jiong Wang
---
tools/testing
JIT back-ends need to guarantee the high 32-bit is cleared whenever one eBPF insn
writes the low 32-bit sub-register only. It is possible that some JIT back-ends
have failed to do this and are silently generating wrong images.
This set completes the unit tests, so bugs in this area could be exposed.
Jiong Wang (2
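The property the selftests enforce is easy to state: any write that targets only the low 32-bit sub-register must leave the upper 32 bits zero. Modelled in plain C (the BPF-level equivalent is an alu32 mov into a register whose upper half was previously all ones):

/*
 * The invariant the tests check, modelled in plain C: a 32-bit write
 * must leave the destination's upper 32 bits zero.  The BPF equivalent
 * is roughly:
 *
 *     r0 = 0xffffffffffffffff
 *     w0 = -1              // alu32 mov: must clear the high half
 *     // r0 must now read back as 0x00000000ffffffff
 */
#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint64_t r0 = 0xffffffffffffffffULL;

	r0 = (uint32_t)-1;	/* 32-bit write, implicitly zero-extended */
	assert(r0 == 0x00000000ffffffffULL);
	return 0;
}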
Alexei Starovoitov writes:
> On Fri, May 24, 2019 at 11:25:11PM +0100, Jiong Wang wrote:
>> v9:
>> - Split patch 5 in v8.
>> make bpf uapi header file sync a separate patch. (Alexei)
>
> 9th time's a charm? ;)
Yup :), it's all good things and he
insn using this new mov32 variant.
One helper function insn_is_zext is added for checking whether one insn is a
zero extension on dst. It will be widely used by a few JIT back-ends in
later patches in this set.
Signed-off-by: Jiong Wang
---
include/linux/filter.h | 14 ++
1 file changed
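To the best of my recollection of the merged series, the special zero-extension insn is emitted as a mov32 of a register onto itself with imm set to 1, so no compiler-generated mov32 can collide with it; treat that encoding as an assumption rather than a quote of the patch. A sketch of the helper along those lines, buildable against the uapi header:

/*
 * Sketch of the insn_is_zext() helper named above.  The imm == 1
 * marker distinguishes the verifier-inserted zext from ordinary mov32s.
 */
#include <linux/bpf.h>		/* struct bpf_insn, BPF_ALU/BPF_MOV/BPF_X */
#include <stdbool.h>

static inline bool insn_is_zext(const struct bpf_insn *insn)
{
	return insn->code == (BPF_ALU | BPF_MOV | BPF_X) && insn->imm == 1;
}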
s hardware zero extension support. The peephole
could be as simple as looking at the next insn: if it is a special zero
extension insn, then it is safe to eliminate it when the current insn has
hardware zero extension support.
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
include/linux/bpf.h
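The peephole described above fits naturally in a JIT's main translation loop: after emitting an insn whose 32-bit result the hardware already zero-extends, peek at the next insn and skip it if it is the verifier-inserted zext marker for the same register. A hedged sketch of that shape (hw_zero_extends() and emit_native() are stand-ins for arch-specific code, not real kernel functions):

/*
 * Shape of the per-arch peephole: emit the current insn, then look one
 * insn ahead; if the hardware already zero-extended the 32-bit result
 * and the next insn is the zext marker for the same register, consume
 * it too.
 */
#include <linux/bpf.h>
#include <stdbool.h>

bool insn_is_zext(const struct bpf_insn *insn);	/* see earlier sketch */
bool hw_zero_extends(const struct bpf_insn *insn);
void emit_native(const struct bpf_insn *insn);

/* Returns how many BPF insns were consumed (1 or 2). */
static int jit_one_insn(const struct bpf_insn *insn)
{
	emit_native(insn);

	if (hw_zero_extends(insn) && insn_is_zext(insn + 1) &&
	    insn[1].dst_reg == insn->dst_reg)
		return 2;	/* the explicit zext is redundant: skip it */

	return 1;
}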
used load APIs to the test version, hence enabling high
32-bit randomization for these tests without changing source code.
Besides all these, there are several testcases using
"bpf_prog_load_attr" directly; their call sites are updated to pass
BPF_F_TEST_RND_HI32.
Signed-off-by: Ji
Cc: Naveen N. Rao
Cc: Sandipan Das
Signed-off-by: Jiong Wang
---
arch/powerpc/net/bpf_jit_comp64.c | 36 +---
1 file changed, 33 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c
b/arch/powerpc/net/bpf_jit_comp64.c
index 21a1dcd
This patch randomizes the high 32-bit of a definition when BPF_F_TEST_RND_HI32
is set.
Suggested-by: Alexei Starovoitov
Signed-off-by: Jiong Wang
---
kernel/bpf/verifier.c | 68 ++-
1 file changed, 57 insertions(+), 11 deletions(-)
diff --git a
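The randomization works by poisoning the upper half of the destination right after each sub-register definition: load a random 32-bit constant into a scratch register, shift it left by 32, and OR it into the destination. A sketch of the inserted four-insn pattern using the kernel's insn-building macros; the use of BPF_REG_AX as scratch and the helper name are assumptions from memory:

/*
 * Sketch of the pattern inserted after a sub-register definition when
 * BPF_F_TEST_RND_HI32 is set.  Built with the insn macros from
 * include/linux/filter.h.
 */
#include <linux/filter.h>

static void build_rnd_hi32_patch(struct bpf_insn *patch,
				 const struct bpf_insn *def, u32 imm_rnd)
{
	patch[0] = *def;					     /* the original 32-bit def */
	patch[1] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_AX, imm_rnd);     /* ax = random      */
	patch[2] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_AX, 32);	     /* ax <<= 32        */
	patch[3] = BPF_ALU64_REG(BPF_OR, def->dst_reg, BPF_REG_AX); /* dst |= ax        */
}

If a JIT or the interpreter fails to zero the high half on the def, the poisoned bits survive and the test program misbehaves, which is exactly what the flag is meant to expose.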
A new sub-register write overrides the old one.
- When propagating read64 during path pruning, also mark any insn defining
a sub-register that is read in the pruned path as full-register.
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
include/linux/bpf_verifier.h | 14 +++-
kerne
efficient that 1M calls to
it will hang the computer. So, change to BPF_ALU64_IMM to avoid hi32
randomization.
Signed-off-by: Jiong Wang
---
tools/testing/selftests/bpf/test_verifier.c | 29 +++--
1 file changed, 19 insertions(+), 10 deletions(-)
diff --git a/tools
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Signed-off-by: Jiong Wang
---
arch/s390/net/bpf_jit_comp.c | 41 ++---
1 file changed, 34 insertions(+), 7 deletions(-)
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 5e7c630..e636728
Patched insns do not go through generic verification, therefore they don't have
zero extension information collected during insn walking.
We don't bother analyzing them at the moment; for any sub-register def that comes
from them, just conservatively mark it as needing zero extension.
Signed-off
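Concretely, the conservative choice is to flag every sub-register definition inside a patch buffer as needing zero extension, since no liveness data exists for it; the cost is at worst a redundant zext. A verifier-context sketch of that marking (insn_def32() and the aux field name are assumptions, not the patch's exact identifiers):

/*
 * Conservative marking sketch for insns that come from a patch buffer:
 * every insn that writes only a 32-bit sub-register is flagged as
 * needing an explicit zero extension, because no liveness information
 * was collected for it.
 */
static void mark_patched_insns_zext(struct bpf_verifier_env *env,
				    int start, int cnt)
{
	int i;

	for (i = start; i < start + cnt; i++)
		if (insn_def32(&env->prog->insnsi[i]))
			env->insn_aux_data[i].zext_dst = true;
}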
any
later insn. Such randomization is only enabled under testing mode, which is
gated by the new bpf prog load flag "BPF_F_TEST_RND_HI32".
Suggested-by: Alexei Starovoitov
Signed-off-by: Jiong Wang
---
include/uapi/linux/bpf.h | 18 ++
kernel/bpf/syscall.c | 4 +
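From userspace the new flag is just another prog_flags bit supplied at load time; the libbpf patches in this series plumb the same bit through its load attributes. A minimal raw bpf(2) example (the two-insn program only does "r0 = 0; exit", and error handling is omitted):

/*
 * Minimal example of requesting high 32-bit randomization at load time
 * via prog_flags.
 */
#include <linux/bpf.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct bpf_insn prog[] = {
		{ .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0 },
		{ .code = BPF_JMP | BPF_EXIT },
	};
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
	attr.insns = (__u64)(unsigned long)prog;
	attr.insn_cnt = sizeof(prog) / sizeof(prog[0]);
	attr.license = (__u64)(unsigned long)"GPL";
	attr.prog_flags = BPF_F_TEST_RND_HI32;	/* testing-only flag */

	return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)) < 0;
}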
Cc: Wang YanQing
Tested-by: Wang YanQing
Signed-off-by: Jiong Wang
---
arch/x86/net/bpf_jit_comp32.c | 83 +--
1 file changed, 56 insertions(+), 27 deletions(-)
diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index b29e82f
Cc: David S. Miller
Signed-off-by: Jiong Wang
---
arch/sparc/net/bpf_jit_comp_64.c | 29 +++--
1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index 65428e7..3364e2a 100644
--- a/arch
ed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
drivers/net/ethernet/netronome/nfp/bpf/jit.c | 115 +-
drivers/net/ethernet/netronome/nfp/bpf/main.h | 2 +
drivers/net/ethernet/netronome/nfp/bpf/verifier.c | 12 +++
3 files changed, 81 insertions(+), 48 dele
Cc: Shubham Bansal
Signed-off-by: Jiong Wang
---
arch/arm/net/bpf_jit_32.c | 42 +++---
1 file changed, 31 insertions(+), 11 deletions(-)
diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index c8bfbbf..97a6b4b 100644
--- a/arch/arm/net
stomizing program loading.
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
tools/lib/bpf/bpf.c| 1 +
tools/lib/bpf/bpf.h| 1 +
tools/lib/bpf/libbpf.c | 3 +++
tools/lib/bpf/libbpf.h | 1 +
4 files changed, 6 insertions(+)
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index
Cc: Björn Töpel
Acked-by: Björn Töpel
Tested-by: Björn Töpel
Signed-off-by: Jiong Wang
---
arch/riscv/net/bpf_jit_comp.c | 43 ++-
1 file changed, 30 insertions(+), 13 deletions(-)
diff --git a/arch/riscv/net/bpf_jit_comp.c b/arch/riscv/net
Sync new bpf prog load flag "BPF_F_TEST_RND_HI32" to tools/.
Signed-off-by: Jiong Wang
---
tools/include/uapi/linux/bpf.h | 18 ++
1 file changed, 18 insertions(+)
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 68d4470..7c6aef2 10
set is enabled, > 25% of those 4460 could be identified as
not needing zero extension on the destination, and the percentage
could go further up to more than 50% with some follow-up optimizations
based on the infrastructure offered by this set. This leads to
significant savings on JITed
On 24/05/2019 21:43, Alexei Starovoitov wrote:
On Fri, May 24, 2019 at 12:35:15PM +0100, Jiong Wang wrote:
x86_64 and AArch64 are perhaps the two arches running the bpf testsuite most
frequently; however, the zero extension insertion pass is not enabled for
them because of their hardware support.
It is
Björn Töpel writes:
> On Fri, 24 May 2019 at 13:36, Jiong Wang wrote:
>>
>> Cc: Björn Töpel
>> Acked-by: Björn Töpel
>> Tested-by: Björn Töpel
>> Signed-off-by: Jiong Wang
>> ---
>> arch/riscv/net/bpf_jit_comp.c | 43
>> +++
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Signed-off-by: Jiong Wang
---
arch/s390/net/bpf_jit_comp.c | 41 ++---
1 file changed, 34 insertions(+), 7 deletions(-)
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 5e7c630..e636728
This patch randomizes the high 32-bit of a definition when BPF_F_TEST_RND_HI32
is set.
Suggested-by: Alexei Starovoitov
Signed-off-by: Jiong Wang
---
kernel/bpf/verifier.c | 68 ++-
1 file changed, 57 insertions(+), 11 deletions(-)
diff --git a
Cc: David S. Miller
Signed-off-by: Jiong Wang
---
arch/sparc/net/bpf_jit_comp_64.c | 29 +++--
1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index 65428e7..3364e2a 100644
--- a/arch
stomizing program loading.
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
tools/lib/bpf/bpf.c| 1 +
tools/lib/bpf/bpf.h| 1 +
tools/lib/bpf/libbpf.c | 3 +++
tools/lib/bpf/libbpf.h | 1 +
4 files changed, 6 insertions(+)
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index
used load APIs to the test version, hence enabling high
32-bit randomization for these tests without changing source code.
Besides all these, there are several testcases using
"bpf_prog_load_attr" directly; their call sites are updated to pass
BPF_F_TEST_RND_HI32.
Signed-off-by: Ji
ed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
drivers/net/ethernet/netronome/nfp/bpf/jit.c | 115 +-
drivers/net/ethernet/netronome/nfp/bpf/main.h | 2 +
drivers/net/ethernet/netronome/nfp/bpf/verifier.c | 12 +++
3 files changed, 81 insertions(+), 48 dele
Cc: Shubham Bansal
Signed-off-by: Jiong Wang
---
arch/arm/net/bpf_jit_32.c | 42 +++---
1 file changed, 31 insertions(+), 11 deletions(-)
diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index c8bfbbf..97a6b4b 100644
--- a/arch/arm/net
upport and want the verifier to insert zero extension explicitly.
Offload targets do not use this native target hook; instead, they can
get the optimization results using bpf_prog_offload_ops.finalize.
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
include/linux/bpf.h| 1 +
include/linu
Cc: Naveen N. Rao
Cc: Sandipan Das
Signed-off-by: Jiong Wang
---
arch/powerpc/net/bpf_jit_comp64.c | 36 +---
1 file changed, 33 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c
b/arch/powerpc/net/bpf_jit_comp64.c
index 21a1dcd
any
later insn. Such randomization is only enabled under testing mode, which is
gated by the new bpf prog load flag "BPF_F_TEST_RND_HI32".
Suggested-by: Alexei Starovoitov
Signed-off-by: Jiong Wang
---
include/uapi/linux/bpf.h | 18 ++
kernel/bpf/syscall.c
efficient that 1M calls to
it will hang the computer. So, change to BPF_ALU64_IMM to avoid hi32
randomization.
Signed-off-by: Jiong Wang
---
tools/testing/selftests/bpf/test_verifier.c | 29 +++--
1 file changed, 19 insertions(+), 10 deletions(-)
diff --git a/tools
Cc: Björn Töpel
Acked-by: Björn Töpel
Tested-by: Björn Töpel
Signed-off-by: Jiong Wang
---
arch/riscv/net/bpf_jit_comp.c | 43 ++-
1 file changed, 30 insertions(+), 13 deletions(-)
diff --git a/arch/riscv/net/bpf_jit_comp.c b/arch/riscv/net
cat dump | grep -P "r.*=.*u32" | wc -l (READ_W)
cat dump | grep -P "r.*=.*u16" | wc -l (READ_H)
cat dump | grep -P "r.*=.*u8" | wc -l (READ_B)
After this patch set is enabled, > 25% of those 4460 could be identified as
not needing zero extension
Cc: Wang YanQing
Tested-by: Wang YanQing
Signed-off-by: Jiong Wang
---
arch/x86/net/bpf_jit_comp32.c | 83 +--
1 file changed, 56 insertions(+), 27 deletions(-)
diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index b29e82f
Patched insns do not go through generic verification, therefore they don't have
zero extension information collected during insn walking.
We don't bother analyzing them at the moment; for any sub-register def that comes
from them, just conservatively mark it as needing zero extension.
Signed-off
A new sub-register write overrides the old one.
- When propagating read64 during path pruning, also mark any insn defining
a sub-register that is read in the pruned path as full-register.
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
include/linux/bpf_verifier.h | 14 +++-
kerne
insn using this new mov32 variant.
One helper function insn_is_zext is added for checking whether one insn is a
zero extension on dst. It will be widely used by a few JIT back-ends in
later patches in this set.
Signed-off-by: Jiong Wang
---
include/linux/filter.h | 14 ++
1 file changed
Alexei Starovoitov writes:
> well, it made me realize that we're probably doing it wrong,
> since after calling check_reg_arg() we need to re-parse insn encoding.
> How about we change check_reg_arg()'s enum reg_arg_type instead?
This is exactly what I had implemented in my initial internal v
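One shape such a change could take: widen enum reg_arg_type so callers state whether the operand access is a sub-register one, letting check_reg_arg() do the zext bookkeeping without the caller re-parsing the insn encoding afterwards. A purely hypothetical sketch of the idea under discussion, not the code that was eventually merged (the *32 values are invented for illustration):

enum reg_arg_type {
	SRC_OP,		/* register is used as a source operand */
	SRC_OP32,	/* ... and only its low 32 bits are read */
	DST_OP,		/* register is used as a destination operand */
	DST_OP32,	/* ... and only its low 32 bits are written */
	DST_OP_NO_MARK,	/* same as DST_OP, check only, don't mark */
};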
> On 23 May 2019, at 15:02, Daniel Borkmann wrote:
>
> On 05/23/2019 08:38 AM, Y Song wrote:
>> On Wed, May 22, 2019 at 1:46 PM Björn Töpel wrote:
>>> On Wed, 22 May 2019 at 20:13, Y Song wrote:
On Wed, May 22, 2019 at 2:25 AM Björn Töpel wrote:
>
> Add three tests to test_veri
> On 23 May 2019, at 03:07, Alexei Starovoitov
> wrote:
>
> On Wed, May 22, 2019 at 07:54:57PM +0100, Jiong Wang wrote:
>> The eBPF ISA specification requires the high 32-bit to be cleared when the low 32-bit
>> sub-register is written. This applies to the destination register of ALU32 et
any
later insn. Such randomization is only enabled under testing mode, which is
gated by the new bpf prog load flag "BPF_F_TEST_RND_HI32".
Suggested-by: Alexei Starovoitov
Signed-off-by: Jiong Wang
---
include/uapi/linux/bpf.h | 18 ++
kernel/bpf/syscall.c
efficient that 1M calls to
it will hang the computer. So, change to BPF_ALU64_IMM to avoid hi32
randomization.
Signed-off-by: Jiong Wang
---
tools/testing/selftests/bpf/test_verifier.c | 29 +++--
1 file changed, 19 insertions(+), 10 deletions(-)
diff --git a/tools
used load APIs to the test version, hence enabling high
32-bit randomization for these tests without changing source code.
Besides all these, there are several testcases using
"bpf_prog_load_attr" directly; their call sites are updated to pass
BPF_F_TEST_RND_HI32.
Signed-off-by: Ji
Cc: Naveen N. Rao
Cc: Sandipan Das
Signed-off-by: Jiong Wang
---
arch/powerpc/net/bpf_jit_comp64.c | 36 +---
1 file changed, 33 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c
b/arch/powerpc/net/bpf_jit_comp64.c
index 21a1dcd
Cc: Shubham Bansal
Signed-off-by: Jiong Wang
---
arch/arm/net/bpf_jit_32.c | 42 +++---
1 file changed, 31 insertions(+), 11 deletions(-)
diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index c8bfbbf..97a6b4b 100644
--- a/arch/arm/net
Cc: Björn Töpel
Acked-by: Björn Töpel
Tested-by: Björn Töpel
Signed-off-by: Jiong Wang
---
arch/riscv/net/bpf_jit_comp.c | 43 ++-
1 file changed, 30 insertions(+), 13 deletions(-)
diff --git a/arch/riscv/net/bpf_jit_comp.c b/arch/riscv/net
stomizing program loading.
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
tools/lib/bpf/bpf.c| 1 +
tools/lib/bpf/bpf.h| 1 +
tools/lib/bpf/libbpf.c | 3 +++
tools/lib/bpf/libbpf.h | 1 +
4 files changed, 6 insertions(+)
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index
Cc: David S. Miller
Signed-off-by: Jiong Wang
---
arch/sparc/net/bpf_jit_comp_64.c | 29 +++--
1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index 65428e7..3364e2a 100644
--- a/arch
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Signed-off-by: Jiong Wang
---
arch/s390/net/bpf_jit_comp.c | 41 ++---
1 file changed, 34 insertions(+), 7 deletions(-)
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 5e7c630..e636728
Cc: Wang YanQing
Tested-by: Wang YanQing
Signed-off-by: Jiong Wang
---
arch/x86/net/bpf_jit_comp32.c | 83 +--
1 file changed, 56 insertions(+), 27 deletions(-)
diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index b29e82f
ed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
drivers/net/ethernet/netronome/nfp/bpf/jit.c | 115 +-
drivers/net/ethernet/netronome/nfp/bpf/main.h | 2 +
drivers/net/ethernet/netronome/nfp/bpf/verifier.c | 12 +++
3 files changed, 81 insertions(+), 48 dele
Patched insns do not go through generic verification, therefore they don't have
zero extension information collected during insn walking.
We don't bother analyzing them at the moment; for any sub-register def that comes
from them, just conservatively mark it as needing zero extension.
Signed-off
This patch randomizes the high 32-bit of a definition when BPF_F_TEST_RND_HI32
is set.
Suggested-by: Alexei Starovoitov
Signed-off-by: Jiong Wang
---
kernel/bpf/verifier.c | 68 ++-
1 file changed, 57 insertions(+), 11 deletions(-)
diff --git a
.*u8" | wc -l (READ_B)
After this patch set is enabled, more than half of those 4460 could be
identified as not needing zero extension on the destination; this
could lead to significant savings on the JITed image.
Jiong Wang (16):
bpf: verifier: mark verified-insn with sub-register zext f
A new sub-register write overrides the old one.
- When propagating read64 during path pruning, also mark any insn defining
a sub-register that is read in the pruned path as full-register.
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
include/linux/bpf_verifier.h | 14 +++-
kerne
s hardware zero extension support. The peephole
could be as simple as looking at the next insn: if it is a special zero
extension insn, then it is safe to eliminate it when the current insn has
hardware zero extension support.
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
include/linux/bpf.h
insn using this new mov32 variant.
One helper function insn_is_zext is added for checking whether one insn is a
zero extension on dst. It will be widely used by a few JIT back-ends in
later patches in this set.
Signed-off-by: Jiong Wang
---
include/linux/filter.h | 14 ++
1 file changed
Alexei Starovoitov writes:
> On Fri, May 10, 2019 at 09:30:28AM +0100, Jiong Wang wrote:
>>
>> Alexei Starovoitov writes:
>>
>> > On Thu, May 09, 2019 at 01:32:30PM +0100, Jiong Wang wrote:
>> >>
>> >> Alexei Starovoitov writes:
>>
Alexei Starovoitov writes:
> On Thu, May 09, 2019 at 01:32:30PM +0100, Jiong Wang wrote:
>>
>> Alexei Starovoitov writes:
>>
>> > On Wed, May 08, 2019 at 03:45:12PM +0100, Jiong Wang wrote:
>> >>
>> >> I might be misunde
Jiong Wang writes:
> At the moment we have a single backend hook "bpf_jit_hardware_zext"; once a
> backend enables it, the verifier just inserts zero extension for all identified
> alu32 and narrow loads.
>
> Given verifier analysis info is not pushed down to JIT back-ends, v
Alexei Starovoitov writes:
> On Wed, May 08, 2019 at 03:45:12PM +0100, Jiong Wang wrote:
>>
>> I might be misunderstanding your points, please just shout if I am wrong.
>>
>> Suppose the following BPF code:
>>
>> unsigned helper(unsigned long long,
Alexei Starovoitov writes:
> On Fri, May 03, 2019 at 11:42:28AM +0100, Jiong Wang wrote:
>> BPF helper call transfers execution from eBPF insns to native functions
>> while verifier insn walker only walks eBPF insns. So, verifier can only
>> know argument and return valu
Jiong Wang writes:
> Daniel Borkmann writes:
>
>> On 05/03/2019 12:42 PM, Jiong Wang wrote:
>>> BPF helper call transfers execution from eBPF insns to native functions
>>> while verifier insn walker only walks eBPF insns. So, verifier can only
>>> kno
eported-by: Oleksandr Natalenko
Reported-by: Pablo Cascón
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
drivers/net/ethernet/netronome/nfp/bpf/jit.c | 13 -
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/jit.c
b/d
Jiong Wang writes:
> Alexei Starovoitov writes:
>
>> On Fri, May 03, 2019 at 11:42:31AM +0100, Jiong Wang wrote:
>>> This patch introduces a new alu32 insn BPF_ZEXT, and allocates the unused
>>> opcode 0xe0 to it.
>>>
>>> Compared with the other
Alexei Starovoitov writes:
> On Fri, May 03, 2019 at 11:42:31AM +0100, Jiong Wang wrote:
>> This patch introduces a new alu32 insn BPF_ZEXT, and allocates the unused
>> opcode 0xe0 to it.
>>
>> Compared with the other alu32 insns, zero extension on low 32-bit is the