s hardware zero extension support. The peephole
could be as simple as looking at the next insn: if it is a special zero
extension insn, then it is safe to eliminate it when the current insn has
hardware zero extension support.
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
include/linux/bpf.h
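For illustration, a minimal sketch of that peephole (insn_is_zext is the
helper introduced later in this set; hw_zext_supported and skip_next_insn
are hypothetical names, not the series' actual API):

    /* If the current insn already zero extends its destination in
     * hardware and the next insn is the special zero extension insn
     * inserted by the verifier, the latter is redundant.
     */
    if (hw_zext_supported(insn) && insn_is_zext(insn + 1))
            skip_next_insn(ctx);    /* do not emit code for the zext */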
insn using this new mov32 variant.
One helper function, insn_is_zext, is added for checking whether an insn is a
zero extension on dst. This will be widely used by a few JIT back-ends in
later patches in this set.
Signed-off-by: Jiong Wang
---
include/linux/filter.h | 14 ++
1 file changed
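A minimal sketch of the helper, assuming the special mov32 variant is
flagged by imm being set to 1 so it can be told apart from a user-written
mov32:

    static inline bool insn_is_zext(const struct bpf_insn *insn)
    {
            /* The verifier encodes its zero extension as a 32-bit
             * register-to-register mov with imm set to 1.
             */
            return insn->code == (BPF_ALU | BPF_MOV | BPF_X) &&
                   insn->imm == 1;
    }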
.*u8" | wc -l (READ_B)
With this patch set enabled, more than half of those 4460 could be
identified as not needing zero extension on the destination, which could
lead to a significant saving on the JITed image.
Jiong Wang (16):
bpf: verifier: mark verified-insn with sub-register zext f
A new sub-register write overrides the old one.
- When propagating read64 during path pruning, also mark any insn defining
a sub-register that is read in the pruned path as full-register.
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
include/linux/bpf_verifier.h | 14 +++-
kerne
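A sketch of the bookkeeping this implies; the field names (subreg_def,
zext_dst, DEF_NOT_SUBREG) are assumptions modelled on the verifier's aux
data, not necessarily the final ones:

    /* Each register records the insn that last wrote it as a
     * sub-register. A later 64-bit read upgrades that def to a
     * full-register def, i.e. it must be zero extended.
     */
    static void mark_insn_zext(struct bpf_verifier_env *env,
                               struct bpf_reg_state *reg)
    {
            s32 def_idx = reg->subreg_def;

            if (def_idx == DEF_NOT_SUBREG)
                    return;

            env->insn_aux_data[def_idx - 1].zext_dst = true;
            /* The def now covers the full register. */
            reg->subreg_def = DEF_NOT_SUBREG;
    }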
This patch randomizes the high 32-bit of a definition when BPF_F_TEST_RND_HI32
is set.
Suggested-by: Alexei Starovoitov
Signed-off-by: Jiong Wang
---
kernel/bpf/verifier.c | 68 ++-
1 file changed, 57 insertions(+), 11 deletions(-)
diff --git a
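A sketch of the rewrite, assuming the pass follows the verifier's usual
patching style with BPF_REG_AX as scratch; imm_rnd stands for a fresh
random 32-bit value:

    struct bpf_insn rnd_hi32_patch[4];

    rnd_hi32_patch[0] = *insn;                       /* original sub-register def */
    rnd_hi32_patch[1] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_AX, imm_rnd);
    rnd_hi32_patch[2] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_AX, 32);
    rnd_hi32_patch[3] = BPF_ALU64_REG(BPF_OR, insn->dst_reg, BPF_REG_AX);

The OR poisons only the high 32 bits, so a program relying on the
ISA-guaranteed zero extension will now fail visibly.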
Cc: Wang YanQing
Tested-by: Wang YanQing
Signed-off-by: Jiong Wang
---
arch/x86/net/bpf_jit_comp32.c | 83 +--
1 file changed, 56 insertions(+), 27 deletions(-)
diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index b29e82f
ed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
drivers/net/ethernet/netronome/nfp/bpf/jit.c | 115 +-
drivers/net/ethernet/netronome/nfp/bpf/main.h | 2 +
drivers/net/ethernet/netronome/nfp/bpf/verifier.c | 12 +++
3 files changed, 81 insertions(+), 48 dele
Patched insns do not go through generic verification, and therefore don't have
zero extension information collected during insn walking.
We don't bother to analyze them at the moment; for any sub-register def that
comes from them, just conservatively mark it as needing zero extension.
Signed-off
stomizing program loading.
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
tools/lib/bpf/bpf.c| 1 +
tools/lib/bpf/bpf.h| 1 +
tools/lib/bpf/libbpf.c | 3 +++
tools/lib/bpf/libbpf.h | 1 +
4 files changed, 6 insertions(+)
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index
Cc: David S. Miller
Signed-off-by: Jiong Wang
---
arch/sparc/net/bpf_jit_comp_64.c | 29 +++--
1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index 65428e7..3364e2a 100644
--- a/arch
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Signed-off-by: Jiong Wang
---
arch/s390/net/bpf_jit_comp.c | 41 ++---
1 file changed, 34 insertions(+), 7 deletions(-)
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 5e7c630..e636728
any
later insn. Such randomization is only enabled under testing mode, which is
gated by the new bpf prog load flag "BPF_F_TEST_RND_HI32".
Suggested-by: Alexei Starovoitov
Signed-off-by: Jiong Wang
---
include/uapi/linux/bpf.h | 18 ++
kernel/bpf/syscall.c | 4 +
efficient that 1M calls to
it will hang the computer. So, change to BPF_ALU64_IMM to avoid hi32
randomization.
Signed-off-by: Jiong Wang
---
tools/testing/selftests/bpf/test_verifier.c | 29 +++--
1 file changed, 19 insertions(+), 10 deletions(-)
diff --git a/tools
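For illustration, the kind of change meant here (hypothetical testcase
line, not the exact selftest source): a 32-bit counter update inside a hot
loop would get the hi32-poisoning sequence patched in at every definition,
while a 64-bit one is left alone.

    - BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 1),  /* hi32 randomized on each def */
    + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),  /* not a sub-register def      */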
used load APIs to the test version, hence enabling high
32-bit randomization for these tests without changing source code.
Besides all these, several testcases use
"bpf_prog_load_attr" directly; their call sites are updated to pass
BPF_F_TEST_RND_HI32.
Signed-off-by: Ji
Cc: Naveen N. Rao
Cc: Sandipan Das
Signed-off-by: Jiong Wang
---
arch/powerpc/net/bpf_jit_comp64.c | 36 +---
1 file changed, 33 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c
b/arch/powerpc/net/bpf_jit_comp64.c
index 21a1dcd
Cc: Shubham Bansal
Signed-off-by: Jiong Wang
---
arch/arm/net/bpf_jit_32.c | 42 +++---
1 file changed, 31 insertions(+), 11 deletions(-)
diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index c8bfbbf..97a6b4b 100644
--- a/arch/arm/net
Cc: Björn Töpel
Acked-by: Björn Töpel
Tested-by: Björn Töpel
Signed-off-by: Jiong Wang
---
arch/riscv/net/bpf_jit_comp.c | 43 ++-
1 file changed, 30 insertions(+), 13 deletions(-)
diff --git a/arch/riscv/net/bpf_jit_comp.c b/arch/riscv/net
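A sketch of how a JIT for a 64-bit arch without full hardware zext
consumes the new information (names loosely follow the RISC-V JIT and
should be treated as assumptions): the explicit truncation is emitted only
when the verifier has not already inserted a zext insn.

    if (!is64 && !ctx->prog->aux->verifier_zext)
            emit_zext_32(rd, ctx);  /* clear the high 32 bits of the result */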
> On 23 May 2019, at 03:07, Alexei Starovoitov
> wrote:
>
> On Wed, May 22, 2019 at 07:54:57PM +0100, Jiong Wang wrote:
>> eBPF ISA specification requires high 32-bit cleared when low 32-bit
>> sub-register is written. This applies to destination register of ALU32 et
> On 23 May 2019, at 15:02, Daniel Borkmann wrote:
>
> On 05/23/2019 08:38 AM, Y Song wrote:
>> On Wed, May 22, 2019 at 1:46 PM Björn Töpel wrote:
>>> On Wed, 22 May 2019 at 20:13, Y Song wrote:
On Wed, May 22, 2019 at 2:25 AM Björn Töpel wrote:
>
> Add three tests to test_veri
Alexei Starovoitov writes:
> well, it made me realize that we're probably doing it wrong,
> since after calling check_reg_arg() we need to re-parse insn encoding.
> How about we change check_reg_arg()'s enum reg_arg_type instead?
This is exactly what I had implemented in my initial internal v
cat dump | grep -P "r.*=.*u32" | wc -l (READ_W)
cat dump | grep -P "r.*=.*u16" | wc -l (READ_H)
cat dump | grep -P "r.*=.*u8" | wc -l (READ_B)
With this patch set enabled, > 25% of those 4460 could be identified as
not needing zero extension
upport and want the verifier to insert zero extension explicitly.
Offload targets do not use this native target hook; instead, they could
get the optimization results using bpf_prog_offload_ops.finalize.
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
include/linux/bpf.h| 1 +
include/linu
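A sketch of such a native target hook; the weak-default pattern is an
assumption consistent with other JIT hooks:

    /* core default: arches whose hardware zero extends keep this */
    bool __weak bpf_jit_needs_zext(void)
    {
            return false;
    }

    /* a 32-bit JIT overrides it to request explicit zext insns */
    bool bpf_jit_needs_zext(void)
    {
            return true;
    }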
Björn Töpel writes:
> On Fri, 24 May 2019 at 13:36, Jiong Wang wrote:
>>
>> Cc: Björn Töpel
>> Acked-by: Björn Töpel
>> Tested-by: Björn Töpel
>> Signed-off-by: Jiong Wang
>> ---
>> arch/riscv/net/bpf_jit_comp.c | 43
>> +++
On 24/05/2019 21:43, Alexei Starovoitov wrote:
On Fri, May 24, 2019 at 12:35:15PM +0100, Jiong Wang wrote:
x86_64 and AArch64 are perhaps the two arches that run the bpf testsuite
most frequently; however, the zero extension insertion pass is not enabled
for them because of their hardware support.
It is
Sync new bpf prog load flag "BPF_F_TEST_RND_HI32" to tools/.
Signed-off-by: Jiong Wang
---
tools/include/uapi/linux/bpf.h | 18 ++
1 file changed, 18 insertions(+)
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 68d4470..7c6aef2 10
set enabled, > 25% of those 4460 could be identified as
not needing zero extension on the destination, and the percentage
could go further up to more than 50% with some follow-up optimizations
based on the infrastructure offered by this set. This leads to a
significant saving on the JITed
Alexei Starovoitov writes:
> On Fri, May 24, 2019 at 11:25:11PM +0100, Jiong Wang wrote:
>> v9:
>> - Split patch 5 in v8.
>> make bpf uapi header file sync a separate patch. (Alexei)
>
> 9th time's a charm? ;)
Yup :), it's all good things and he
It is better to centralize all sub-register zero extension checks into an
independent file.
This patch takes the first step to move existing sub-register zero
extension checks into subreg.c.
Acked-by: Jakub Kicinski
Reviewed-by: Quentin Monnet
Signed-off-by: Jiong Wang
---
tools/testing
JIT back-ends need to guarantee the high 32-bit is cleared whenever one eBPF
insn writes the low 32-bit sub-register only. It is possible that some JIT
back-ends have failed to do this and are silently generating a wrong image.
This set completes the unit tests, so bugs of this kind could be exposed.
Jiong Wang (2
-ends fail to
guarantee the mentioned semantics, these unit tests will fail.
Acked-by: Jakub Kicinski
Reviewed-by: Quentin Monnet
Signed-off-by: Jiong Wang
---
tools/testing/selftests/bpf/verifier/subreg.c | 516 +-
1 file changed, 505 insertions(+), 11 deletions
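A hedged excerpt of the test shape (not the exact selftest source): fill a
register with all ones, perform a 32-bit write, then return the high
32 bits, which a conforming JIT must have cleared.

    BPF_MOV64_IMM(BPF_REG_0, -1),           /* r0 = 0xffffffffffffffff       */
    BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 1),   /* 32-bit write, hi32 must clear */
    BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),  /* isolate the high 32 bits      */
    BPF_EXIT_INSN(),                        /* expected return value: 0      */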
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
Documentation/bpf/bpf_design_QA.rst | 30 +-
1 file changed, 25 insertions(+), 5 deletions(-)
diff --git a/Documentation/bpf/bpf_design_QA.rst
b/Documentation/bpf/bpf_design_QA.rst
index cb402c5..5092a2a 100644
Song Liu writes:
> On Thu, May 30, 2019 at 12:46 AM Jiong Wang wrote:
>>
>> There has been quite a lot of progress around the two steps mentioned in the
>> answer to the following question:
>>
>> Q: BPF 32-bit subregister requirements
>>
>> This patc
)
v1:
- Integrated rephrase from Quentin and Jakub
Reviewed-by: Quentin Monnet
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
Documentation/bpf/bpf_design_QA.rst | 30 +-
1 file changed, 25 insertions(+), 5 deletions(-)
diff --git a/Documentation/bpf
Andrii Nakryiko writes:
> On Thu, Jul 11, 2019 at 4:22 AM Jiong Wang wrote:
>>
>>
>> Andrii Nakryiko writes:
>>
>> > On Thu, Jul 4, 2019 at 2:31 PM Jiong Wang wrote:
>> >>
>> >> This is an RFC based on latest bpf-next about accler
Andrii Nakryiko writes:
> On Thu, Jul 11, 2019 at 4:53 AM Jiong Wang wrote:
>>
>>
>> Andrii Nakryiko writes:
>>
>> > On Thu, Jul 4, 2019 at 2:32 PM Jiong Wang wrote:
>> >>
>> >> This patch introduces list based bpf insn patching in
Andrii Nakryiko writes:
> On Thu, Jul 11, 2019 at 5:20 AM Jiong Wang wrote:
>>
>>
>> Jiong Wang writes:
>>
>> > Andrii Nakryiko writes:
>> >
>> >> On Thu, Jul 4, 2019 at 2:32 PM Jiong Wang
>> >> wrote:
>>
Andrii Nakryiko writes:
> On Mon, Jul 15, 2019 at 3:02 AM Jiong Wang wrote:
>>
>>
>> Andrii Nakryiko writes:
>>
>> > On Thu, Jul 11, 2019 at 5:20 AM Jiong Wang
>> > wrote:
>> >>
>> >>
>> >> Jiong Wang writes:
&
Andrii Nakryiko writes:
> On Mon, Jul 15, 2019 at 2:21 AM Jiong Wang wrote:
>>
>>
>> Andrii Nakryiko writes:
>>
>> > On Thu, Jul 11, 2019 at 4:22 AM Jiong Wang
>> > wrote:
>> >>
>> >>
>> >> Andrii Nakryik
Alexei Starovoitov writes:
> On Tue, Jul 16, 2019 at 09:50:25AM +0100, Jiong Wang wrote:
>>
>> Let me digest a little bit and do some coding, then I will come back. Some
>> issues can only show up during in-depth coding. I kind of feel handling
>> aux reference in
Alexei Starovoitov writes:
> On Wed, Jun 12, 2019 at 4:32 AM Naveen N. Rao
> wrote:
>>
>> Currently, for constant blinding, we re-allocate the bpf program to
>> account for its new size and adjust all branches to accommodate the
>> same, for each BPF instruction that needs constant blinding. Th
Jiong Wang writes:
> Alexei Starovoitov writes:
>
>> On Wed, Jun 12, 2019 at 4:32 AM Naveen N. Rao
>> wrote:
>>>
>>> Currently, for constant blinding, we re-allocate the bpf program to
>>> account for its new size and adjust all branches to accom
Alexei Starovoitov writes:
> On Wed, Jun 12, 2019 at 8:25 AM Jiong Wang wrote:
>>
>>
>> Jiong Wang writes:
>>
>> > Alexei Starovoitov writes:
>> >
>> >> On Wed, Jun 12, 2019 at 4:32 AM Naveen N. Rao
>> >> wrote:
>> >
Edward Cree writes:
> On 14/06/2019 16:13, Jiong Wang wrote:
>> Just an update and keep people posted.
>>
>> Working on linked list based approach, the implementation looks like the
>> following, mostly a combination of the discussions that happened and Naveen's patch
Edward Cree writes:
> On 17/06/2019 20:59, Jiong Wang wrote:
>> Edward Cree writes:
>>
>>> On 14/06/2019 16:13, Jiong Wang wrote:
>>>> Just an update and keep people posted.
>>>>
>>>> Working on linked list based approach, the imple
Alexei Starovoitov writes:
> On Mon, Jun 17, 2019 at 1:40 PM Jiong Wang wrote:
>>
>> After digesting Alexei and Andrii's replies, I still don't see the need to
>> turn branch targets into a list, and I am not sure whether a pool based
>> list sounds good? it sa
Edward Cree writes:
> On 17/06/2019 21:40, Jiong Wang wrote:
>> Now if we don't split the patch when patching an insn inside a patch, but
>> instead replace the patched insn using what you suggested, then the logic,
>> it seems to me, becomes even more complex, something like
serve
the low 32-bit as signed integer; this is all we want.
Fixes: 2dc6b100f928 ("bpf: interpreter support BPF_ALU | BPF_ARSH")
Reported-by: Yauheni Kaliuta
Reviewed-by: Jakub Kicinski
Reviewed-by: Quentin Monnet
Signed-off-by: Jiong Wang
---
kernel/bpf/core.c | 4 ++--
1 file change
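A sketch of the interpreter fix implied by the description, using the
interpreter's DST/SRC/IMM/CONT macros: shift the sign-extended low 32 bits
arithmetically, then zero extend the result back into the 64-bit register.

    ALU_ARSH_X:
            DST = (u64) (u32) (((s32) DST) >> SRC);
            CONT;
    ALU_ARSH_K:
            DST = (u64) (u32) (((s32) DST) >> IMM);
            CONT;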
to unify them, and register all progs detected into env->subprog_starts.
This could also help simplify some code logic.
Signed-off-by: Jiong Wang
---
include/linux/bpf_verifier.h | 2 +-
kernel/bpf/verifier.c| 57
2 files changed, 32 inse
        subprog_end = insn_cnt;
  else
        subprog_end = env->subprog_info[cur_subprog + 1].start;

into:

        subprog_end = env->subprog_info[cur_subprog + 1].start;

There is no functional change by this patch set.
No bpf selftest regression found after this patch set.
Jiong Wang (3
marker in the subprog_info array to tell the end of
it.
We could resolve this issue by introducing a fake "ending" subprog.
The special "ending" subprog has "insn_cnt" as its start offset, so it
serves as the end mark whenever we iterate over all subprogs.
Signed
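A sketch of the idea, with field names following the surrounding snippets:
one extra subprog_info entry whose start is insn_cnt, so
subprog_info[i + 1].start is always a valid end boundary.

    /* register the fake "ending" subprog */
    env->subprog_info[env->subprog_cnt].start = insn_cnt;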
It is better to centre all subprog information fields into one structure.
This structure could later serve as a function node in the call graph.
Signed-off-by: Jiong Wang
---
include/linux/bpf_verifier.h | 9 ---
kernel/bpf/verifier.c| 62 +++-
2
On 01/05/2018 23:22, Alexei Starovoitov wrote:
...
[ 27.784931] ? bpf_int_jit_compile+0x7ac/0xab0
[ 27.785475] bpf_int_jit_compile+0x2b6/0xab0
[ 27.786001] ? do_jit+0x6020/0x6020
[ 27.786428] ? kasan_kmalloc+0xa0/0xd0
[ 27.786885] bpf_check+0x2c05/0x4c40
[ 27.787346] ? fixup_bpf
On 02/05/2018 18:24, John Fastabend wrote:
On 05/02/2018 09:59 AM, Jiong Wang wrote:
On 01/05/2018 23:22, Alexei Starovoitov wrote:
...
[ 27.784931] ? bpf_int_jit_compile+0x7ac/0xab0
[ 27.785475] bpf_int_jit_compile+0x2b6/0xab0
[ 27.786001] ? do_jit+0x6020/0x6020
[ 27.786428
v2:
- fixed adjust_subprog_starts to also update fake "exit" subprog start.
- for John's suggestion on renaming subprog to prog, I could work on
a follow-up patch if it is recognized as worth the change.
Jiong Wang (3):
bpf: unify main prog and subprog
bpf: centre subprog
lay the ground for further
static analysis inside the eBPF verifier, for example bounded loop detection,
path-sensitive data-flow analysis, etc.
Jiong Wang (10):
bpf: cfg: partition basic blocks for each subprog
bpf: cfg: add edges between basic blocks to form CFG
bpf: cfg: build domination tree
An insn immediately following a branch insn starts a BB.
An insn immediately following an exit, and within the subprog, starts a BB.
BBs for each subprog are organized as a list in ascending order of head.
Two special BBs, entry and exit, are added as well.
Signed-off-by: Jiong Wang
---
include/linux/bpf_verifier.h | 1 +
kernel/bp
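A sketch of those partition rules (mark_bb_head and the insn predicates
are hypothetical helpers):

    for (i = subprog_start; i < subprog_end; i++) {
            struct bpf_insn *insn = &insns[i];

            if (insn_is_branch(insn)) {
                    mark_bb_head(i + insn->off + 1); /* jump target starts a BB  */
                    mark_bb_head(i + 1);             /* fall-through starts a BB */
            } else if (insn_is_exit(insn)) {
                    mark_bb_head(i + 1);             /* insn after exit starts BB */
            }
    }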
This patch moves find_subprog and add_subprog to cfg.c.
Signed-off-by: Jiong Wang
---
kernel/bpf/cfg.c | 41 +
kernel/bpf/cfg.h | 2 ++
kernel/bpf/verifier.c | 42 --
3 files changed, 43 insertions
This patch builds a call graph during the insn scan inside check_subprogs,
then does recursive and unreachable subprog detection using the call graph.
Signed-off-by: Jiong Wang
---
include/linux/bpf_verifier.h | 1 +
kernel/bpf/cfg.c | 133 +++
kernel
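A sketch of the scan (cgraph_add_edge is a hypothetical helper;
find_subprog follows the naming already used in this series):

    /* one call-graph edge per bpf-to-bpf pseudo call */
    if (insn->code == (BPF_JMP | BPF_CALL) &&
        insn->src_reg == BPF_PSEUDO_CALL) {
            int callee = find_subprog(env, i + insn->imm + 1);

            cgraph_add_edge(env, cur_subprog, callee);
    }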
As we have detected loops and unreachable insns based on domination
information and the call graph, there is no need for check_cfg.
This patch removes check_cfg and its associated push_insn.
State prune heuristic marking is moved to check_subprog.
Signed-off-by: Jiong Wang
---
kerne
is built but not tested.
Signed-off-by: Jiong Wang
---
include/linux/bpf_verifier.h | 3 +
kernel/bpf/cfg.c | 386 ++-
kernel/bpf/cfg.h | 3 +-
kernel/bpf/verifier.c| 5 +-
4 files changed, 393 insertions(+), 4
This patch adds edges between basic blocks. Edges for both predecessors and
successors are added.
Signed-off-by: Jiong Wang
---
kernel/bpf/cfg.c | 129 +-
kernel/bpf/cfg.h | 1 +
kernel/bpf/verifier.c | 3 ++
3 files changed, 131
From benchmarks like test_xdp_noinline, this patch reduces peak memory usage
of the new cfg infrastructure by more than 50%.
Signed-off-by: Jiong Wang
---
include/linux/bpf_verifier.h | 7 +-
kernel/bpf/cfg.c | 503 ---
kernel/bpf/cfg.h
Do unreachable basic block detection as a side-product of the DFS walk
when building domination information.
Signed-off-by: Jiong Wang
---
kernel/bpf/cfg.c | 31 ++-
kernel/bpf/cfg.h | 3 ++-
kernel/bpf/verifier.c | 3 ++-
3 files changed, 30 insertions
If one bb dominates its predecessor, then there is a loop.
Signed-off-by: Jiong Wang
---
kernel/bpf/cfg.c | 22 ++
kernel/bpf/cfg.h | 1 +
kernel/bpf/verifier.c | 8
3 files changed, 31 insertions(+)
diff --git a/kernel/bpf/cfg.c b/kernel/bpf/cfg.c
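A sketch of that check (types and helper names are hypothetical): an edge
whose destination dominates its source is a back edge, i.e. the CFG
contains a loop.

    static bool bb_has_back_edge(struct bb_node *bb)
    {
            struct edge_node *e;

            list_for_each_entry(e, &bb->e_prevs, l)
                    if (bb_dominates(bb, e->src)) /* bb dominates a predecessor */
                            return true;

            return false;
    }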
ation of estimated size (aligned to 2K). The pool
will grow later if space is not enough.
- There is no support for returning memory back to the pool.
Signed-off-by: Jiong Wang
---
kernel/bpf/cfg.c | 164 +-
kernel/bpf/cfg.h
On 07/05/2018 11:22, Jiong Wang wrote:
execution time
==============
test_l4lb_noinline:
  existing check_subprog/check_cfg: ~55000 ns
  new infrastructure:               ~135000 ns
test_xdp_noinline:
  existing check_subprog/check_cfg: ~52000 ns
  new infrastructure:               ~12 ns
Intel(R) Xeon(R) CPU E5-2630 v4