On Fri, Jan 12, 2018 at 06:59:52PM -0800, Alexei Starovoitov wrote:
> due to some JITs doing an if (src_reg == 0) check in 64-bit mode
> for div/mod operations, mask the upper 32 bits of the src register
> before doing the check
> 
> Fixes: 622582786c9e ("net: filter: x86: internal BPF JIT")
> Fixes: 7a12b5031c6b ("sparc64: Add eBPF JIT.")
> Reported-by: syzbot+48340bb518e88849e...@syzkaller.appspotmail.com
> Signed-off-by: Alexei Starovoitov <a...@kernel.org>
> ---
> arm64 jit seems to be ok
> haven't analyzed other JITs

s390 looks ok
mips64 looks buggy
arm32 ebpf jit doesn't have an if (src == 0) check
powerpc looks ok
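
For the archive, a minimal user-space illustration (mine, not part of the patch)
of the failure mode the mask papers over: a 32-bit BPF_ALU div/mod divides by
the low 32 bits of src_reg, but a JIT that tests the whole 64-bit register can
be fooled by stale upper bits:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t src = 0x100000000ULL;	/* low 32 bits zero, upper bits stale */

	/* what a buggy JIT checks vs. what the 32-bit divide actually uses */
	printf("64-bit check sees zero divisor: %d\n", src == 0);           /* 0 -> div runs */
	printf("real 32-bit divisor is zero:    %d\n", (uint32_t)src == 0); /* 1 -> trap */

	src = (uint32_t)src;	/* what the inserted BPF_MOV32_REG(src, src) does */
	printf("after mask, check sees zero:    %d\n", src == 0);           /* 1 -> caught */
	return 0;
}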

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 20eb04fd155e..b7448347e6b6 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -4445,6 +4445,24 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
>       int i, cnt, delta = 0;
>  
>       for (i = 0; i < insn_cnt; i++, insn++) {
> +             if (insn->code == (BPF_ALU | BPF_MOD | BPF_X) ||
> +                 insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
> +                     /* due to JIT bugs clear upper 32-bits of src register
> +                      * before div/mod operation
> +                      */
> +                     insn_buf[0] = BPF_MOV32_REG(insn->src_reg, insn->src_reg);
> +                     insn_buf[1] = *insn;
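
The hunk is cut off above; presumably it continues with the usual
fixup_bpf_calls() patching sequence, roughly (my sketch, not the quoted patch):

			cnt = 2;
			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
			if (!new_prog)
				return -ENOMEM;

			delta    += cnt - 1;
			env->prog = prog = new_prog;
			insn      = new_prog->insnsi + i + delta;
			continue;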

Long term, such a mask allows the verifier to insert
'if (src_reg == 0) goto error'
itself and remove the corresponding branches from the JITs.
Without the mask such a comparison is not possible, because eBPF doesn't
have 32-bit compare-and-jump instructions.
Furthermore, the verifier tracks the values in the registers and in many
cases knows that src_reg cannot be zero, so insertion of the
'if (src_reg == 0)' safety check can be conditional.
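
E.g. the verifier could then patch a div/mod into something like this
(sketch only; 'error' here means returning 0 from the program, which is
what the interpreter does on division by zero):

		struct bpf_insn chk[] = {
			/* src_reg != 0 -> skip the error path */
			BPF_JMP_IMM(BPF_JNE, insn->src_reg, 0, 2),
			BPF_MOV64_IMM(BPF_REG_0, 0),
			BPF_EXIT_INSN(),
			*insn,	/* the original div/mod */
		};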

> diff --git a/net/core/filter.c b/net/core/filter.c
> index d339ef170df6..1c0eb436671f 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -458,6 +458,10 @@ static int bpf_convert_filter(struct sock_filter *prog, int len,
>                           convert_bpf_extensions(fp, &insn))
>                               break;
>  
> +                     if (fp->code == (BPF_ALU | BPF_DIV | BPF_X) ||
> +                         fp->code == (BPF_ALU | BPF_MOD | BPF_X))
> +                             *insn++ = BPF_MOV32_REG(BPF_REG_X, BPF_REG_X);
> +
>                       *insn = BPF_RAW_INSN(fp->code, BPF_REG_A, BPF_REG_X, 0, fp->k);

This hunk is not strictly necessary: in the classic->extended conversion all
operations are 32-bit, so BPF_REG_X will already have its upper 32 bits cleared
before a div/mod and buggy JITs will be fine. But div/mod are slow anyway and
the extra BPF_MOV32_REG won't hurt performance, so I prefer to keep this hunk
and have fewer things to worry about.
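
For illustration (my sketch, not part of the patch): after this hunk a classic
'A /= X' converts to a pair of eBPF insns, roughly, using the BPF_REG_A/BPF_REG_X
aliases from net/core/filter.c:

	struct bpf_insn out[] = {
		/* new: zero-extend X; redundant here since X's upper 32 bits are already 0 */
		BPF_MOV32_REG(BPF_REG_X, BPF_REG_X),
		/* the original 32-bit A /= X */
		BPF_RAW_INSN(BPF_ALU | BPF_DIV | BPF_X, BPF_REG_A, BPF_REG_X, 0, 0),
	};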
