On Sat, 05 Oct 2019 12:29:14 +0200, Toke Høiland-Jørgensen wrote:
> >> +static int bpf_inject_chain_calls(struct bpf_verifier_env *env)
> >> +{
> >> +  struct bpf_prog *prog = env->prog;
> >> +  struct bpf_insn *insn = prog->insnsi;
> >> +  int i, cnt, delta = 0, ret = -ENOMEM;
> >> +  const int insn_cnt = prog->len;
> >> +  struct bpf_array *prog_array;
> >> +  struct bpf_prog *new_prog;
> >> +  size_t array_size;
> >> +
> >> +  struct bpf_insn call_next[] = {
> >> +          BPF_LD_IMM64(BPF_REG_2, 0),
> >> +          /* Save real return value for later */
> >> +          BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
> >> +          /* First try tail call with index ret+1 */
> >> +          BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),  
> >
> > Don't we need to check against the max here, and add the
> > spectre-proofing?  
> 
> No, I don't think so. This is just setting up the arguments for the
> BPF_TAIL_CALL instruction below. The JIT will do its thing with that
> and emit the range check and the retpoline mitigation...

Sorry, wrong CPU bug, I meant Meltdown :)

https://elixir.bootlin.com/linux/v5.4-rc1/source/kernel/bpf/verifier.c#L9029

> >> +          BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 1),
> >> +          BPF_RAW_INSN(BPF_JMP | BPF_TAIL_CALL, 0, 0, 0, 0),
> >> +          /* If that doesn't work, try with index 0 (wildcard) */
> >> +          BPF_MOV64_IMM(BPF_REG_3, 0),
> >> +          BPF_RAW_INSN(BPF_JMP | BPF_TAIL_CALL, 0, 0, 0, 0),
> >> +          /* Restore saved return value and exit */
> >> +          BPF_MOV64_REG(BPF_REG_0, BPF_REG_6),
> >> +          BPF_EXIT_INSN()
> >> +  };  
