On Fri, Jan 25, 2019 at 01:18:04AM +0100, Jann Horn wrote:
> On Fri, Jan 25, 2019 at 12:59 AM Alexei Starovoitov
> <alexei.starovoi...@gmail.com> wrote:
> > On Thu, Jan 24, 2019 at 07:01:09PM +0100, Peter Zijlstra wrote:
> > > Thanks for having kernel/locking people on Cc...
> > >
> > > On Wed, Jan 23, 2019 at 08:13:55PM -0800, Alexei Starovoitov wrote:
> > >
> > > > Implementation details:
> > > > - on !SMP bpf_spin_lock() becomes nop
> > >
> > > Because no BPF program is preemptible? I don't see any assertions or
> > > even a comment that says this code is non-preemptible.
> > >
> > > AFAICT some of the BPF_PROG_RUN call sites are under rcu_read_lock() only,
> > > which is not sufficient.
> >
> > nope. all bpf prog types disable preemption. That is a must-have for all
> > sorts of things to work properly.
> > If there is a prog type that does rcu_read_lock only, it's a serious bug.
> > About a year or so ago we audited everything specifically to make
> > sure everything disables preemption before calling bpf progs.
> > I'm pretty sure nothing crept in in the meantime.
> 
> Hmm? What about
> unix_dgram_sendmsg->sk_filter->sk_filter_trim_cap->bpf_prog_run_save_cb->BPF_PROG_RUN?
> That just holds rcu_read_lock(), as far as I can tell...

Looking into it.
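For reference, the difference being pointed at looks roughly like the sketch
below. This is simplified from memory, not the actual kernel source; the
function names run_prog_typical/run_prog_sk_filter and the omitted error and
trim handling are illustrative only.

    #include <linux/filter.h>   /* BPF_PROG_RUN(), bpf_prog_run_save_cb(), struct sk_filter */
    #include <net/sock.h>       /* struct sock, sk->sk_filter */

    /* Most attach points run the program with preemption disabled, so
     * smp_processor_id() and per-cpu map slots stay stable for the
     * duration of the program. */
    static unsigned int run_prog_typical(const struct bpf_prog *prog, void *ctx)
    {
            unsigned int ret;

            preempt_disable();
            rcu_read_lock();
            ret = BPF_PROG_RUN(prog, ctx);
            rcu_read_unlock();
            preempt_enable();
            return ret;
    }

    /* The sendmsg-side socket filter path (roughly what
     * sk_filter_trim_cap() does) only holds rcu_read_lock(); on a
     * CONFIG_PREEMPT kernel the sending task can be preempted and
     * migrated to another CPU while the program is running. */
    static unsigned int run_prog_sk_filter(struct sock *sk, struct sk_buff *skb)
    {
            struct sk_filter *filter;
            unsigned int ret = 0;

            rcu_read_lock();
            filter = rcu_dereference(sk->sk_filter);
            if (filter)
                    ret = bpf_prog_run_save_cb(filter->prog, skb);
            rcu_read_unlock();
            return ret;
    }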
First reaction: per-cpu maps and bpf_get_smp_processor_id/numa_id
will return bogus values for sender-attached socket filters
on CONFIG_PREEMPT kernels. The receive side runs in bh context,
where preemption is already disabled.
Not a security issue, but something we have to fix.
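
To make the per-cpu hazard concrete, here is a hypothetical socket filter;
the map name, program name and logic are made up for illustration and are
not from this patch set.

    #include <linux/bpf.h>
    #include "bpf_helpers.h"    /* SEC(), struct bpf_map_def, helper prototypes */

    /* One u64 counter per CPU. */
    struct bpf_map_def SEC("maps") percpu_counts = {
            .type        = BPF_MAP_TYPE_PERCPU_ARRAY,
            .key_size    = sizeof(__u32),
            .value_size  = sizeof(__u64),
            .max_entries = 1,
    };

    SEC("socket")
    int count_pkts(struct __sk_buff *skb)
    {
            __u32 key = 0;
            /* CPU the program happens to be on right now; with preemption
             * enabled on the send path this can be stale immediately. */
            __u32 cpu = bpf_get_smp_processor_id();
            /* Pointer into the current CPU's slot of the per-cpu array. */
            __u64 *val = bpf_map_lookup_elem(&percpu_counts, &key);

            /* If the sender is preempted and migrated here, the increment
             * lands in the old CPU's slot, races with programs running
             * there, and no longer matches "cpu" above. */
            if (val)
                    (*val)++;
            return skb->len;
    }

With only rcu_read_lock() on the send path, nothing stops the scheduler from
moving the task between the two helper calls above.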
