On Wed, Jun 21, 2017 at 12:51 PM, Lawrence Brakmo <bra...@fb.com> wrote:
>
> On 6/20/17, 2:25 PM, "Craig Gallek" <kraigatg...@gmail.com> wrote:
>
>     On Mon, Jun 19, 2017 at 11:00 PM, Lawrence Brakmo <bra...@fb.com> wrote:
>     > Added support for calling a subset of socket setsockopts from
>     > BPF_PROG_TYPE_SOCK_OPS programs. The code was duplicated rather
>     > than making the changes to call the socket setsockopt function because
>     > the changes required would have been larger.
>     >
>     > @@ -2671,6 +2672,69 @@ static const struct bpf_func_proto bpf_get_socket_uid_proto = {
>     >         .arg1_type = ARG_PTR_TO_CTX,
>     > };
>     >
>     > +BPF_CALL_5(bpf_setsockopt, struct bpf_sock_ops_kern *, bpf_sock,
>     > +           int, level, int, optname, char *, optval, int, optlen)
>     > +{
>     > +        struct sock *sk = bpf_sock->sk;
>     > +        int ret = 0;
>     > +        int val;
>     > +
>     > +        if (bpf_sock->is_req_sock)
>     > +                return -EINVAL;
>     > +
>     > +        if (level == SOL_SOCKET) {
>     > +                /* Only some socketops are supported */
>     > +                val = *((int *)optval);
>     > +
>     > +                switch (optname) {
>     > +                case SO_RCVBUF:
>     > +                        sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
>     > +                        sk->sk_rcvbuf = max_t(int, val * 2, SOCK_MIN_RCVBUF);
>     > +                        break;
>     > +                case SO_SNDBUF:
>     > +                        sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
>     > +                        sk->sk_sndbuf = max_t(int, val * 2, SOCK_MIN_SNDBUF);
>     > +                        break;
>     > +                case SO_MAX_PACING_RATE:
>     > +                        sk->sk_max_pacing_rate = val;
>     > +                        sk->sk_pacing_rate = min(sk->sk_pacing_rate,
>     > +                                                 sk->sk_max_pacing_rate);
>     > +                        break;
>     > +                case SO_PRIORITY:
>     > +                        sk->sk_priority = val;
>     > +                        break;
>     > +                case SO_RCVLOWAT:
>     > +                        if (val < 0)
>     > +                                val = INT_MAX;
>     > +                        sk->sk_rcvlowat = val ? : 1;
>     > +                        break;
>     > +                case SO_MARK:
>     > +                        sk->sk_mark = val;
>     > +                        break;
>
>     Isn't the socket lock required when manipulating these fields?  It's
>     not obvious that the lock is held from every bpf hook point that could
>     trigger this function...
>
> The sock_ops BPF programs are being called from within the network
> stack and my understanding is that lock has already been taken.
> Currently they are only called:
> (1) after a packet is received, where there is the call to
>     bh_lock_sock_nested() in tcp_v4_rcv() before calling
>     tcp_v4_do_rcv().
> (2) in tcp_connect(), where there should be no issue

Someone who understands the TCP stack better than I should verify this,
but even if it's OK to do in these specific spots, it's not unreasonable
to believe that someone will add another socket-context bpf hook in the
future where it would not be safe.  Without some additional check to
prevent this setsockopt function from being called in those spots, we
could run into trouble.  The only other socket-context point currently
is the cgroup one, which happens during socket creation and should also
be safe.
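To make that concrete: one option (an untested sketch on top of your
patch, with the option handling elided) would be to assert the lock
inside the helper itself, so that a future hook point that runs
programs without the socket lock trips lockdep instead of silently
racing on these fields:

/*
 * Sketch only (untested): the switch on level/optname would stay
 * exactly as in the patch above.
 */
BPF_CALL_5(bpf_setsockopt, struct bpf_sock_ops_kern *, bpf_sock,
           int, level, int, optname, char *, optval, int, optlen)
{
        struct sock *sk = bpf_sock->sk;

        if (bpf_sock->is_req_sock)
                return -EINVAL;

        /* Lockdep-only assertion: warns if the caller holds neither the
         * owner lock (lock_sock()) nor the bh spinlock (bh_lock_sock())
         * when a program calls this helper.
         */
        sock_owned_by_me(sk);

        /* ... per-option handling exactly as in the patch ... */
        return 0;
}

That keeps the check next to the code that needs the lock rather than
at each call site, but it is still only a debug-build guarantee.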
> Just in case I added a check “sock_owned_by_me(sk)” in tcp_call_bpf()
> Do you think this is enough, or should I explicitly add a bh_lock_sock_nested
> in the bpf_setsockopt function?

Adding the check is certainly a way to test the behavior as implemented,
but this bpf function could be called by any socket-context bpf program
(not just the tcp_call_bpf ones).  I believe the current bpf hook points
only guarantee the RCU read-side lock.  Adding an additional lock
guarantee may have undesirable performance implications: if this is just
for socket creation or other rare events it's probably not a big deal,
but if it's for a hook in the fast path it's probably a non-starter.

I guess the higher-level question is: what should the locking guarantees
for socket-context bpf programs be?
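For reference (quoting roughly from memory of include/net/sock.h, so
please double-check against the tree), the check you added is a
lockdep-only assertion rather than a lock:

static inline void sock_owned_by_me(const struct sock *sk)
{
#ifdef CONFIG_LOCKDEP
        /* lockdep_sock_is_held() passes if lockdep sees either
         * sk->sk_lock (owned via lock_sock()) or sk->sk_lock.slock
         * (bh_lock_sock()) held.
         */
        WARN_ON_ONCE(!lockdep_sock_is_held(sk) && debug_locks);
#endif
}

So it documents and test-verifies the "caller must hold the socket
lock" contract at no cost in production builds, but it doesn't by
itself make an unlocked call site safe, which is why I think we need
to agree on what that contract should be first.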