On 6/28/17 10:31 AM, Lawrence Brakmo wrote:
+#ifdef CONFIG_BPF
+static inline int tcp_call_bpf(struct sock *sk, bool is_req_sock, int op)
+{
+       struct bpf_sock_ops_kern sock_ops;
+       int ret;
+
+       if (!is_req_sock)
+               sock_owned_by_me(sk);
+
+       memset(&sock_ops, 0, sizeof(sock_ops));
+       sock_ops.sk = sk;
+       sock_ops.is_req_sock = is_req_sock;
+       sock_ops.op = op;
+
+       ret = BPF_CGROUP_RUN_PROG_SOCK_OPS(&sock_ops);
+       if (ret == 0)
+               ret = sock_ops.reply;
+       else
+               ret = -1;
+       return ret;
+}
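For readers skimming the quoted hunk: the helper's return convention is that a program result of 0 means "use sock_ops.reply as the answer", and anything nonzero collapses to -1. A minimal userspace sketch of that dispatch logic, with stand-in names (struct sock_ops_model, run_prog_model) rather than the real kernel definitions:

```c
#include <string.h>

/* Stand-in for struct bpf_sock_ops_kern: only the fields the
 * dispatch logic in tcp_call_bpf() actually touches. */
struct sock_ops_model {
	int op;
	int reply;
};

/* Stand-in for BPF_CGROUP_RUN_PROG_SOCK_OPS(): pretend the attached
 * program answers op 1 by filling in reply, and rejects other ops. */
static int run_prog_model(struct sock_ops_model *ops)
{
	if (ops->op == 1) {
		ops->reply = 42;	/* program's answer */
		return 0;		/* 0: reply field is valid */
	}
	return 1;			/* nonzero: no valid reply */
}

/* Mirrors tcp_call_bpf's tail: 0 from the program means "return
 * sock_ops.reply", any other result becomes -1. */
int call_model(int op)
{
	struct sock_ops_model ops;
	int ret;

	memset(&ops, 0, sizeof(ops));
	ops.op = op;

	ret = run_prog_model(&ops);
	if (ret == 0)
		ret = ops.reply;
	else
		ret = -1;
	return ret;
}
```

So a caller sees either the program's reply or -1, and never the program's raw nonzero status; that is what makes the helper safe to call unconditionally at TCP event points.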

The switch to cgroup-attached-only made it really nice and clean.
No global state to worry about.
I haven't looked through the minor patch details, but overall
it all looks good to me. I don't have any architectural concerns.

Acked-by: Alexei Starovoitov <a...@kernel.org>
