On 2/28/25 06:51, Jiayuan Chen wrote:
> ...
>  static void sk_psock_verdict_data_ready(struct sock *sk)
>  {
> -     struct socket *sock = sk->sk_socket;
> +     struct socket *sock;
>       const struct proto_ops *ops;
>       int copied;
>  
>       trace_sk_data_ready(sk);
>  
> +     /* We need RCU to prevent the sk_socket from being released.
> +      * Especially for Unix sockets, we are currently in the process
> +      * context and do not have RCU protection.
> +      */
> +     rcu_read_lock();
> +     sock = sk->sk_socket;
>       if (unlikely(!sock))
> -             return;
> +             goto unlock;
> +
>       ops = READ_ONCE(sock->ops);
>       if (!ops || !ops->read_skb)
> -             return;
> +             goto unlock;
> +
>       copied = ops->read_skb(sk, sk_psock_verdict_recv);
>       if (copied >= 0) {
>               struct sk_psock *psock;
>  
> -             rcu_read_lock();
>               psock = sk_psock(sk);
>               if (psock)
>                       sk_psock_data_ready(sk, psock);
> -             rcu_read_unlock();
>       }
> +unlock:
> +     rcu_read_unlock();
>  }

Hi,

Doesn't sk_psock_handle_skb() (the !ingress path) have the same `struct socket`
release race? Any plans to fix that one, too?

BTW, lockdep (CONFIG_LOCKDEP=y) complains about calling AF_UNIX's
read_skb() under the RCU read lock, presumably because the AF_UNIX
receive path takes a sleeping lock.
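Roughly the chain lockdep would flag with this patch applied (kernel-style
pseudocode from memory, not exact source):

```
rcu_read_lock();                           /* added by this patch */
ops->read_skb(sk, sk_psock_verdict_recv)
  -> unix_read_skb()                       /* AF_UNIX */
       mutex_lock(&u->iolock);             /* may sleep: invalid under RCU */
       ...
       mutex_unlock(&u->iolock);
rcu_read_unlock();
```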

Thanks,
Michal
