On Thu, 2024-05-30 at 20:16 +0000, Mina Almasry wrote:
> @@ -2317,6 +2318,213 @@ static int tcp_inq_hint(struct sock *sk)
>       return inq;
>  }
>  
> +/* batch __xa_alloc() calls and reduce xa_lock()/xa_unlock() overhead. */
> +struct tcp_xa_pool {
> +     u8              max; /* max <= MAX_SKB_FRAGS */
> +     u8              idx; /* idx <= max */
> +     __u32           tokens[MAX_SKB_FRAGS];
> +     netmem_ref      netmems[MAX_SKB_FRAGS];
> +};
> +
> +static void tcp_xa_pool_commit(struct sock *sk, struct tcp_xa_pool *p,
> +                            bool lock)
> +{
> +     int i;
> +
> +     if (!p->max)
> +             return;
> +     if (lock)
> +             xa_lock_bh(&sk->sk_user_frags);

The conditional lock here confuses sparse.

I think you can avoid it by providing an unlocked version (no need to check
for '!p->max' there; the only caller wanting the unlocked version already
performs such a check) and a locked one calling the other.
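Something along these lines (rough, untested sketch; the helper name
tcp_xa_pool_commit_locked is just a placeholder, and the commit loop body
from your patch is elided):

	static void tcp_xa_pool_commit_locked(struct sock *sk,
					      struct tcp_xa_pool *p)
	{
		/* ... the existing commit loop from your patch,
		 * run with sk_user_frags already locked ...
		 */
	}

	static void tcp_xa_pool_commit(struct sock *sk, struct tcp_xa_pool *p)
	{
		if (!p->max)
			return;

		xa_lock_bh(&sk->sk_user_frags);
		tcp_xa_pool_commit_locked(sk, p);
		xa_unlock_bh(&sk->sk_user_frags);
	}

That way the locking is unconditional in each function and sparse should be
happy.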

Cheers,

Paolo
