On Fri, 2016-10-21 at 13:55 +0200, Paolo Abeni wrote:
> Avoid using the generic helpers.
> Use the receive queue spin lock to protect the memory
> accounting operation, both on enqueue and on dequeue.
> 
> On dequeue perform partial memory reclaiming, trying to
> leave a quantum of forward allocated memory.
> 
> On enqueue use a custom helper, to allow some optimizations:
> - use a plain spin_lock() variant instead of the slightly
>   costly spin_lock_irqsave(),
> - avoid dst_force check, since the calling code has already
>   dropped the skb dst
> - avoid orphaning the skb, since skb_steal_sock() already did
>   the work for us
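
For context, an enqueue helper with those three properties could look roughly
like this. This is only a sketch based on the description above, not the actual
patch code: the helper name is made up, the always-allow-one-packet corner case
is skipped, and it assumes the __sk_mem_raise_allocated() counterpart of the
__sk_mem_reduce_allocated() helper used below:

static int udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
{
        struct sk_buff_head *list = &sk->sk_receive_queue;
        int amt, delta, err = -ENOMEM;
        int size = skb->truesize;

        /* lockless rcvbuf check on the rmem counter */
        if (atomic_add_return(size, &sk->sk_rmem_alloc) > sk->sk_rcvbuf)
                goto uncharge_drop;

        /* the UDP receive path runs in BH context only, so the plain
         * spin_lock() variant is enough, no irqsave needed
         */
        spin_lock(&list->lock);
        if (size >= sk->sk_forward_alloc) {
                amt = sk_mem_pages(size);
                delta = amt << SK_MEM_QUANTUM_SHIFT;
                if (!__sk_mem_raise_allocated(sk, delta, amt, SK_MEM_RECV)) {
                        err = -ENOBUFS;
                        spin_unlock(&list->lock);
                        goto uncharge_drop;
                }
                sk->sk_forward_alloc += delta;
        }
        sk->sk_forward_alloc -= size;

        /* skb_steal_sock() already orphaned the skb and the caller
         * dropped the dst, so no skb_orphan()/dst_force() here: just
         * take ownership and hook the destructor
         */
        skb->sk = sk;
        skb->destructor = udp_rmem_free;
        __skb_queue_tail(list, skb);
        spin_unlock(&list->lock);

        if (!sock_flag(sk, SOCK_DEAD))
                sk->sk_data_ready(sk);
        return 0;

uncharge_drop:
        atomic_sub(size, &sk->sk_rmem_alloc);
        return err;
}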

>  
> +static void udp_rmem_release(struct sock *sk, int size, int partial)
> +{
> +        int amt;
> +
> +        atomic_sub(size, &sk->sk_rmem_alloc);
> +
> +        spin_lock_bh(&sk->sk_receive_queue.lock);
> +        sk->sk_forward_alloc += size;
> +        amt = (sk->sk_forward_alloc - partial) & ~(SK_MEM_QUANTUM - 1);
> +        sk->sk_forward_alloc -= amt;
> +        spin_unlock_bh(&sk->sk_receive_queue.lock);
> +
> +        if (amt)
> +                __sk_mem_reduce_allocated(sk, amt >> SK_MEM_QUANTUM_SHIFT);
> +}
> +
> +static void udp_rmem_free(struct sk_buff *skb)
> +{
> +        udp_rmem_release(skb->sk, skb->truesize, 1);
> +}
> +
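A quick worked example of that partial reclaim, assuming SK_MEM_QUANTUM ==
PAGE_SIZE == 4096: if sk_forward_alloc reaches 8192 after the add and partial
is 1, then amt = (8192 - 1) & ~4095 = 4096, so one quantum is returned to the
global accounting while one full quantum stays forward allocated for the next
packet; with partial == 0 we would get amt = 8192 and reclaim everything.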


It looks like you are acquiring/releasing sk_receive_queue.lock twice
per packet in recvmsg(): once in __skb_recv_datagram() when the skb is
unlinked, and a second time in the destructor above.

We could do slightly better if we do not set skb->destructor at all, and
instead manage the sk_rmem_alloc/sk_forward_alloc changes at the time we
dequeue the skb (if !MSG_PEEK), before copying to user space.
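
A rough sketch of that dequeue path, with a hypothetical helper name, the
partial reclaim of udp_rmem_release() folded under the lock we already hold,
and the MSG_PEEK reference counting omitted for brevity:

static struct sk_buff *udp_dequeue_skb(struct sock *sk, unsigned int flags)
{
        struct sk_buff_head *queue = &sk->sk_receive_queue;
        struct sk_buff *skb;
        int amt = 0;

        spin_lock_bh(&queue->lock);
        skb = skb_peek(queue);
        if (skb && !(flags & MSG_PEEK)) {
                __skb_unlink(skb, queue);

                /* no destructor: do the rmem/forward_alloc accounting
                 * here, while the queue lock is already held, so it is
                 * taken only once per packet
                 */
                atomic_sub(skb->truesize, &sk->sk_rmem_alloc);
                sk->sk_forward_alloc += skb->truesize;

                /* partial reclaim, leaving up to one quantum forward
                 * allocated, as udp_rmem_release(sk, size, 1) does
                 */
                amt = (sk->sk_forward_alloc - 1) & ~(SK_MEM_QUANTUM - 1);
                sk->sk_forward_alloc -= amt;
        }
        spin_unlock_bh(&queue->lock);

        if (amt)
                __sk_mem_reduce_allocated(sk, amt >> SK_MEM_QUANTUM_SHIFT);

        return skb;
}

The dequeued skb then carries no destructor, so the caller can free it with a
plain consume_skb() after the copy to user space.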