On Fri, Dec 26, 2025 at 11:22:38PM +0100, Francesco Valla wrote:
> On Fri, Dec 26, 2025 at 09:52:48PM +0100, Matias Ezequiel Vara Larsen wrote:
> > > > > > +static int virtio_can_read_tx_queue(struct virtqueue *vq)
> > > > > > +{
> > > > > > +   struct virtio_can_priv *can_priv = vq->vdev->priv;
> > > > > > +   struct net_device *dev = can_priv->dev;
> > > > > > +   struct virtio_can_tx *can_tx_msg;
> > > > > > +   struct net_device_stats *stats;
> > > > > > +   unsigned long flags;
> > > > > > +   unsigned int len;
> > > > > > +   u8 result;
> > > > > > +
> > > > > > +   stats = &dev->stats;
> > > > > > +
> > > > > > +   /* Protect list and virtio queue operations */
> > > > > > +   spin_lock_irqsave(&can_priv->tx_lock, flags);
> > > > > 
> > > > > The section below seems a pretty big one to protect behind a spin 
> > > > > lock. 
> > > > > 
> > > > 
> > > > How can I split it? 
> > > > 
> > > 
> > > Question here is: what needs to be protected? As far as I can tell, the
> > > only entity needing some kind of locking here is the queue, while both
> > > ida_* and tx_inflight operations are already covered (the former by
> > > design [1], the latter because it's implemented using an atomic).
> > > 
> > > If I'm not wrong (but I might be, so please double check) this can be
> > > limited to:
> > > 
> > >   /* Protect queue operations */
> > >   scoped_guard(spinlock_irqsave, &priv->tx_lock)
> > >           err = virtqueue_add_sgs(vq, sgs, 1u, 1u, can_tx_msg, GFP_ATOMIC);
> > > 
> > > 
> > > Maybe the whole locking pattern is a leftover from a previous version, 
> > > where a list of TX messages was kept?
> > > 
> > 
> > I followed this approach for the three queues. I wonder why the rx
> > queue and the ctrl queue use a mutex instead of a spinlock? I added a
> > mutex for the operations on the rx queue.
> 
> If I interpreted correctly (but maybe Harald can shine some light on this):
> 

Thanks for the explanation.

> - the virtio_can_send_ctrl_msg() uses a mutex because it could be
>   executed in parallel by different actors (read: userspace processes
>   invoking up and down operations); moreover, it is the whole control
>   operation that needs to be protected, not only the access to the
>   virtqueue_ functions, so a mutex makes sense;


In the code, a comment says that this lock may not be required. The lock
is used in virtio_can_send_ctrl_msg(), which is called only from
virtio_can_start() and virtio_can_stop(). However, the sending of control
messages is synchronous, i.e., the driver first sends and then waits for
a response, blocking access to the control queue for the whole exchange.
These semantics require a mutex. I do not know whether access to the
control queue could be implemented differently, so that the mutex would
not have to be held during the whole exchange.


> - the rx queue shouldn't actually need any locking, since it is accessed
>   only by the poll function and the network framework should guarantee
>   that no multiple parallel poll operations are called on a napi;
> - the tx queue needs instead to be protected, since it could
>   concurrently be accessed by both the start_xmit callback and the poll
>   function; a spinlock there makes more sense, as accesses should be
>   very short.
> 
> > 
> > Matias
> > 
> > 
> 
> Thank you
> 
> Regards,
> Francesco
> 

