On Fri, Dec 26, 2025 at 09:52:48PM +0100, Matias Ezequiel Vara Larsen wrote:
> > > > > +static int virtio_can_read_tx_queue(struct virtqueue *vq)
> > > > > +{
> > > > > +	struct virtio_can_priv *can_priv = vq->vdev->priv;
> > > > > +	struct net_device *dev = can_priv->dev;
> > > > > +	struct virtio_can_tx *can_tx_msg;
> > > > > +	struct net_device_stats *stats;
> > > > > +	unsigned long flags;
> > > > > +	unsigned int len;
> > > > > +	u8 result;
> > > > > +
> > > > > +	stats = &dev->stats;
> > > > > +
> > > > > +	/* Protect list and virtio queue operations */
> > > > > +	spin_lock_irqsave(&can_priv->tx_lock, flags);
> > > >
> > > > The section below seems a pretty big one to protect behind a spin lock.
> > > >
> > >
> > > How can I split it?
> > >
> >
> > Question here is: what needs to be protected? As far as I can tell, the
> > only entity needing some kind of locking here is the queue, while both
> > the ida_* and tx_inflight operations are already covered (the former by
> > design [1], the latter because it's implemented using an atomic).
> >
> > If I'm not wrong (but I might be, so please double check) this can be
> > limited to:
> >
> > /* Protect queue operations */
> > scoped_guard(spinlock_irqsave, &priv->tx_lock)
> > 	err = virtqueue_add_sgs(vq, sgs, 1u, 1u, can_tx_msg,
> > 				GFP_ATOMIC);
> >
> >
> > Maybe the whole locking pattern is a leftover from a previous version,
> > where a list of TX messages was kept?
> >
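To expand on my earlier suggestion quoted above: the completion side
could be narrowed in the same way. A rough sketch only (untested; I'm
assuming the handler pops used buffers with virtqueue_get_buf(), as the
patch context suggests):

	/* only the virtqueue access itself needs to be serialized */
	scoped_guard(spinlock_irqsave, &can_priv->tx_lock)
		can_tx_msg = virtqueue_get_buf(vq, &len);

	if (can_tx_msg == NULL)
		return 0;

	/* ida_* is already safe by design [1] and tx_inflight is an
	 * atomic, so the bookkeeping can run without the lock held */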
>
> I followed this approach for the three queues. I wonder why the rx queue
> and the ctrl queue use a mutex instead of a spinlock? I added a mutex for
> the operations on the rx queue.
If I interpreted it correctly (but maybe Harald can shed some light on this):
- virtio_can_send_ctrl_msg() uses a mutex because it could be executed
in parallel by different actors (read: userspace processes invoking up
and down operations); moreover, it is the whole control operation that
needs to be protected, not only the access to the virtqueue_* functions,
so a mutex makes sense there (see the sketch after this list);
- the rx queue shouldn't actually need any locking, since it is accessed
only by the poll function, and the networking framework guarantees that
no two poll operations run in parallel on the same napi instance;
- the tx queue, instead, does need to be protected, since it can be
accessed concurrently by both the start_xmit callback and the poll
function; a spinlock makes more sense there, as the critical sections
should be very short.
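
To illustrate the control-path pattern I have in mind, a rough sketch
(only a sketch under my assumptions: the ctrl_lock and ctrl_done names
are invented here, and the real function of course does more than this):

	/* the mutex spans the whole transaction: queue the request,
	 * kick the device and sleep until the reply arrives; that is
	 * why a sleeping lock fits here and a spinlock would not */
	mutex_lock(&can_priv->ctrl_lock);
	err = virtqueue_add_sgs(vq, sgs, 1u, 1u, can_priv, GFP_KERNEL);
	if (err == 0) {
		virtqueue_kick(vq);
		wait_for_completion(&can_priv->ctrl_done);
	}
	mutex_unlock(&can_priv->ctrl_lock);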
>
> Matias
>
>
Thank you
Regards,
Francesco