On Wed, Mar 05, 2025 at 06:21:18PM -0800, Jakub Kicinski wrote:
> On Wed, 5 Mar 2025 17:42:35 -0800 Joe Damato wrote:
> > Two spots that come to mind are:
> > - in virtnet_probe where all the other netdev ops are plumbed
> >   through, or
> > - above virtnet_disable_queue_pair which I assume a future queue
> >   API implementor would need to call for ndo_queue_stop
>
> I'd put it next to some call which will have to be inspected.
> Normally we change napi_disable() to napi_disable_locked()
> for drivers using the instance lock, so maybe on the napi_disable()
> line in the refill?
Sure, that seems reasonable to me. Does the comment below capture it? I
tried to distill what you said in your previous message (thanks for the
guidance, btw):

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index d6c8fe670005..fe5f6313d422 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2883,6 +2883,18 @@ static void refill_work(struct work_struct *work)
 	for (i = 0; i < vi->curr_queue_pairs; i++) {
 		struct receive_queue *rq = &vi->rq[i];
 
+		/*
+		 * When queue API support is added in the future and the call
+		 * below becomes napi_disable_locked, this driver will need to
+		 * be refactored.
+		 *
+		 * One possible solution would be to:
+		 * - cancel refill_work with cancel_delayed_work (note: non-sync)
+		 * - cancel refill_work with cancel_delayed_work_sync in
+		 *   virtnet_remove after the netdev is unregistered
+		 * - wrap all of the work in a lock (perhaps vi->refill_lock?)
+		 * - check netif_running() and return early to avoid a race
+		 */
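
To make the last two bullets concrete, here's a rough, untested sketch of
what the early-return guard in refill_work might look like. The use of
vi->refill_lock and the exact ordering against the non-sync cancel are
assumptions/open questions from the comment above, not a worked-out patch:

```
static void refill_work(struct work_struct *work)
{
	struct virtnet_info *vi =
		container_of(work, struct virtnet_info, refill.work);
	int i;

	/* Hypothetical sketch: bail out if we raced with close/remove
	 * so a stale work item never touches NAPI state after the
	 * queues have been disabled. Which lock to use (and whether a
	 * spinlock is even appropriate around the rest of this work)
	 * is still TBD.
	 */
	spin_lock(&vi->refill_lock);
	if (!netif_running(vi->dev)) {
		spin_unlock(&vi->refill_lock);
		return;
	}
	spin_unlock(&vi->refill_lock);

	for (i = 0; i < vi->curr_queue_pairs; i++) {
		...
	}
}
```

The idea being: cancel_delayed_work (non-sync) at close plus this check
closes the window for a late-running work item, while the
cancel_delayed_work_sync in virtnet_remove guarantees nothing runs after
final teardown.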