From: Breno Leitao <lei...@debian.org>

[ Upstream commit f8321fa75102246d7415a6af441872f6637c93ab ]

After the commit bdacf3e34945 ("net: Use nested-BH locking for
napi_alloc_cache.") was merged, the following warning began to appear:

         WARNING: CPU: 5 PID: 1 at net/core/skbuff.c:1451 napi_skb_cache_put+0x82/0x4b0

          __warn+0x12f/0x340
          napi_skb_cache_put+0x82/0x4b0
          napi_skb_cache_put+0x82/0x4b0
          report_bug+0x165/0x370
          handle_bug+0x3d/0x80
          exc_invalid_op+0x1a/0x50
          asm_exc_invalid_op+0x1a/0x20
          __free_old_xmit+0x1c8/0x510
          napi_skb_cache_put+0x82/0x4b0
          __free_old_xmit+0x1c8/0x510
          __free_old_xmit+0x1c8/0x510
          __pfx___free_old_xmit+0x10/0x10

The issue arises because virtio-net assumes it is running in NAPI context
even when it is not, such as in the netpoll case (where the poll routine
is invoked with a budget of zero).

To resolve this, modify virtnet_poll_tx() to claim NAPI context only when
budget is available. Do the same in virtnet_poll_cleantx(), which
previously always assumed it was in NAPI context.

Fixes: df133f3f9625 ("virtio_net: bulk free tx skbs")
Suggested-by: Jakub Kicinski <k...@kernel.org>
Signed-off-by: Breno Leitao <lei...@debian.org>
Reviewed-by: Jakub Kicinski <k...@kernel.org>
Acked-by: Michael S. Tsirkin <m...@redhat.com>
Acked-by: Jason Wang <jasow...@redhat.com>
Reviewed-by: Heng Qi <hen...@linux.alibaba.com>
Link: https://patch.msgid.link/20240712115325.54175-1-lei...@debian.org
Signed-off-by: Jakub Kicinski <k...@kernel.org>
Signed-off-by: Sasha Levin <sas...@kernel.org>
[Shivani: Modified to apply on v4.19.y-v5.10.y]
Signed-off-by: Shivani Agarwal <shivani.agar...@broadcom.com>
---
 drivers/net/virtio_net.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index f7ed99561..99dea89b2 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1497,7 +1497,7 @@ static bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
                return false;
 }
 
-static void virtnet_poll_cleantx(struct receive_queue *rq)
+static void virtnet_poll_cleantx(struct receive_queue *rq, int budget)
 {
        struct virtnet_info *vi = rq->vq->vdev->priv;
        unsigned int index = vq2rxq(rq->vq);
@@ -1508,7 +1508,7 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
                return;
 
        if (__netif_tx_trylock(txq)) {
-               free_old_xmit_skbs(sq, true);
+               free_old_xmit_skbs(sq, !!budget);
                __netif_tx_unlock(txq);
        }
 
@@ -1525,7 +1525,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
        unsigned int received;
        unsigned int xdp_xmit = 0;
 
-       virtnet_poll_cleantx(rq);
+       virtnet_poll_cleantx(rq, budget);
 
        received = virtnet_receive(rq, budget, &xdp_xmit);
 
@@ -1598,7 +1598,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
        txq = netdev_get_tx_queue(vi->dev, index);
        __netif_tx_lock(txq, raw_smp_processor_id());
        virtqueue_disable_cb(sq->vq);
-       free_old_xmit_skbs(sq, true);
+       free_old_xmit_skbs(sq, !!budget);
 
        opaque = virtqueue_enable_cb_prepare(sq->vq);
 
-- 
2.39.4
