On 11/25/24 17:28, David Marchand wrote:
On Mon, Nov 25, 2024 at 5:20 PM Maxime Coquelin
<maxime.coque...@redhat.com> wrote:
On 11/25/24 12:14, David Marchand wrote:
On Mon, Nov 18, 2024 at 5:24 PM Stephen Hemminger
<step...@networkplumber.org> wrote:

If a lock is acquired for write, it must be released for write,
or a deadlock is likely.

Bugzilla ID: 1582
Fixes: 9fc93a1e2320 ("vhost: fix virtqueue access check in datapath")
Cc: david.march...@redhat.com
Cc: sta...@dpdk.org

Signed-off-by: Stephen Hemminger <step...@networkplumber.org>
---
   lib/vhost/virtio_net.c | 2 +-
   1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 298a5dae74..d764d4bc6a 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -2538,7 +2538,7 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, struct vhost_virtqueue *vq,

          if (unlikely(!vq->access_ok)) {
                  vhost_user_iotlb_rd_unlock(vq);
-               rte_rwlock_read_unlock(&vq->access_lock);
+               rte_rwlock_write_unlock(&vq->access_lock);
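
To make the hazard concrete, here is a minimal, self-contained sketch (not the
DPDK datapath itself) of why a write-acquired rte_rwlock must be released with
the write variant; the broken()/fixed() function names are illustrative only,
while the rte_rwlock calls are the real <rte_rwlock.h> API:

    #include <rte_rwlock.h>

    static rte_rwlock_t lock = RTE_RWLOCK_INITIALIZER;

    static void broken(void)
    {
            rte_rwlock_write_lock(&lock);
            /* ... critical section ... */
            rte_rwlock_read_unlock(&lock);   /* wrong: leaves the lock state
                                              * inconsistent, so a later writer
                                              * (or reader) can spin forever */
    }

    static void fixed(void)
    {
            rte_rwlock_write_lock(&lock);
            /* ... critical section ... */
            rte_rwlock_write_unlock(&lock);  /* matching release */
    }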

A write lock is taken earlier because virtio_dev_rx_async_submit_*
need it to access vq->async (as opposed to the sync code, which only
takes the read lock).

Here, there is no need to release and re-take all the locks.
A simpler fix is to directly call vring_translate(dev, vq).
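
A rough sketch of what that alternative might look like at the quoted spot,
assuming the locks taken earlier are still held and that an error/exit label
exists (the label and the error handling are assumptions, not the applied patch):

    if (unlikely(!vq->access_ok)) {
            /* Translate the ring addresses in place instead of dropping
             * and re-taking all the locks; error handling is illustrative. */
            if (unlikely(vring_translate(dev, vq) < 0))
                    goto out;   /* hypothetical error label */
    }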



Ok, so both solutions are correct.

David's is more optimized, but this is a corner case in the async
datapath, so it is not really critical.

On the other hand, Stephen's solution keeps the same pattern as the
other datapaths.

Ok, it avoids having a special case, and I prefer limiting special cases too.

I'll send an RFC for new lock annotations (to catch such bugs) for
the next release.
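
For context, the kind of check such annotations enable is Clang's thread safety
analysis: with distinct exclusive/shared acquire and release attributes,
-Wthread-safety flags a write-acquired lock that is released through the
read-unlock path. A generic sketch follows; the wrapper names below are
illustrative, not the macros from the eventual RFC:

    /* Build with: clang -Wthread-safety -c example.c */
    struct __attribute__((capability("rwlock"))) my_rwlock { int v; };

    void wr_lock(struct my_rwlock *l)   __attribute__((acquire_capability(l)));
    void wr_unlock(struct my_rwlock *l) __attribute__((release_capability(l)));
    void rd_lock(struct my_rwlock *l)   __attribute__((acquire_shared_capability(l)));
    void rd_unlock(struct my_rwlock *l) __attribute__((release_shared_capability(l)));

    void buggy(struct my_rwlock *l)
    {
            wr_lock(l);
            rd_unlock(l);   /* warning: releasing 'l' using shared access,
                             * expected exclusive access */
    }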

Great, thanks for that!



I'd go with Stephen's solution, but would change the commit title to:

vhost: fix deadlock in Rx async path

With this change:

Reviewed-by: Maxime Coquelin <maxime.coque...@redhat.com>

Yes, the title needed some work.



Ack, will change while applying.

Thanks,
Maxime
