Hi,

On 3/26/25 2:04 PM, Eugenio Perez Martin wrote:
On Mon, Mar 24, 2025 at 3:34 PM Sahil Siddiq <icegambi...@gmail.com> wrote:

Hi,

I had a few more queries here as well.

On 3/24/25 7:29 PM, Sahil Siddiq wrote:
Detect when used descriptors are ready for consumption by the guest via
packed virtqueues and forward them from the device to the guest.

Signed-off-by: Sahil Siddiq <sahil...@proton.me>
---
Changes from v4 -> v5:
- New commit.
- vhost-shadow-virtqueue.c:
    (vhost_svq_more_used): Split into vhost_svq_more_used_split and
    vhost_svq_more_used_packed.
    (vhost_svq_enable_notification): Handle split and packed vqs.
    (vhost_svq_disable_notification): Likewise.
    (vhost_svq_get_buf): Split into vhost_svq_get_buf_split and
    vhost_svq_get_buf_packed.
    (vhost_svq_poll): Use new functions.

   hw/virtio/vhost-shadow-virtqueue.c | 121 ++++++++++++++++++++++++++---
   1 file changed, 110 insertions(+), 11 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 126957231d..8430b3c94a 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -463,7 +463,7 @@ static void vhost_handle_guest_kick_notifier(EventNotifier *n)
       vhost_handle_guest_kick(svq);
   }

-static bool vhost_svq_more_used(VhostShadowVirtqueue *svq)
+static bool vhost_svq_more_used_split(VhostShadowVirtqueue *svq)
   {
       uint16_t *used_idx = &svq->vring.used->idx;
       if (svq->last_used_idx != svq->shadow_used_idx) {
@@ -475,6 +475,22 @@ static bool vhost_svq_more_used(VhostShadowVirtqueue *svq)
       return svq->last_used_idx != svq->shadow_used_idx;
   }

+static bool vhost_svq_more_used_packed(VhostShadowVirtqueue *svq)
+{
+    bool avail_flag, used_flag, used_wrap_counter;
+    uint16_t last_used_idx, last_used, flags;
+
+    last_used_idx = svq->last_used_idx;
+    last_used = last_used_idx & ~(1 << VRING_PACKED_EVENT_F_WRAP_CTR);

In the Linux kernel, last_used is calculated as:

last_used_idx & ~(-(1 << VRING_PACKED_EVENT_F_WRAP_CTR))

...instead of...

last_used_idx & ~(1 << VRING_PACKED_EVENT_F_WRAP_CTR)

Isn't the second option good enough if last_used_idx is uint16_t
and VRING_PACKED_EVENT_F_WRAP_CTR is defined as 15?


I think it is good enough with the u16 restrictions but it's just
defensive code.


Got it. In that case, I think it'll be better to follow the kernel's
implementation to keep this more robust.
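
Just to convince myself, I wrote a small standalone check (not part of the
patch; it only assumes VRING_PACKED_EVENT_F_WRAP_CTR is 15, as in the virtio
headers). It shows the two maskings agree for any 16-bit index, and that the
kernel form only makes a difference if the index were ever wider than 16 bits:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define VRING_PACKED_EVENT_F_WRAP_CTR 15

int main(void)
{
    for (uint32_t i = 0; i <= UINT16_MAX; i++) {
        uint16_t last_used_idx = (uint16_t)i;

        /* Current patch: ~(1 << 15) only clears bit 15. */
        uint16_t a = last_used_idx & ~(1 << VRING_PACKED_EVENT_F_WRAP_CTR);

        /* Kernel style: -(1 << 15) sign-extends, so its complement also
         * clears any bits above 15 if the index were wider than u16. */
        uint16_t b = last_used_idx & ~(-(1 << VRING_PACKED_EVENT_F_WRAP_CTR));

        assert(a == b); /* identical for a 16-bit index */
    }
    printf("masks agree for all 16-bit values\n");
    return 0;
}

So nothing changes functionally for a u16 index; the kernel form just guards
against the index type ever growing.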

+    used_wrap_counter = !!(last_used_idx & (1 << VRING_PACKED_EVENT_F_WRAP_CTR));
+
+    flags = le16_to_cpu(svq->vring_packed.vring.desc[last_used].flags);
+    avail_flag = !!(flags & (1 << VRING_PACKED_DESC_F_AVAIL));
+    used_flag = !!(flags & (1 << VRING_PACKED_DESC_F_USED));
+
+    return avail_flag == used_flag && used_flag == used_wrap_counter;
+}
+

Also, in the implementation of vhost_svq_more_used_split() [1], I don't
understand why the following condition:

svq->last_used_idx != svq->shadow_used_idx

is checked before updating the value of "svq->shadow_used_idx":

svq->shadow_used_idx = le16_to_cpu(*(volatile uint16_t *)used_idx)


As far as I know, this is used to avoid concurrent access to the guest's
used_idx, avoiding the cache-line sharing, the memory barrier, and the
potentially costly volatile access.


By concurrent access, do you mean the case where another thread has already
updated the value of used_idx?
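
To check my understanding, here is a rough sketch of the pattern I think you
are describing (hypothetical names, not the QEMU functions, and endianness
conversion omitted): the cached index is compared first, and the shared
used->idx is only re-read through the volatile pointer when the cached
entries run out.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper; only a sketch of the caching idea, not the actual
 * vhost_svq_more_used_split() implementation. */
static bool more_used_cached(uint16_t last_used_idx,
                             uint16_t *cached_used_idx,
                             const uint16_t *shared_used_idx)
{
    if (last_used_idx != *cached_used_idx) {
        /* Entries fetched on a previous call are still pending, so the
         * guest-shared index does not need to be read again. */
        return true;
    }

    /* Cache exhausted: perform the (potentially costly) volatile read of
     * the shared index only now, keeping it off the common path. */
    *cached_used_idx = *(volatile const uint16_t *)shared_used_idx;
    return last_used_idx != *cached_used_idx;
}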

Thanks,
Sahil
