Hi,

Sorry for the late reply.
On Tuesday, August 13, 2024 12:23:55 PM GMT+5:30 Eugenio Perez Martin wrote:
> [...]
> > I think I have understood what's going on in "vhost_vdpa_svq_map_rings",
> > "vhost_vdpa_svq_map_ring" and "vhost_vdpa_dma_map". But based on
> > what I have understood, it looks like the driver area is getting mapped
> > to an iova which is read-only for vhost_vdpa. Please let me know where
> > I am going wrong.
>
> You're not going wrong there. The device does not need to write into
> this area, so we map it read only.
>
> > Consider the following implementation in hw/virtio/vhost_vdpa.c:
> >
> > > size_t device_size = vhost_svq_device_area_size(svq);
> > > size_t driver_size = vhost_svq_driver_area_size(svq);
> >
> > The driver size includes the descriptor area and the driver area. For
> > packed vq, the driver area is the "driver event suppression" structure,
> > which should be read-only for the device according to the virtio spec
> > (section 2.8.10) [1].
> >
> > > size_t avail_offset;
> > > bool ok;
> > >
> > > vhost_svq_get_vring_addr(svq, &svq_addr);
> >
> > Over here, "svq_addr.desc_user_addr" will point to the descriptor area
> > while "svq_addr.avail_user_addr" will point to the driver area/driver
> > event suppression structure.
> >
> > > driver_region = (DMAMap) {
> > >     .translated_addr = svq_addr.desc_user_addr,
> > >     .size = driver_size - 1,
> > >     .perm = IOMMU_RO,
> > > };
> >
> > This region points to the descriptor area, and its size encompasses the
> > driver area as well, with RO permission.
> >
> > > ok = vhost_vdpa_svq_map_ring(v, &driver_region, errp);
> >
> > The above function checks the value of needle->perm and sees that it is
> > RO. It then calls "vhost_vdpa_dma_map" with the following arguments:
> >
> > > r = vhost_vdpa_dma_map(v->shared, v->address_space_id, needle->iova,
> > >                        needle->size + 1,
> > >                        (void *)(uintptr_t)needle->translated_addr,
> > >                        needle->perm == IOMMU_RO);
> >
> > Since needle->size includes the driver area as well, the driver area
> > will be mapped to a RO page in the device's address space, right?
>
> Yes, the device does not need to write into the descriptor area in the
> supported split virtqueue case. So the descriptor area is also mapped
> RO at this moment.
>
> This changes in the packed virtqueue case, so we need to map it RW.

I understand this now. I'll see how the implementation can be modified to
take this into account. I'll see if mapping the descriptor ring and the
driver area separately helps; I have put a rough sketch of what I mean
below.
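This sketch is untested, and only for the packed vq case. The two-region
split (a separate "desc_region") and the size variables "desc_size" and
"driver_event_size" are placeholders of my own; the identifiers quoted
above are the only parts taken from the actual code.

    vhost_svq_get_vring_addr(svq, &svq_addr);

    /* Packed vq: the device writes used descriptors back into the
     * descriptor ring itself, so this region has to be RW. */
    desc_region = (DMAMap) {
        .translated_addr = svq_addr.desc_user_addr,
        .size = desc_size - 1,
        .perm = IOMMU_RW,
    };
    ok = vhost_vdpa_svq_map_ring(v, &desc_region, errp);
    if (unlikely(!ok)) {
        error_prepend(errp, "Cannot create vq descriptor region: ");
        return false;
    }

    /* Driver event suppression structure: the device only reads it,
     * so keep this region RO (virtio spec section 2.8.10). */
    driver_region = (DMAMap) {
        .translated_addr = svq_addr.avail_user_addr,
        .size = driver_event_size - 1,
        .perm = IOMMU_RO,
    };
    ok = vhost_vdpa_svq_map_ring(v, &driver_region, errp);
    if (unlikely(!ok)) {
        error_prepend(errp, "Cannot create vq driver region: ");
        return false;
    }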
> > > if (unlikely(!ok)) {
> > >     error_prepend(errp, "Cannot create vq driver region: ");
> > >     return false;
> > > }
> > > addr->desc_user_addr = driver_region.iova;
> > > avail_offset = svq_addr.avail_user_addr - svq_addr.desc_user_addr;
> > > addr->avail_user_addr = driver_region.iova + avail_offset;
> >
> > I think "addr->desc_user_addr" and "addr->avail_user_addr" will both be
> > mapped to a RO page in the device's address space.
> >
> > > device_region = (DMAMap) {
> > >     .translated_addr = svq_addr.used_user_addr,
> > >     .size = device_size - 1,
> > >     .perm = IOMMU_RW,
> > > };
> >
> > The device area/device event suppression structure, on the other hand,
> > will be mapped to a RW page.
> >
> > I also think there are other issues with the current state of the patch.
> > According to the virtio spec (section 2.8.10) [1], the "device event
> > suppression" structure needs to be write-only for the device but is
> > mapped to a RW page.
>
> Yes, I'm not sure if all IOMMU supports write-only maps to be honest.

Got it. I think it should be alright to defer this issue until later.

> > Another concern I have is regarding the driver area size for packed vq.
> > In "hw/virtio/vhost-shadow-virtqueue.c" of the current patch:
> >
> > > size_t vhost_svq_driver_area_size(const VhostShadowVirtqueue *svq)
> > > {
> > >     size_t desc_size = sizeof(vring_desc_t) * svq->vring.num;
> > >     size_t avail_size = offsetof(vring_avail_t, ring[svq->vring.num]) +
> > >                         sizeof(uint16_t);
> > >
> > >     return ROUND_UP(desc_size + avail_size, qemu_real_host_page_size());
> > > }
> > >
> > > [...]
> > >
> > > size_t vhost_svq_memory_packed(const VhostShadowVirtqueue *svq)
> > > {
> > >     size_t desc_size = sizeof(struct vring_packed_desc) * svq->num_free;
> > >     size_t driver_event_suppression = sizeof(struct vring_packed_desc_event);
> > >     size_t device_event_suppression = sizeof(struct vring_packed_desc_event);
> > >
> > >     return ROUND_UP(desc_size + driver_event_suppression +
> > >                     device_event_suppression,
> > >                     qemu_real_host_page_size());
> > > }
> >
> > The size returned by "vhost_svq_driver_area_size" might not be the
> > actual driver size, which is given by desc_size +
> > driver_event_suppression, right? Will this have to be changed too?
>
> Yes, you're right this needs to be changed too.

Understood. I'll modify this too. I have put a rough sketch of what I have
in mind at the end of this mail.

I have been trying to test my changes as well. There are a few things I am
not very clear on.

Q1. I built QEMU from source with my changes and followed the vdpa_sim +
vhost_vdpa tutorial [1]. The VM seems to be running fine. How do I check
if the packed format is being used instead of the split vq format for
shadow virtqueues? I know the packed format is used when virtio_vdev has
got the VIRTIO_F_RING_PACKED bit enabled. Is there a way of checking that
this is the case?

Q2. What's the recommended way to see what's going on under the hood? I
tried using the -D option so that QEMU's logs are written to a file, but
the file was empty. Would running QEMU with -monitor stdio or attaching
gdb to the QEMU VM be worthwhile?

Thanks,
Sahil

[1] https://www.redhat.com/en/blog/hands-vdpa-what-do-you-do-when-you-aint-got-hardware-part-1
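P.S. Here is the rough sketch for the driver area size change mentioned
above. It is untested, and the function names
"vhost_svq_driver_area_size_packed" and "vhost_svq_device_area_size_packed"
are placeholders I made up; only the sizeof expressions are taken from the
current patch.

size_t vhost_svq_driver_area_size_packed(const VhostShadowVirtqueue *svq)
{
    /* Driver area: descriptor ring + driver event suppression struct */
    size_t desc_size = sizeof(struct vring_packed_desc) * svq->num_free;
    size_t driver_event_suppression = sizeof(struct vring_packed_desc_event);

    return ROUND_UP(desc_size + driver_event_suppression,
                    qemu_real_host_page_size());
}

size_t vhost_svq_device_area_size_packed(const VhostShadowVirtqueue *svq)
{
    /* Device area: just the device event suppression struct */
    size_t device_event_suppression = sizeof(struct vring_packed_desc_event);

    return ROUND_UP(device_event_suppression, qemu_real_host_page_size());
}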