On Mon, Jun 9, 2025 at 2:03 PM Eugenio Perez Martin <epere...@redhat.com> wrote:
>
> On Mon, Jun 9, 2025 at 3:55 AM Jason Wang <jasow...@redhat.com> wrote:
> >
> > On Fri, Jun 6, 2025 at 7:50 PM Eugenio Pérez <epere...@redhat.com> wrote:
> > >
> > > The virtqueue group is the minimal set of virtqueues that must share an
> > > address space. And the address space identifier could only be attached
> > > to a specific virtqueue group. The virtqueue is attached to a
> > > virtqueue group for all the life of the device.
> > >
> > > During vDPA device allocation, the VDUSE device needs to report the
> > > number of virtqueue groups supported. At this moment only vhost_vdpa is
> > > able to do it.
> > >
> > > This helps to isolate the environments for the virtqueue that will not
> > > be assigned directly. E.g in the case of virtio-net, the control
> > > virtqueue will not be assigned directly to guest.
> > >
> > > As we need to back the vq groups with a struct device for the file
> > > operations, let's keep this number as low as possible at the moment: 2.
> > > We can back one VQ group with the vduse device and the other one with
> > > the vdpa device.
> > >
> > > Signed-off-by: Eugenio Pérez <epere...@redhat.com>
> > > ---
> > >  drivers/vdpa/vdpa_user/vduse_dev.c | 44 +++++++++++++++++++++++++++++-
> > >  include/uapi/linux/vduse.h         | 17 +++++++++++-
> > >  2 files changed, 59 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
> > > index 6a9a37351310..6fa687bc4912 100644
> > > --- a/drivers/vdpa/vdpa_user/vduse_dev.c
> > > +++ b/drivers/vdpa/vdpa_user/vduse_dev.c
> > > @@ -46,6 +46,11 @@
> > >  #define VDUSE_IOVA_SIZE (VDUSE_MAX_BOUNCE_SIZE + 128 * 1024 * 1024)
> > >  #define VDUSE_MSG_DEFAULT_TIMEOUT 30
> > >
> > > +/*
> > > + * Let's make it 2 for simplicity.
> > > + */
> > > +#define VDUSE_MAX_VQ_GROUPS 2
> > > +
> > >  #define IRQ_UNBOUND -1
> > >
> > >  struct vduse_virtqueue {
> > > @@ -114,6 +119,7 @@ struct vduse_dev {
> > >         u8 status;
> > >         u32 vq_num;
> > >         u32 vq_align;
> > > +       u32 ngroups;
> > >         struct vduse_umem *umem;
> > >         struct mutex mem_lock;
> > >         unsigned int bounce_size;
> > > @@ -592,6 +598,25 @@ static int vduse_vdpa_set_vq_state(struct vdpa_device *vdpa, u16 idx,
> > >         return 0;
> > >  }
> > >
> > > +static u32 vduse_get_vq_group(struct vdpa_device *vdpa, u16 idx)
> > > +{
> > > +       struct vduse_dev *dev = vdpa_to_vduse(vdpa);
> > > +       struct vduse_dev_msg msg = { 0 };
> > > +       int ret;
> > > +
> > > +       if (dev->api_version < VDUSE_API_VERSION_1)
> > > +               return 0;
> > > +
> > > +       msg.req.type = VDUSE_GET_VQ_GROUP;
> > > +       msg.req.vq_group.index = idx;
> >
> > Considering there will be a set_group_asid request, could the kernel
> > cache the result so we don't need to bother with requests from
> > userspace?
> >
>
> Yes we can, actually a previous version did it. But what's the use? It
> is not used in the dataplane, so we reduce complexity if we don't
> store it.
It helps to reduce the chance of waiting for userspace, so I think it's
safer. For example, we cache the device status as well; if needed,
userspace can update the status via ioctl().

>
> What's more, in the case of the net device, the vq number -> vq group
> association can change in a reset as the CVQ is either the last one or
> #2 if MQ is negotiated. We need to code when to reset this
> association, so complexity grows even more. And the vq group are not
> asked by QEMU after that point anyway.

Yes, we can have an array. E.g. the simulator has something like:

struct vhost_iotlb *iommu;

Thanks
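A minimal sketch of what the cached lookup could look like on the VDUSE
side (the vq_groups array and where it gets filled are assumptions for
illustration, not part of this series):

/*
 * Illustrative only: assumes a hypothetical u32 *vq_groups array in
 * struct vduse_dev (vq index -> group id, vq_num entries), filled from
 * userspace so the kernel can answer get_vq_group without a message
 * round-trip.
 */
static u32 vduse_get_vq_group(struct vdpa_device *vdpa, u16 idx)
{
        struct vduse_dev *dev = vdpa_to_vduse(vdpa);

        if (dev->api_version < VDUSE_API_VERSION_1)
                return 0;

        return dev->vq_groups[idx];
}

Such a cache would still need to be refreshed across a device reset,
e.g. when the CVQ index moves because MQ is renegotiated, which is the
extra bookkeeping mentioned above.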