On Wed, Sep 15, 2021 at 08:59:17PM -0700, Jiang Wang . wrote:
> On Tue, Sep 14, 2021 at 5:46 AM Stefan Hajnoczi <stefa...@redhat.com> wrote:
> > On Mon, Sep 13, 2021 at 10:18:43PM +0000, Jiang Wang wrote:
> > > diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c
> > > index 6095ed7349..e9ec0e1c00 100644
> > > --- a/hw/virtio/vhost-user-vsock.c
> > > +++ b/hw/virtio/vhost-user-vsock.c
> > > @@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, Error **errp)
> > >          return;
> > >      }
> > >
> > > -    vhost_vsock_common_realize(vdev, "vhost-user-vsock");
> > > +    vhost_vsock_common_realize(vdev, "vhost-user-vsock", false);
> >
> > VIRTIO_VSOCK_F_DGRAM support should work equally well for
> > vhost-user-vsock. I don't think there is a need to disable it here.
>
> Stefano mentioned in a previous email (v3) that I can disable dgram
> for vhost-user-vsock for now. I think we need some extra changes to
> fully support vhost-user-vsock, like feature discovery?
I'm not sure why enabling it on the QEMU side poses a problem. VIRTIO
feature negotiation will disable it (if the feature bit is included in
user_feature_bits[]) when the vhost-user device backend lacks support.

> > >      /* The event queue belongs to QEMU */
> > >      vvc->event_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
> > >                                       vhost_vsock_common_handle_output);
> > >
> > > -    vvc->vhost_dev.nvqs = ARRAY_SIZE(vvc->vhost_vqs);
> > > -    vvc->vhost_dev.vqs = vvc->vhost_vqs;
> > > +    vvc->vhost_dev.nvqs = nvqs;
> > > +    vvc->vhost_dev.vqs = g_new0(struct vhost_virtqueue,
> > > +                                vvc->vhost_dev.nvqs);
> >
> > IIUC the number of virtqueues needs to be the maximum supported number.
> > It's okay to have more virtqueues than needed. The feature bit is used
> > by the driver to detect the device's support for dgrams, not the number
> > of virtqueues.
>
> OK. I can just make these changes. But I am not able to test vhost-user-vsock
> as of now. I tried to follow the steps here:
> https://patchew.org/QEMU/20200515152110.202917-1-sgarz...@redhat.com/
> But the git repo for vhost-user-vsock is kind of broken. I tried to
> fix it, but I am new to Rust, so it will take some time. Is there an updated
> or easier way to test vhost-user-vsock?

Harsha or Stefano can help with this.

> > > @@ -132,13 +138,34 @@ static const VMStateDescription vmstate_virtio_vhost_vsock = {
> > >      .post_load = vhost_vsock_common_post_load,
> > > };
> > >
> > > +static bool vhost_vsock_dgram_supported(int vhostfd, Error **errp)
> > > +{
> > > +    uint64_t features;
> > > +    int ret;
> > > +
> > > +    ret = ioctl(vhostfd, VHOST_GET_FEATURES, &features);
> > > +    if (ret) {
> > > +        error_setg(errp, "vhost-vsock: failed to read device features. %s",
> > > +                   strerror(errno));
> > > +        return false;
> > > +    }
> >
> > ioctl(2) shouldn't be necessary. Let vhost detect features from the
> > device backend (vhost kernel or vhost-user) and don't worry about
>
> I did not get this part.
> What is the difference between vhost and a
> device backend? I thought vhost is the device backend? vhost kernel
> is one kind of backend and vhost-user is another kind. Could you explain
> more? Thanks.

> > conditionally adding the exact number of virtqueues.

By "Let vhost detect features" I meant let the QEMU "hw/virtio/vhost.h"
API detect the features from the backend (either vhost kernel or
vhost-user). vhost_dev_init() and the other functions already check the
device backend's features, so there should be no need to manually make
an ioctl() call from hw/vhost-vsock.c. The purpose of the QEMU vhost API
is to abstract the backend so device code like hw/vhost-vsock.c doesn't
need to know whether it is communicating with a vhost kernel or a
vhost-user device backend.