On Thu, Aug 25, 2022 at 8:38 AM Si-Wei Liu <si-wei....@oracle.com> wrote:
>
>
> On 8/23/2022 9:27 PM, Jason Wang wrote:
> >
> > On 2022/8/20 01:13, Eugenio Pérez wrote:
> >> It was returned as an error before. Instead, simply update the
> >> corresponding field so qemu can send it in the migration data.
> >>
> >> Signed-off-by: Eugenio Pérez <epere...@redhat.com>
> >> ---
> >
> >
> > Looks correct.
> >
> > Adding Si Wei for a double check.
> Hmmm, I understand why this change is needed for live migration, but
> this would easily cause userspace to fall out of sync with the kernel
> in other cases, such as link down or a userspace fallback due to a
> vdpa ioctl error.

Yes, these are edge cases. Considering the 7.2 cycle will start soon,
maybe it's time to fix the root cause instead of having a workaround
like this?

Thanks

> Not completely against it, but I wonder if there's a way we can limit
> the scope of the change to the live migration case only?
>
> -Siwei
>
> >
> > Thanks
> >
> >
> >>  hw/net/virtio-net.c | 17 ++++++-----------
> >>  1 file changed, 6 insertions(+), 11 deletions(-)
> >>
> >> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> >> index dd0d056fde..63a8332cd0 100644
> >> --- a/hw/net/virtio-net.c
> >> +++ b/hw/net/virtio-net.c
> >> @@ -1412,19 +1412,14 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> >>          return VIRTIO_NET_ERR;
> >>      }
> >>
> >> -    /* Avoid changing the number of queue_pairs for vdpa device in
> >> -     * userspace handler. A future fix is needed to handle the mq
> >> -     * change in userspace handler with vhost-vdpa. Let's disable
> >> -     * the mq handling from userspace for now and only allow get
> >> -     * done through the kernel. Ripples may be seen when falling
> >> -     * back to userspace, but without doing it qemu process would
> >> -     * crash on a recursive entry to virtio_net_set_status().
> >> -     */
> >> +    n->curr_queue_pairs = queue_pairs;
> >>      if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> >> -        return VIRTIO_NET_ERR;
> >> +        /*
> >> +         * Avoid updating the backend for a vdpa device: We're only
> >> +         * interested in updating the device model queues.
> >> +         */
> >> +        return VIRTIO_NET_OK;
> >>      }
> >> -
> >> -    n->curr_queue_pairs = queue_pairs;
> >>      /* stop the backend before changing the number of queue_pairs to avoid handling a
> >>       * disabled queue */
> >>      virtio_net_set_status(vdev, vdev->status);
> >
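
For context, here is roughly how the tail of virtio_net_handle_mq() reads
with this patch applied. This is a sketch reconstructed from the hunk
above, not the full upstream function: the command check and queue_pairs
range validation before the hunk are elided, and the trailing
virtio_net_set_queue_pairs() call is recalled from upstream virtio-net.c
rather than shown in this diff.

static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
                                struct iovec *iov, unsigned int iov_cnt)
{
    VirtIODevice *vdev = VIRTIO_DEVICE(n);
    NetClientState *nc = qemu_get_queue(n->nic);
    uint16_t queue_pairs;

    /* ... cmd check and queue_pairs range validation elided; that code
     * is unchanged by this patch ... */

    /* Record the new value in the device model unconditionally, so it
     * is included in the migration stream even when the backend handles
     * the command itself. */
    n->curr_queue_pairs = queue_pairs;
    if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
        /*
         * Avoid updating the backend for a vdpa device: We're only
         * interested in updating the device model queues.
         */
        return VIRTIO_NET_OK;
    }

    /* stop the backend before changing the number of queue_pairs to
     * avoid handling a disabled queue */
    virtio_net_set_status(vdev, vdev->status);
    virtio_net_set_queue_pairs(n); /* assumed follow-up call; see upstream */

    return VIRTIO_NET_OK;
}

The net effect is that curr_queue_pairs now changes on the vdpa path too,
which is what lets qemu carry the value in the migration data; the early
return only skips poking the backend, which is the behavior Si-Wei is
questioning above for the non-migration cases.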