On 2022/7/27 12:51, Jason Wang wrote:
On Tue, Jul 26, 2022 at 1:27 PM Kangjie Xu <kangjie...@linux.alibaba.com> wrote:
On 2022/7/26 12:07, Jason Wang wrote:
On 2022/7/18 19:17, Kangjie Xu wrote:
Implement vhost_set_single_vring_enable, which enables or disables a
single vring.
The parameter wait_for_reply is added to help with cases such as vq
reset.
Meanwhile, vhost_user_set_vring_enable() is refactored.
Signed-off-by: Kangjie Xu <kangjie...@linux.alibaba.com>
Signed-off-by: Xuan Zhuo <xuanz...@linux.alibaba.com>
---
hw/virtio/vhost-user.c | 55 ++++++++++++++++++++++++++++++++++++------
1 file changed, 48 insertions(+), 7 deletions(-)
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 75b8df21a4..5a80a415f0 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -267,6 +267,8 @@ struct scrub_regions {
     int fd_idx;
 };
 
+static int enforce_reply(struct vhost_dev *dev, const VhostUserMsg *msg);
+
 static bool ioeventfd_enabled(void)
 {
     return !kvm_enabled() || kvm_eventfds_enabled();
@@ -1198,6 +1200,49 @@ static int vhost_user_set_vring_base(struct vhost_dev *dev,
     return vhost_set_vring(dev, VHOST_USER_SET_VRING_BASE, ring);
 }
+
+static int vhost_user_set_single_vring_enable(struct vhost_dev *dev,
+                                              int index,
+                                              int enable,
+                                              bool wait_for_reply)
+{
+    int ret;
+
+    if (index < dev->vq_index || index >= dev->vq_index + dev->nvqs) {
+        return -EINVAL;
+    }
+
+    struct vhost_vring_state state = {
+        .index = index,
+        .num = enable,
+    };
+
+    VhostUserMsg msg = {
+        .hdr.request = VHOST_USER_SET_VRING_ENABLE,
+        .hdr.flags = VHOST_USER_VERSION,
+        .payload.state = state,
+        .hdr.size = sizeof(msg.payload.state),
+    };
+
+    bool reply_supported = virtio_has_feature(dev->protocol_features,
+                                              VHOST_USER_PROTOCOL_F_REPLY_ACK);
+
+    if (reply_supported && wait_for_reply) {
+        msg.hdr.flags |= VHOST_USER_NEED_REPLY_MASK;
+    }
Do we need to fail if !reply_supported && wait_for_reply?
Thanks
I guess you mean "should we fail if the VHOST_USER_PROTOCOL_F_REPLY_ACK
feature is not supported?".
The implementation here is similar to that in vhost_user_set_vring_addr().
If the feature is not supported, enforce_reply() will fall back to
vhost_user_get_features() to get a reply.
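For reference, enforce_reply() in hw/virtio/vhost-user.c behaves roughly
as in the sketch below (simplified from memory, so check the tree for the
exact version): if the message already carries VHOST_USER_NEED_REPLY_MASK
it waits for that ack, otherwise it sends VHOST_USER_GET_FEATURES, which
every backend must answer.

static int enforce_reply(struct vhost_dev *dev, const VhostUserMsg *msg)
{
    uint64_t dummy;

    if (msg->hdr.flags & VHOST_USER_NEED_REPLY_MASK) {
        return process_message_reply(dev, msg);
    }

    /*
     * The backend does not support replies for the command we just
     * sent, so send VHOST_USER_GET_FEATURES, which all backends must
     * answer; its reply implies the earlier message has been read.
     */
    return vhost_user_get_features(dev, &dummy);
}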
Ok, so you mean we can then fall back to VHOST_USER_GET_FEATURES? I
wonder how robust this is, e.g. is the backend required to process
vhost-user requests in order?
Thanks
Yes, we rely on the VHOST_USER_GET_FEATURES message to ensure that
VHOST_USER_SET_VRING_ENABLE has been processed.
It's not robust. I reviewed the vhost-user protocol in the QEMU docs;
it does not actually specify that the backend must process messages in
order.
I think adding a new vhost-user protocol message could fix this issue.
The newly invented message would reset the queue, and a blocking read
would ensure the message has been processed.
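As a rough illustration of that idea (VHOST_USER_RESET_VRING is an
invented request name used here purely for illustration, not part of
the protocol), such a handler could demand a reply for the reset
message itself:

/* Hypothetical sketch only: VHOST_USER_RESET_VRING is invented for
 * illustration; it is not a real vhost-user request. */
static int vhost_user_reset_vring(struct vhost_dev *dev, int index)
{
    int ret;
    VhostUserMsg msg = {
        .hdr.request = VHOST_USER_RESET_VRING,
        .hdr.flags = VHOST_USER_VERSION | VHOST_USER_NEED_REPLY_MASK,
        .payload.state = { .index = index },
        .hdr.size = sizeof(msg.payload.state),
    };

    ret = vhost_user_write(dev, &msg, NULL, 0);
    if (ret < 0) {
        return ret;
    }

    /* Blocking read: returns only once the backend has acked this
     * exact message, independent of any ordering guarantees. */
    return process_message_reply(dev, &msg);
}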
Thanks
Since the messages will be processed sequentially in DPDK, success of
enforce_reply() means the previous message VHOST_USER_SET_VRING_ENABLE
has been processed.
Thanks
+
+    ret = vhost_user_write(dev, &msg, NULL, 0);
+    if (ret < 0) {
+        return ret;
+    }
+
+    if (wait_for_reply) {
+        return enforce_reply(dev, &msg);
+    }
+
+    return ret;
+}
+
 static int vhost_user_set_vring_enable(struct vhost_dev *dev, int enable)
 {
     int i;
@@ -1207,13 +1252,8 @@ static int vhost_user_set_vring_enable(struct vhost_dev *dev, int enable)
     }
     for (i = 0; i < dev->nvqs; ++i) {
-        int ret;
-        struct vhost_vring_state state = {
-            .index = dev->vq_index + i,
-            .num = enable,
-        };
-
-        ret = vhost_set_vring(dev, VHOST_USER_SET_VRING_ENABLE, &state);
+        int ret = vhost_user_set_single_vring_enable(dev, dev->vq_index + i,
+                                                     enable, false);
         if (ret < 0) {
             /*
              * Restoring the previous state is likely infeasible, as well as
@@ -2627,6 +2667,7 @@ const VhostOps user_ops = {
         .vhost_set_owner = vhost_user_set_owner,
         .vhost_reset_device = vhost_user_reset_device,
         .vhost_get_vq_index = vhost_user_get_vq_index,
+        .vhost_set_single_vring_enable = vhost_user_set_single_vring_enable,
         .vhost_set_vring_enable = vhost_user_set_vring_enable,
         .vhost_requires_shm_log = vhost_user_requires_shm_log,
         .vhost_migration_done = vhost_user_migration_done,
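For context, a hypothetical caller of the new op during a per-queue
stop/reset might look like the sketch below (example_vq_stop and its
error handling are illustrative only, not part of this patch):

/* Hypothetical usage sketch: disable a single vring and wait until
 * the backend has actually processed the request. */
static void example_vq_stop(struct vhost_dev *dev, int vq_index)
{
    if (dev->vhost_ops->vhost_set_single_vring_enable) {
        int r = dev->vhost_ops->vhost_set_single_vring_enable(dev, vq_index,
                                                              0, true);
        if (r < 0) {
            error_report("failed to disable vring %d: %d", vq_index, -r);
        }
    }
}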