On 08/31/2016 08:30 PM, Marc-André Lureau wrote:
> Hi
>
> On Sun, Jun 19, 2016 at 10:19 AM Wei Wang <wei.w.w...@intel.com
> <mailto:wei.w.w...@intel.com>> wrote:
>
>> This RFC proposes a design of vhost-pci, which is a new virtio
>> device type. The vhost-pci device is used for inter-VM communication.
>
> Before I send a more complete review of the spec, I have a few overall
> questions:
Hi Marc-André, thanks for joining the reviewing process :)
> - this patch is for the virtio spec? Why not patch the spec directly
> (https://tools.oasis-open.org/version-control/browse/wsvn/virtio/trunk/)?
> I expect several rfc iterations, so perhaps it's easier as a plain text
> file for now (as a qemu patch to doc/specs). btw, I would limit the
> audience to qemu-devel for now.
Yes. Part of the patch is for the virtio spec. I will separate the
patches (please see the next response). I have kept the qemu-devel and
virtio mailing lists copied here.
> - I think the virtio spec should limit itself to the hw device
> description and the virtq messages, not the backend implementation (the
> ipc details, client/server etc).
Agree. I will separate the device spec description from the protocol
description. The device description will be made a virtio spec patch,
and the protocol description will be made a qemu patch to doc/specs.
> - If it could be made not pci-specific, a better name for the device
> could be simply "driver": the driver of a virtio device. Or the
> "slave" in vhost-user terminology - consumer of virtq. I think you
> prefer to call it "backend" in general, but I find it more confusing.
Not really. A virtio device has its own driver (e.g. a virtio-net driver
for a virtio-net device). A vhost-pci device plays the role of a backend
(just like vhost_net or vhost-user) for a virtio device. If we use the
"device/driver" naming convention, the vhost-pci device is part of the
"device". But I actually prefer "frontend/backend" :) QEMU's
doc/specs/vhost-user.txt also uses "backend" to describe this role.
> - regarding the socket protocol, why not reuse vhost-user? it seems to
> me it supports most of what you need and more (like interrupt,
> migrations, protocol features, start/stop queues). Some of the
> extensions, like uuid, could be beneficial to vhost-user too.
Right. We recently changed the plan: we are now trying to make the
vhost-pci protocol an extension of the vhost-user protocol.
> - Why is it required or beneficial to support multiple "frontend"
> devices over the same "vhost-pci" device? It could simplify things if
> it was a single device. If necessary, that could also be interesting
> as a vhost-user extension.
We call these "multiple backend functionalities" (e.g. vhost-pci-net,
vhost-pci-scsi, etc.). A vhost-pci driver contains multiple such backend
functionalities so that they can share the same memory mapping. To be
more precise, a vhost-pci device exposes the memory of a frontend VM,
and all the backend functionalities need to access that same frontend VM
memory, so we consolidate them into one vhost-pci driver on top of one
vhost-pci device.
> - no interrupt support, I suppose you mainly looked at poll-based net
> devices
Yes, but I think it's also possible to add interrupt support. For
example, we can use ioeventfd (or a hypercall) to inject interrupts into
the frontend VM after transmitting packets.
> - when do you expect to share a wip/rfc implementation?
Probably in October (next month). I think it also depends on the
discussions here :)
Best,
Wei