On 03/22/2010 08:30 AM, Paul Brook wrote:
>> A VirtIOBlock device cannot be a VirtIODevice while also being a
>> VirtIOPCIProxy (proxy is a poor name, btw).  It really ought to be:
>>
>>   DeviceState -> VirtIODevice -> VirtIOBlock
>>
>> and:
>>
>>   PCIDevice -> VirtIOPCI : implements VirtIOBus
>>
>> The interface between the VirtIODevice and the VirtIOBus is the virtio
>> transport.
>>
>> The main reason a separate bus is needed is the same reason it is needed
>> in Linux: VirtIOBlock has to be tied to some bus, and it cannot be tied
>> directly to the PCI bus without the PCI transport becoming part of its
>> implementation.  Introducing another bus type fixes this (and it's what
>> we do in the kernel).
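
To make that layering concrete, here is a rough sketch in C.  The types and
fields are simplified placeholders for illustration, not the actual QEMU
declarations; the point is only where each piece of state lives and that the
bus ops are the transport:

#include <stdint.h>

/* Placeholder base types standing in for the real qdev/PCI structures. */
typedef struct DeviceState { const char *id; } DeviceState;
typedef struct PCIDevice   { DeviceState qdev; uint8_t config[256]; } PCIDevice;

typedef struct VirtIODevice VirtIODevice;

/* The virtio transport: what a bus implementation must provide. */
typedef struct VirtIOBusOps {
    void (*notify)(void *transport, uint16_t queue);  /* interrupt the guest */
    uint32_t (*get_features)(void *transport);
} VirtIOBusOps;

typedef struct VirtIOBus {
    const VirtIOBusOps *ops;  /* implemented by the transport (PCI, MMIO, ...) */
    void *transport;          /* opaque pointer back to the proxy device */
    VirtIODevice *vdev;       /* the one virtio device plugged into this bus */
} VirtIOBus;

/* DeviceState -> VirtIODevice -> VirtIOBlock */
struct VirtIODevice {
    DeviceState qdev;         /* generic device state */
    VirtIOBus *bus;           /* the virtio bus this device sits on */
    uint32_t guest_features;
};

typedef struct VirtIOBlock {
    VirtIODevice vdev;        /* a block device is just a virtio device */
    void *backing_drive;      /* backend state, no PCI knowledge anywhere */
    uint32_t queue_size;      /* example backend-facing knob */
} VirtIOBlock;

/* PCIDevice -> VirtIOPCI : implements VirtIOBus */
typedef struct VirtIOPCI {
    PCIDevice pci_dev;        /* all PCI-specific state lives only here */
    VirtIOBus bus;            /* the bus it exposes to a single VirtIODevice */
    uint32_t nvectors;        /* MSI-X vector count, a transport property */
} VirtIOPCI;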
> Why does virtio need a device state and bus at all?
Because you need VirtIOBlock to have qdev properties that can be set.
You also need VirtIOPCI to have separate qdev properties that can be set.
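
For instance (a sketch only; the property names and defaults are made up, and
it assumes the usual qdev Property/DEFINE_PROP_* machinery from hw/qdev.h plus
the structs sketched above): the backend knobs hang off the virtio device, the
transport knobs hang off the proxy, and neither list knows about the other:

/* Backend-facing knobs belong to the VirtIODevice subclass... */
static Property virtio_blk_properties[] = {
    DEFINE_PROP_UINT32("queue-size", VirtIOBlock, queue_size, 128),
    DEFINE_PROP_END_OF_LIST(),
};

/* ...while transport knobs belong to the PCI proxy. */
static Property virtio_pci_properties[] = {
    DEFINE_PROP_UINT32("vectors", VirtIOPCI, nvectors, 3),
    DEFINE_PROP_END_OF_LIST(),
};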
> Can't it just be an internal implementation interface, which happens to
> be used by all devices that happen to expose a block device over a
> virtio transport?
Theoretically, yes, but given how the rest of the infrastructure interacts
with qdev, making it a device makes the most sense IMHO.
> If you have a virtio bus then IMO the PCI bridge device should be
> independent of the virtio device that is connected to it.
Yes, that's exactly the point I'm making.  IOW, there shouldn't be a
"virtio-net-pci" device.  Instead, there should be a "virtio-pci" device
that implements a VirtIOBus, and then we plug a single VirtIODevice, like
"virtio-net", into it.
For something like MSI vector support, virtio-net really should have no
knowledge of MSI-X.  Instead, you should specify nvectors on virtio-pci,
and virtio-pci should then decide how to tie individual queue
notifications to the number of MSI vectors it has.
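
One plausible policy, sketched against the structs above (illustrative only,
not the actual QEMU algorithm): the device only reports how many queues it
has, and the proxy maps them onto whatever vectors it was given, falling back
to sharing when vectors are scarce:

/* The virtio device just says how many queues it has; the proxy decides
 * how those map onto proxy->nvectors. */
static void virtio_pci_assign_vectors(VirtIOPCI *proxy, unsigned nqueues,
                                      uint16_t *queue_vector)
{
    unsigned i;

    if (proxy->nvectors >= nqueues + 1) {
        /* Enough vectors: vector 0 for config changes, one per queue. */
        for (i = 0; i < nqueues; i++) {
            queue_vector[i] = i + 1;
        }
    } else if (proxy->nvectors >= 2) {
        /* Scarce vectors: config keeps vector 0, all queues share vector 1. */
        for (i = 0; i < nqueues; i++) {
            queue_vector[i] = 1;
        }
    } else {
        /* No real MSI-X to speak of: everything shares vector 0 (or INTx). */
        for (i = 0; i < nqueues; i++) {
            queue_vector[i] = 0;
        }
    }
}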
I can't envision any reason why we would ever want to have two MSI
vectors for a given queue.
Regards,
Anthony Liguori