On 2024/07/31 18:34, Yui Washizu wrote:
On 2024/07/15 14:15, Akihiko Odaki wrote:
On 2024/05/16 11:00, Yui Washizu wrote:
On 2024/04/28 18:05, Akihiko Odaki wrote:
Based-on: <20240315-reuse-v9-0-67aa69af4...@daynix.com>
("[PATCH for 9.1 v9 00/11] hw/pci: SR-IOV related fixes and
improvements")
Introduction
------------
This series is based on the RFC series submitted by Yui Washizu[1].
See also [2] for the context.
This series enables SR-IOV emulation for virtio-net. It is useful
to test SR-IOV support on the guest, or to expose several vDPA devices
in a VM. vDPA devices can also provide an L2 switching feature for
offloading, though letting the guest configure such a feature is out
of scope.
The PF side code resides in virtio-pci. The VF side code resides in
the PCI common infrastructure, but it is restricted to work only for
virtio-net-pci because of lack of validation.
User Interface
--------------
A user can configure a SR-IOV capable virtio-net device by adding
virtio-net-pci functions to a bus. Below is a command line example:
-netdev user,id=n -netdev user,id=o
-netdev user,id=p -netdev user,id=q
-device pcie-root-port,id=b
-device virtio-net-pci,bus=b,addr=0x0.0x3,netdev=q,sriov-pf=f
-device virtio-net-pci,bus=b,addr=0x0.0x2,netdev=p,sriov-pf=f
-device virtio-net-pci,bus=b,addr=0x0.0x1,netdev=o,sriov-pf=f
-device virtio-net-pci,bus=b,addr=0x0.0x0,netdev=n,id=f
The VFs specify the paired PF with the "sriov-pf" property. The PF must
be added after all VFs. It is the user's responsibility to ensure that
VFs have function numbers larger than that of the PF, and that the
function numbers have a consistent stride.
I tried to start a VM with more than 8 VFs allocated using your patch,
but QEMU failed to start with the following error:
VF function number overflows.
I think the cause of this error is that virtio-net-pci PFs don't have
ARI (pcie_ari_init is not called for virtio-net-pci when PFs are
initialized). It could be added later, but how about adding
pcie_ari_init now? As a trial, adding pcie_ari_init to
virtio_pci_realize enabled the creation of more than 8 VFs.
I have just looked into that possibility, but adding pcie_ari_init to
virtio_pci_realize has some implications. Unconditionally calling
pcie_ari_init will break the existing configuration of virtio-pci
devices, so we need to implement some logic to detect when ARI is
needed. Preferably such logic should be implemented in the common PCI
infrastructure instead of implementing it in virtio-pci so that other
PCI multifunction devices can benefit from it.
While I don't think implementing this will be too complicated, I need
to ensure that such a feature is really needed before doing so.
OK.
I want to use this emulation for offloading virtual networking in an
environment where there are many containers in VMs, so I consider the
feature necessary. I think that 7 VFs are too few.
I'll keep thinking about the feature's necessity.
I understand there could be many containers in VMs, but will a single
device deal with them? If the virtio-net VFs are backed by the vDPA
capability of one physical device, the VM will not have more VFs than
that device provides. The VMs must have several PFs, each paired with
its own VFs, to accommodate more containers on one VM.
I don't know much about vDPA-capable devices, but as a reference, igb
only has 8 VFs.
I'll add other comments to RFC v5 patch.
The RFC tag is already dropped.
Regards,
Akihiko Odaki