Hi,
On 15/11/2023 11:26, Sergiy Kibrik wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
>
> This is needed to support virtio-pci.
>
> When the PCI host bridge is emulated outside of Xen (i.e. by an
> IOREQ server), we need some mechanism to intercept config space
> accesses on Xen on Arm and forward them to the emulator (for
> example, a virtio backend) via an IOREQ request.
>
> Unlike x86, on Arm these accesses are MMIO: there is no CF8/CFC
> port mechanism to work out which PCI device is targeted.
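(For reference: on Arm, "which PCI device is targeted" falls directly
out of the ECAM offset. A minimal sketch of the decode, with
illustrative names rather than the actual Xen code:)

    /*
     * Illustrative only: decode an ECAM config-space offset into
     * SBDF + register. ECAM layout:
     *   offset = (bus << 20) | (dev << 15) | (fn << 12) | reg
     */
    #include <stdint.h>

    typedef struct {
        uint8_t  bus;
        uint8_t  dev;   /* 5 bits */
        uint8_t  fn;    /* 3 bits */
        uint16_t reg;   /* 12-bit offset into the function's config space */
    } ecam_addr_t;

    static ecam_addr_t ecam_decode(uint64_t gpa, uint64_t ecam_base)
    {
        uint64_t off = gpa - ecam_base;

        return (ecam_addr_t){
            .bus = (off >> 20) & 0xff,
            .dev = (off >> 15) & 0x1f,
            .fn  = (off >> 12) & 0x07,
            .reg = off & 0xfff,
        };
    }

    /*
     * A trap handler could then wrap the faulting access into an
     * IOREQ (IOREQ_TYPE_COPY, with addr/dir/size taken from the
     * fault) and send it to the IOREQ server owning the range, the
     * same way other emulated MMIO is already forwarded today.
     */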
> In order not to mix PCI passthrough with the virtio-pci feature,
> we add one more region to cover the total configuration space for
> all possible host bridges which can serve virtio-pci devices for
> that guest. We expose one PCI host bridge per virtio backend
> domain.

I am a little confused. If you expose one PCI host bridge per virtio
backend, then why can't the backend simply register the MMIO region and
do the translation itself when it receives the read/write?
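A rough sketch of what that could look like with the existing
libxendevicemodel calls (error handling omitted; ecam_base/ecam_size
are placeholders for wherever the region ends up in the guest
layout):

    /* Sketch: a userspace backend claiming the ECAM region itself. */
    #include <xendevicemodel.h>

    static void claim_ecam(domid_t domid, uint64_t ecam_base,
                           uint64_t ecam_size)
    {
        xendevicemodel_handle *dmod = xendevicemodel_open(NULL, 0);
        ioservid_t id;

        xendevicemodel_create_ioreq_server(dmod, domid,
                                           HVM_IOREQSRV_BUFIOREQ_OFF,
                                           &id);
        xendevicemodel_map_io_range_to_ioreq_server(dmod, domid, id,
                                                    1 /* MMIO */,
                                                    ecam_base,
                                                    ecam_base +
                                                    ecam_size - 1);
        xendevicemodel_set_ioreq_server_state(dmod, domid, id,
                                              1 /* enabled */);
    }

    /*
     * Reads/writes trapped in that range then show up on the IOREQ
     * ring, and the backend can do the ECAM decode itself.
     */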
To me, it only makes sense for Xen to emulate the host bridge
accesses if you plan to have one host bridge shared between multiple
IOREQ domains, or to mix them with PCI passthrough.
From my perspective, I don't expect we would have that many virtio
PCI devices. So imposing a host bridge per device emulator will also
mean extra resources in the guest (it needs to keep track of all the
host bridges).
So in the longer run, I think we want to allow mixing PCI passthrough
and virtio-pci (or really any emulated PCI device, because nothing
here is virtio-specific).
For now, your approach would be OK to enable virtio-pci on Xen. But I
don't think any changes are necessary in Xen other than reserving
some MMIO regions/IRQs.
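Concretely, that reservation could be as small as a few extra entries
in the guest layout, along these lines (all addresses and the SPI
number below are made up for illustration, not taken from the actual
headers):

    /* Hypothetical guest-layout reservation for virtio-pci. */
    #define GUEST_VIRTIO_PCI_ECAM_BASE  0x33000000UL /* config space */
    #define GUEST_VIRTIO_PCI_ECAM_SIZE  0x00200000UL
    #define GUEST_VIRTIO_PCI_MEM_BASE   0x34000000UL /* BAR window   */
    #define GUEST_VIRTIO_PCI_MEM_SIZE   0x08000000UL
    #define GUEST_VIRTIO_PCI_SPI        75           /* legacy INTx  */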
Cheers,
--
Julien Grall