On Tue, Jan 25, 2022 at 01:49:23PM +0000, Jag Raman wrote:
> 
> 
> > On Jan 25, 2022, at 4:56 AM, Stefan Hajnoczi <stefa...@redhat.com> wrote:
> > 
> > On Wed, Jan 19, 2022 at 04:41:52PM -0500, Jagannathan Raman wrote:
> >> Allow PCI buses to be part of isolated CPU address spaces. This has a
> >> niche usage.
> >> 
> >> TYPE_REMOTE_MACHINE allows multiple VMs to house their PCI devices in
> >> the same machine/server. This would cause address space collisions as
> >> well as be a security vulnerability. Having separate address spaces for
> >> each PCI bus would solve this problem.
> >> 
> >> Signed-off-by: Elena Ufimtseva <elena.ufimts...@oracle.com>
> >> Signed-off-by: John G Johnson <john.g.john...@oracle.com>
> >> Signed-off-by: Jagannathan Raman <jag.ra...@oracle.com>
> >> ---
> >>  include/hw/pci/pci.h     |  2 ++
> >>  include/hw/pci/pci_bus.h | 17 +++++++++++++++++
> >>  hw/pci/pci.c             | 17 +++++++++++++++++
> >>  hw/pci/pci_bridge.c      |  5 +++++
> >>  4 files changed, 41 insertions(+)
> >> 
> >> diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
> >> index 023abc0f79..9bb4472abc 100644
> >> --- a/include/hw/pci/pci.h
> >> +++ b/include/hw/pci/pci.h
> >> @@ -387,6 +387,8 @@ void pci_device_save(PCIDevice *s, QEMUFile *f);
> >>  int pci_device_load(PCIDevice *s, QEMUFile *f);
> >>  MemoryRegion *pci_address_space(PCIDevice *dev);
> >>  MemoryRegion *pci_address_space_io(PCIDevice *dev);
> >> +AddressSpace *pci_isol_as_mem(PCIDevice *dev);
> >> +AddressSpace *pci_isol_as_io(PCIDevice *dev);
> >> 
> >>  /*
> >>   * Should not normally be used by devices. For use by sPAPR target
> >> diff --git a/include/hw/pci/pci_bus.h b/include/hw/pci/pci_bus.h
> >> index 347440d42c..d78258e79e 100644
> >> --- a/include/hw/pci/pci_bus.h
> >> +++ b/include/hw/pci/pci_bus.h
> >> @@ -39,9 +39,26 @@ struct PCIBus {
> >>      void *irq_opaque;
> >>      PCIDevice *devices[PCI_SLOT_MAX * PCI_FUNC_MAX];
> >>      PCIDevice *parent_dev;
> >> +
> >>      MemoryRegion *address_space_mem;
> >>      MemoryRegion *address_space_io;
> > 
> > This seems like a good point to rename address_space_mem,
> > address_space_io, as well as PCIIORegion->address_space since they are
> > all MemoryRegions and not AddressSpaces. Names could be
> > mem_space_mr/io_space_mr and PCIIORegion->container_mr. This avoids
> > confusion with the actual AddressSpaces that are introduced in this
> > patch.
> 
> Are you referring to renaming address_space_mem, address_space_io and
> PCIIORegion->address_space alone? I'm asking because there are many
> other symbols in the code which are named similarly.

I only see those symbols in hw/pci/pci.c. Which ones were you thinking
about?

> > 
> >> 
> >> +    /**
> >> +     * Isolated address spaces - these allow the PCI bus to be part
> >> +     * of an isolated address space as opposed to the global
> >> +     * address_space_memory & address_space_io. This allows the
> >> +     * bus to be attached to CPUs from different machines. The
> >> +     * following is not used commonly.
> >> +     *
> >> +     * TYPE_REMOTE_MACHINE allows emulating devices from multiple
> >> +     * VM clients, as such it needs the PCI buses in the same machine
> >> +     * to be part of different CPU address spaces. The following is
> >> +     * useful in that scenario.
> >> +     *
> >> +     */
> >> +    AddressSpace *isol_as_mem;
> >> +    AddressSpace *isol_as_io;
> > 
> > Or use the pointers unconditionally and initialize them to the global
> > address_space_memory/address_space_io? That might simplify the code so
> > isolated address spaces are no longer a special case.
> 
> I did start off with using these pointers unconditionally - but adopted
> an optional isolated address space for the following reasons:
> - There is a potential for regression
> - CPU address space per bus is not a common scenario. In most cases,
>   all PCI buses are attached to CPUs sharing the same address space.
>   Therefore, an optional address space made sense for this special
>   scenario
> 
> We can also set it unconditionally if you prefer, kindly confirm.

It's a matter of taste. I don't have a strong opinion on it but
personally I would try to make it unconditional. I think the risk of
regressions is low and the code complexity will be lower than making it
a special case. If you want to keep it as is, that's fine.

> > 
> > isol_as_io isn't used by this patch?
> 
> This patch introduces these variables, defines their getters and sets
> them to NULL in places where new PCI buses are presently created.
> The following patch creates a separate isolated address space:
> [PATCH v5 04/18] pci: create and free isolated PCI buses
> 
> I could merge these patches if you prefer.

The only place I saw that reads isol_as_io is "[PATCH v5 15/18]
vfio-user: handle PCI BAR accesses", but that's for PCI I/O Space
accesses. Did I miss where I/O Space BARs are mapped into isol_as_io?

Stefan