> On Mar 7, 2022, at 4:45 AM, Stefan Hajnoczi <stefa...@redhat.com> wrote:
> 
> On Thu, Mar 03, 2022 at 02:49:53PM +0000, Jag Raman wrote:
>> 
>> 
>>> On Mar 2, 2022, at 11:49 AM, Stefan Hajnoczi <stefa...@redhat.com> wrote:
>>> 
>>> On Mon, Feb 28, 2022 at 07:54:38PM +0000, Jag Raman wrote:
>>>> 
>>>> 
>>>>> On Feb 22, 2022, at 5:40 AM, Stefan Hajnoczi <stefa...@redhat.com> wrote:
>>>>> 
>>>>> On Thu, Feb 17, 2022 at 02:48:59AM -0500, Jagannathan Raman wrote:
>>>>>> +struct RemoteIommuElem {
>>>>>> +    AddressSpace  as;
>>>>>> +    MemoryRegion  mr;
>>>>>> +};
>>>>>> +
>>>>>> +GHashTable *remote_iommu_elem_by_bdf;
>>>>> 
>>>>> A mutable global hash table requires synchronization when device
>>>>> emulation runs in multiple threads.
>>>>> 
>>>>> I suggest using pci_setup_iommu()'s iommu_opaque argument to avoid the
>>>>> global. If there is only 1 device per remote PCI bus, then there are no
>>>>> further synchronization concerns.
>>>> 
>>>> OK, will avoid the global. We would need to access the hash table
>>>> concurrently since there could be more than one device on the
>>>> same bus - so a mutex would be needed here.
>>> 
>>> I thought the PCIe topology can be set up with a separate bus for each
>>> x-vfio-user-server? I remember something like that in the previous
>>> revision where a root port was instantiated for each x-vfio-user-server.
>> 
>> Yes, we could set up the PCIe topology that way. But the user could
>> add more than one device to the same bus, unless the bus type explicitly
>> limits the number of devices to one (BusClass->max_dev).
> 
> Due to how the IOMMU is used to restrict the bus to the vfio-user
> client's DMA mappings, it seems like it's necessary to limit the number
> of devices to 1 per bus anyway?
Hi Stefan,

"remote_iommu_elem_by_bdf" has a separate entry for each BDF
combination - it provides a separate DMA address space per device. As
such, we don't have to limit the number of devices to 1 per bus.
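To illustrate, here is a rough sketch of what I have in mind - not the
final patch. The RemoteIommu container, its lock, and the function name
are placeholders; it assumes the hash table is created with
g_direct_hash/g_direct_equal and handed to pci_setup_iommu() as the
iommu_opaque argument you suggested:

#include "qemu/osdep.h"
#include "hw/pci/pci.h"

typedef struct RemoteIommuElem {   /* as in the patch, typedef'd here */
    AddressSpace as;
    MemoryRegion mr;
} RemoteIommuElem;

typedef struct RemoteIommu {       /* placeholder per-bus state */
    GHashTable *elem_by_bdf;       /* int BDF -> RemoteIommuElem */
    QemuMutex lock;                /* protects elem_by_bdf */
} RemoteIommu;

static AddressSpace *remote_iommu_find_as(PCIBus *pci_bus, void *opaque,
                                          int devfn)
{
    RemoteIommu *iommu = opaque;   /* per-bus state, so no global table */
    int pci_bdf = PCI_BUILD_BDF(pci_bus_num(pci_bus), devfn);
    RemoteIommuElem *elem;

    qemu_mutex_lock(&iommu->lock);
    elem = g_hash_table_lookup(iommu->elem_by_bdf,
                               GINT_TO_POINTER(pci_bdf));
    if (!elem) {
        elem = g_new0(RemoteIommuElem, 1);
        /* empty root region - mappings are added as the client maps DMA */
        memory_region_init(&elem->mr, NULL, "vfio-user-dma", UINT64_MAX);
        address_space_init(&elem->as, &elem->mr, "vfio-user-dma-as");
        g_hash_table_insert(iommu->elem_by_bdf, GINT_TO_POINTER(pci_bdf),
                            elem);
    }
    qemu_mutex_unlock(&iommu->lock);

    return &elem->as;
}

Each x-vfio-user-server's bus would then register it with
pci_setup_iommu(pci_bus, remote_iommu_find_as, iommu), so two devices
on the same bus still get distinct DMA address spaces keyed by BDF.

Thank you!
--
Jag

> 
> Stefan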