On Wed, Mar 30, 2022 at 05:08:24PM +0100, Stefan Hajnoczi wrote:
> On Wed, Mar 30, 2022 at 08:53:16AM -0400, Peter Xu wrote:
> > On Wed, Mar 30, 2022 at 11:04:24AM +0100, Stefan Hajnoczi wrote:
> > > This makes me wonder whether there is a deeper issue with the
> > > pci_setup_iommu() API: the lack of per-device cleanup callbacks.
> > > Per-device IOMMU resources should be freed when a device is hot
> > > unplugged.
> > >
> > > From what I can tell this is not the case today:
> > >
> > > - hw/i386/intel_iommu.c:vtd_find_add_as() allocates and adds device
> > >   address spaces but I can't find where they are removed and freed.
> > >   VTDAddressSpace instances pointed to from vtd_bus->dev_as[] are leaked.
> > >
> > > - hw/i386/amd_iommu.c has similar leaks.
> >
> > AFAICT it's because there's no device-specific data cached in the
> > per-device IOMMU address space, at least so far.  IOW, all the data
> > structures allocated here can be re-used when a new device is plugged in
> > after the old device unplugged.
> >
> > It's definitely not ideal since after unplug (and before a new device
> > plugged in) the resource is not needed at all so it's kind of wasted, but
> > it should work functionally.  If to achieve that, some iommu_unplug() or
> > iommu_cleanup() hook sounds reasonable.
>
> I guess the question is whether PCI busses can be hotplugged with
> IOMMUs. If yes, then there is a memory leak that matters for
> intel_iommu.c and amd_iommu.c.
It can't, and we only support one vIOMMU so far for both (commit
1b3bf13890fd849b26).

Thanks,

-- 
Peter Xu