On Tuesday, 2021-06-22 at 15:16:29 -06, Alex Williamson wrote:

>> Additionally, as an alternative to the hardcoded ranges we use
>> today, VFIO could advertise the platform's valid IOVA ranges without
>> necessarily requiring a PCI device to be added to the vfio
>> container. That would mean fetching the valid IOVA ranges from VFIO,
>> rather than the hardcoded IOVA ranges we use today. But sadly, it
>> wouldn't work for older hypervisors.
>
> $ grep -h . /sys/kernel/iommu_groups/*/reserved_regions | sort -u
> 0x00000000fee00000 0x00000000feefffff msi
> 0x000000fd00000000 0x000000ffffffffff reserved
>
> Ideally we might take that into account on all hosts, but of course
> then we run into massive compatibility issues when we consider
> migration. We run into similar problems when people try to assign
> devices to non-x86 TCG hosts, where the arch doesn't have a natural
> memory hole overlapping the msi range.
>
> The issue here is similar to trying to find a set of supported CPU
> flags across hosts: QEMU only has visibility into the host where it
> runs, so an upper-level tool needs to be able to pass through
> information about compatibility to all possible migration targets.
> Towards that end, we should probably have command line options that
> allow specifying usable or reserved GPA address ranges. For example,
> something like:
>
> --reserved-mem-ranges=host
>
> Or explicitly:
>
> --reserved-mem-ranges=13G@1010G,1M@4078M
Would this not naturally be a property of a machine model?

dme.
--
Seems I'm not alone at being alone.