On 17.12.20 18:55, Alex Williamson wrote:
> On Wed, 16 Dec 2020 15:11:54 +0100
> David Hildenbrand <da...@redhat.com> wrote:
> 
>> Let's query the maximum number of DMA mappings by querying the available
>> mappings when creating the container.
>>
>> In addition, count the number of DMA mappings and warn when we would
>> exceed it. This is a preparation for RamDiscardMgr which might
>> create quite some DMA mappings over time, and we at least want to warn
>> early that the QEMU setup might be problematic. Use "reserved"
>> terminology, so we can use this to reserve mappings before they are
>> actually created.
> 
> This terminology doesn't make much sense to me, we're not actually
> performing any kind of reservation.
I see you spotted the second user, which actually performs reservations.

> 
>> Note: don't reserve vIOMMU DMA mappings - using the vIOMMU region size
>> divided by the mapping page size might be a bad indication of what will
>> happen in practice - we might end up warning all the time.
> 
> This suggests we're not really tracking DMA "reservations" at all.
> Would something like dma_regions_mappings be a more appropriate
> identifier for the thing you're trying to count? We might as well also

Right now I want to count:
- Mappings we know we will definitely have (counted in this patch)
- Mappings we know we could eventually have later (RamDiscardMgr)

> keep a counter for dma_iommu_mappings where the sum of those two should
> stay below dma_max_mappings.

We could; however, tracking active IOMMU mappings isn't easily possible
when removing a memory region from the address space - we do a single
vfio_dma_unmap() which might span multiple mappings. The same applies to
RamDiscardMgr. It's hard to count how many mappings we actually
*currently* have with that approach.

> 
>> Cc: Paolo Bonzini <pbonz...@redhat.com>
>> Cc: "Michael S. Tsirkin" <m...@redhat.com>
>> Cc: Alex Williamson <alex.william...@redhat.com>
>> Cc: Dr. David Alan Gilbert <dgilb...@redhat.com>
>> Cc: Igor Mammedov <imamm...@redhat.com>
>> Cc: Pankaj Gupta <pankaj.gupta.li...@gmail.com>
>> Cc: Peter Xu <pet...@redhat.com>
>> Cc: Auger Eric <eric.au...@redhat.com>
>> Cc: Wei Yang <richard.weiy...@linux.alibaba.com>
>> Cc: teawater <teawat...@linux.alibaba.com>
>> Cc: Marek Kedzierski <mkedz...@redhat.com>
>> Signed-off-by: David Hildenbrand <da...@redhat.com>
>> ---
>>  hw/vfio/common.c              | 34 ++++++++++++++++++++++++++++++++++
>>  include/hw/vfio/vfio-common.h |  2 ++
>>  2 files changed, 36 insertions(+)
>>
>> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
>> index 6ff1daa763..5ad88d476f 100644
>> --- a/hw/vfio/common.c
>> +++ b/hw/vfio/common.c
>> @@ -288,6 +288,26 @@ const MemoryRegionOps vfio_region_ops = {
>>      },
>>  };
>>  
>> +static void vfio_container_dma_reserve(VFIOContainer *container,
>> +                                       unsigned long dma_mappings)
>> +{
>> +    bool warned = container->dma_reserved > container->dma_max;
>> +
>> +    container->dma_reserved += dma_mappings;
>> +    if (!warned && container->dma_max &&
>> +        container->dma_reserved > container->dma_max) {
>> +        warn_report("%s: possibly running out of DMA mappings. "
>> +                    " Maximum number of DMA mappings: %d", __func__,
>> +                    container->dma_max);
> 
> If we kept track of all the mappings we could predict better than
> "possibly". Tracing support to track a high water mark might be useful
> too.

It's meant as an early warning for reservations, e.g., to catch
fundamental setup issues with virtio-mem.

[...]

>> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
>> index 6141162d7a..fed0e85f66 100644
>> --- a/include/hw/vfio/vfio-common.h
>> +++ b/include/hw/vfio/vfio-common.h
>> @@ -88,6 +88,8 @@ typedef struct VFIOContainer {
>>      uint64_t dirty_pgsizes;
>>      uint64_t max_dirty_bitmap_size;
>>      unsigned long pgsizes;
>> +    unsigned int dma_max;
>> +    unsigned long dma_reserved;
> 
> If dma_max is unsigned int, why do we need an unsigned long to track
> how many are in use? Thanks,

My thinking was that some vfio IOMMU types don't have such a limit
(dma_max == -1) and could allow for more. But thinking again, such a
huge number of mappings is highly unrealistic :)

-- 
Thanks,

David / dhildenb
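For context, the hunk that actually queries the limit is elided above
("[...]"): the maximum number of DMA mappings comes from the
VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL capability that the kernel's type1 IOMMU
reports through VFIO_IOMMU_GET_INFO. Below is a rough, self-contained
sketch of such a query - it is not the actual patch hunk, and the
function name vfio_query_dma_avail as well as the container_fd parameter
are made up for illustration.

/*
 * Sketch: ask a type1 VFIO container how many DMA mappings are still
 * available. Returns 0 and fills *avail on success, -1 if the kernel
 * doesn't report the capability (i.e. no limit is known).
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int vfio_query_dma_avail(int container_fd, uint32_t *avail)
{
    struct vfio_iommu_type1_info *info;
    struct vfio_info_cap_header *hdr;
    uint32_t argsz = sizeof(*info);
    uint32_t offset;

    /* The first call only tells us how much space the capability chain needs. */
    info = calloc(1, argsz);
    if (!info) {
        return -1;
    }
    info->argsz = argsz;
    if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, info)) {
        goto fail;
    }

    if (info->argsz > argsz) {
        struct vfio_iommu_type1_info *tmp;

        argsz = info->argsz;
        tmp = realloc(info, argsz);
        if (!tmp) {
            goto fail;
        }
        info = tmp;
        memset(info, 0, argsz);
        info->argsz = argsz;
        if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, info)) {
            goto fail;
        }
    }

    if (!(info->flags & VFIO_IOMMU_INFO_CAPS)) {
        goto fail; /* kernel doesn't report capabilities at all */
    }

    /* Walk the capability chain; a next offset of 0 terminates it. */
    for (offset = info->cap_offset; offset; offset = hdr->next) {
        hdr = (struct vfio_info_cap_header *)((char *)info + offset);
        if (hdr->id == VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL) {
            *avail = ((struct vfio_iommu_type1_info_dma_avail *)hdr)->avail;
            free(info);
            return 0;
        }
    }

fail:
    free(info);
    return -1;
}

On kernels that don't implement the capability the query simply fails,
i.e. no limit is known, which matches the "no such limit" case discussed
above.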