Hi,

I have created an experimental setup for Linux where all the virtio data structures and traffic can be allocated by the guest from a RAM blob outside of the guest's default RAM space. That RAM blob can be hotplugged into the guest or defined via the guest's device tree.

This is done because some hypervisors, including TDX, SEV, pKVM and others, would probably benefit from a simple security policy that removes all set_memory_{encrypted,decrypted} calls. Those calls open up the guest DMA memory in fragments, which is not only likely to leak information due to the widespread use of the DMA API, but also slows things down for no obvious reason. From the hypervisor's point of view, the fragmented shadow page table space is also an unnecessary slowdown and a source of memory waste.
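For the device-tree route, a fragment along these lines could describe the blob; the node name, base address, size and compatible string below are purely illustrative placeholders, not taken from the actual setup:

    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        /* Hypothetical external virtio memory blob at 0x8_8000_0000,
         * 256 MiB; address/size and compatible are placeholders. */
        ext_mem: ext-mem@880000000 {
            compatible = "example,ext-virtio-mem";
            reg = <0x8 0x80000000 0x0 0x10000000>;
            no-map;
        };
    };

The no-map property keeps the kernel from putting the range into its normal linear mapping, so the guest-side driver can manage it explicitly.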
I have seen forks of SWIOTLB that do similar things, but fundamentally they are still SWIOTLB behind the curtains, and as such unusable for low-latency / high-bandwidth applications due to bouncing (copying) data back and forth through those external buffers. The setup I have created lets virtio act as it was designed to: a zero-copy data transport path.

A trial integration into QEMU could probably look something like this (in virt.c):

    ..
    emem = g_new(MemoryRegion, 1);
    emem_map = mmap(NULL, EMEM_SIZE, PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_SYNC, fd, 0);
    memory_region_init_ram_ptr(emem, OBJECT(machine), "ext-mem",
                               EMEM_SIZE, emem_map);
    ..
    memory_region_add_subregion_overlap(sysmem, emem_physaddr, emem, 1000);
    ..

So the question I have is: did I understand the QEMU RAM model correctly, and would something like that lead to known issues somewhere?

-- Janne