RAM memory region/RAMBlock that has properly set flags/fd/whatsoever
and map whatever you want in there.
Likely you would need a distinct RAMBlock/RAM memory region per mmap(),
and would end up mmapping implicitly via qemu_ram_mmap().
Then, your shared region would simply be an empty container into which
you map these RAM memory regions.
Hi,
Hi, sorry it took me so long to get back to this. Lately I have been
testing the patch and fixing bugs, and I was going to add some tests
to be able to verify the patch without having to use a backend (which
is what I am doing right now).
But I wanted to address/discuss this comment. I am not sure what the
actual problem with the current approach is (I do not fully follow the
concern in your first paragraph), but I see other instances where QEMU
mmaps things into a MemoryRegion.
I suggest you take a look at the three relevant MAP_FIXED users outside
of user emulation code.
(1) hw/vfio/helpers.c: We create a custom memory region + RAMBlock with
memory_region_init_ram_device_ptr(). This doesn't mmap(MAP_FIXED)
into any existing RAMBlock.
(2) system/physmem.c: I suggest you take a close look at
qemu_ram_remap(), which is one example of how RAMBlock
properties describe exactly what is mmapped.
(3) util/mmap-alloc.c: Well, this is the code that performs the mmap(),
to bring a RAMBlock to life. See qemu_ram_mmap().
There is some oddity in hw/xen/xen-mapcache.c; the Xen mapcache seems
to manage guest RAM without RAMBlocks.
Take into account that the implementation follows the definition of a
shared memory region here:
https://docs.oasis-open.org/virtio/virtio/v1.3/csd01/virtio-v1.3-csd01.html#x1-10200010
which hints at one memory region per ID, not one per required map. So
the current strategy seems to fit it better.
I'm confused; aren't we talking about an implementation detail here? How is
that related to the spec?
Also, I was aware that I was not the first one attempting this, so I
based this code on previous attempts (maybe I should give credit in
the commit, now that I think of it):
https://gitlab.com/virtio-fs/qemu/-/blob/qemu5.0-virtiofs-dax/hw/virtio/vhost-user-fs.c?ref_type=heads#L75
As you can see, it pretty much follows the same strategy.
So, people did some hacky things in a QEMU fork 6 years ago ... :) That
cannot possibly be a good argument why we should have it like that in QEMU.
And in my examples I have been able to use this to stream video with
multiple queues mapped into the shared memory (used to capture video
frames), using the backend I mentioned above for testing. So the
concept works.
I may be wrong about this, but from what I understood looking at the
code, crosvm uses a similar strategy: reserve a memory block, use it
for all your mappings, and use an allocator to find a free slot.
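Just to illustrate the idea (this is not crosvm's actual code, only a
first-fit sketch over a block reserved up front; assumes <stdbool.h>,
<stdint.h>, <stddef.h> or qemu/osdep.h):

#define SHM_CHUNK_SIZE 4096u  /* allocation granularity, illustrative */

typedef struct ShmAllocator {
    bool *used;        /* one entry per chunk of the reserved block */
    size_t nchunks;
} ShmAllocator;

/* First-fit search for 'size' bytes; returns the byte offset of the
 * reserved range, or UINT64_MAX if no large enough gap is free. */
static uint64_t shm_alloc(ShmAllocator *a, uint64_t size)
{
    size_t need = (size + SHM_CHUNK_SIZE - 1) / SHM_CHUNK_SIZE;

    if (!need) {
        return UINT64_MAX;
    }
    for (size_t start = 0; start + need <= a->nchunks; start++) {
        size_t run = 0;
        while (run < need && !a->used[start + run]) {
            run++;
        }
        if (run == need) {
            for (size_t i = 0; i < need; i++) {
                a->used[start + i] = true;
            }
            return (uint64_t)start * SHM_CHUNK_SIZE;
        }
        start += run;  /* skip past the used chunk that broke the run */
    }
    return UINT64_MAX;
}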
Again, I suggest you take a look at what a RAMBlock is, and how its
properties describe the containing mmap().
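For instance, roughly what the RAMBlock definition contains these days
(abridged from include/exec/ramblock.h; the exact set of fields varies
between QEMU versions), these are the properties I mean:

struct RAMBlock {
    struct MemoryRegion *mr;
    uint8_t *host;           /* start of the host mapping             */
    ram_addr_t used_length;  /* how much of it is actually in use     */
    ram_addr_t max_length;
    uint32_t flags;          /* RAM_SHARED, RAM_PREALLOC, ...         */
    char idstr[256];
    int fd;                  /* backing fd, if any                    */
    uint64_t fd_offset;      /* offset of the mapping within that fd  */
    size_t page_size;
    /* ... */
};

So the block itself records what was mmapped, from where, and with
which semantics.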
And if I were to do what you say, should those distinct RAMBlocks be
created when the device starts? What would their size be? Should I
create them when QEMU receives a request to mmap? How would the driver
find the RAMBlock?
You'd have an empty memory region container into which you will map
memory regions that describe the memory you want to share.
mr = g_new0(MemoryRegion, 1);
memory_region_init(mr, OBJECT(TODO), "vhost-user-shm", region_size);
Assuming you are requested to mmap an fd, you'd create a new
MemoryRegion+RAMBlock that describes the memory and performs the mmap()
for you:
map_mr = g_new0(MemoryRegion, 1);
memory_region_init_ram_from_fd(map_mr, OBJECT(TODO), "TODO", map_size,
RAM_SHARED, map_fd, map_offs, errp);
To then map it into your container:
memory_region_add_subregion(mr, offset_within_container, map_mr);
To unmap, you'd first remove the subregion, and then unref the map_mr.
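Roughly, reusing the mr/map_mr names from the snippets above (whether
the last step is an object_unparent() or a plain object_unref() depends
on how the owner ends up holding its reference):

memory_region_del_subregion(mr, map_mr);  /* detach from the container */
object_unparent(OBJECT(map_mr));          /* release it, so the RAMBlock
                                             and its mmap() go away */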
The only alternative would be to do it like (1) above: you perform all
of the mmap() yourself, and create it using
memory_region_init_ram_device_ptr(). This will set RAM_PREALLOC on the
RAMBlock and tell QEMU "this is special, just disregard it". The bad
thing about RAM_PREALLOC is that it will not be compatible with vfio,
not communicated to other vhost-user devices, etc., whereas what I
describe above would just work with them.
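For comparison, roughly what that alternative would look like, with the
same TODO placeholders as above and map_fd/map_offs/map_size being
whatever the map request hands you:

void *ptr = mmap(NULL, map_size, PROT_READ | PROT_WRITE,
                 MAP_SHARED, map_fd, map_offs);
if (ptr == MAP_FAILED) {
    error_setg_errno(errp, errno, "mmap failed");
    return;
}

map_mr = g_new0(MemoryRegion, 1);
/* Wraps the existing mapping; the RAMBlock gets RAM_PREALLOC. */
memory_region_init_ram_device_ptr(map_mr, OBJECT(TODO), "TODO",
                                  map_size, ptr);
memory_region_add_subregion(mr, offset_within_container, map_mr);

On unmap you'd then also have to munmap() yourself, because QEMU does
not own that mapping.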
--
Cheers,
David / dhildenb