On Wed, Nov 27, 2024 at 11:50 AM David Hildenbrand <da...@redhat.com> wrote:
>
>
> >> RAM memory region/ RAMBlock that has properly set flags/fd/whatssoever
> >> and map whatever you want in there.
> >>
> >> Likely you would need a distinct RAMBlock/RAM memory region per mmap(),
> >> and would end up mmapping implicitly via qemu_ram_mmap().
> >>
> >> Then, your shared region would simply be an empty container into which
> >> you map these RAM memory regions.
> >
>
> Hi,
>
> > Hi, sorry it took me so long to get back to this. Lately I have been
> > testing the patch and fixing bugs, and I was going to add some tests
> > to be able to verify the patch without having to use a backend (which
> > is what I am doing right now).
> >
> > But I wanted to address/discuss this comment. I am not sure what the
> > actual problem with the current approach is (I do not completely
> > follow the concern in your first paragraph), but I see other
> > instances where QEMU mmaps things into a MemoryRegion.
>
> I suggest you take a look at the three relevant MAP_FIXED users outside
> of user emulation code.
>
> (1) hw/vfio/helpers.c: We create a custom memory region + RAMBlock with
>      memory_region_init_ram_device_ptr(). This doesn't mmap(MAP_FIXED)
>      into any existing RAMBlock.
>
> (2) system/physmem.c: I suggest you take a close look at
>      qemu_ram_remap(); it is one example of how RAMBlock properties
>      describe exactly what is mmapped.
>
> (3) util/mmap-alloc.c: Well, this is the code that performs the mmap(),
>      to bring a RAMBlock to life. See qemu_ram_mmap().
>
> There is some oddity in hw/xen/xen-mapcache.c; the XEN mapcache seems
> to manage guest RAM without RAMBlocks.
>
> > Take into account that the
> > implementation follows the definition of shared memory region here:
> > https://docs.oasis-open.org/virtio/virtio/v1.3/csd01/virtio-v1.3-csd01.html#x1-10200010
> > Which hints at a memory region per ID, not one per requested map. So
> > the current strategy seems to fit it better.
>
> I'm confused; we are talking about an implementation detail here. How
> is that related to the spec?

What I am trying to say is that, conceptually, it sounds weird to me
to implement this (as per your suggestion) as one RAMBlock per
mmap(), when the concept we are trying to implement is multiple shared
memory regions with different IDs, each accepting multiple mmap()s as
long as there is enough free space. I fail to see how having different
IDs for different shared memory regions translates to an
implementation where each mmap() gets its own RAMBlock, or how the
offset the device requests for the mmap keeps its meaning when the
base address differs between RAMBlocks. That does not mean I think
your suggestion is wrong; I definitely need to review this more in
depth, as my understanding of this code in QEMU is still relatively
limited.
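To make my concern concrete, the MAP request I have in mind looks
roughly like this (hypothetical names, just restating the message as
code):

struct VhostUserMMap {
    uint8_t  shmid;      /* which VIRTIO shared memory region */
    uint64_t fd_offset;  /* offset into the fd being mapped */
    uint64_t shm_offset; /* offset within the region, chosen by the device */
    uint64_t len;
    uint64_t flags;
};

The shm_offset only has meaning relative to a fixed base for the whole
region, which is why one RAMBlock per mmap() does not click for me yet.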

>
> >
> > Also, I was aware that I was not the first one attempting this, so I
> > based this code on previous attempts (maybe I should give credit in
> > the commit, now that I think of it):
> > https://gitlab.com/virtio-fs/qemu/-/blob/qemu5.0-virtiofs-dax/hw/virtio/vhost-user-fs.c?ref_type=heads#L75
> > As you can see, it pretty much follows the same strategy.
>
> So, people did some hacky things in a QEMU fork 6 years ago ... :) That
> cannot possibly be a good argument for having it like that in QEMU.

Fair. I just took those patches as a good reference for what I was
trying to do, as the people involved had (and, for the most part,
still have :) ) a better idea of QEMU internals than I do. But it is
true that that does not make them a source of truth.

>
> > And in my
> > examples I have been able to use this to stream video with multiple
> > queues mapped into the shared memory (used to capture video frames),
> > using the backend I mentioned above for testing. So the concept works.
> > I may be wrong about this, but from what I understood looking at the
> > code, crosvm uses a similar strategy: reserve a memory block, use it
> > for all your mappings, and use an allocator to find a free slot.
> >
>
> Again, I suggest you take a look at what a RAMBlock is, and how its
> properties describe the underlying mmap().
>
> > And if I were to do what you say, should those distinct RAMBlocks be
> > created when the device starts? What would be their size? Should I
> > create them when QEMU receives a request to mmap? How would the
> > driver find the RAMBlock?
>
> You'd have an empty memory region container into which you will map
> memory regions that describe the memory you want to share.
>
> mr = g_new0(MemoryRegion, 1);
> memory_region_init(mr, OBJECT(TODO), "vhost-user-shm", region_size);
>
>
> Assuming you are requested to mmap an fd, you'd create a new
> MemoryRegion+RAMBlock that describes the memory and performs the mmap()
> for you:
>
> map_mr = g_new0(MemoryRegion, 1);
> memory_region_init_ram_from_fd(map_mr, OBJECT(TODO), "TODO", map_size,
>                                RAM_SHARED, map_fd, map_offs, errp);
>
> To then map it into your container:
>
> memory_region_add_subregion(mr, offset_within_container, map_mr);
>
>
> To unmap, you'd first remove the subregion, then unref the map_mr.
>
>
>
> The only alternative would be to do it like (1) above: you perform all
> of the mmap() yourself, and create the region using
> memory_region_init_ram_device_ptr(). This will set RAM_PREALLOC on the
> RAMBlock and tell QEMU "this is special, just disregard it". The bad
> thing about RAM_PREALLOC is that it will not be compatible with vfio,
> not communicated to other vhost-user devices, etc., whereas what I
> describe above would just work with them.

OK. Given that I have a device with which to test, I think it is
definitely worth trying to implement this approach and see how it
works. I'll respond to this thread with progress/results before
sending the next version.
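
Just to make sure I understood your suggestion, this is roughly what I
plan to try; a rough sketch with hypothetical names, untested:

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "exec/memory.h"

/* One container per VIRTIO shared memory region ID. */
typedef struct VhostUserShmem {
    MemoryRegion container;
} VhostUserShmem;

static void shmem_init(VhostUserShmem *shm, Object *owner,
                       uint64_t region_size)
{
    /* Empty container; nothing is mapped until a MAP request arrives. */
    memory_region_init(&shm->container, owner, "vhost-user-shm",
                       region_size);
}

/* MAP request: the mmap() happens implicitly when the fd-backed
 * MemoryRegion/RAMBlock is created. */
static int shmem_map(VhostUserShmem *shm, Object *owner, int map_fd,
                     uint64_t fd_offset, uint64_t shm_offset,
                     uint64_t map_size, Error **errp)
{
    MemoryRegion *map_mr = g_new0(MemoryRegion, 1);
    Error *local_err = NULL;

    memory_region_init_ram_from_fd(map_mr, owner, "vhost-user-shm-map",
                                   map_size, RAM_SHARED, map_fd,
                                   fd_offset, &local_err);
    if (local_err) {
        error_propagate(errp, local_err);
        g_free(map_mr);
        return -1;
    }
    /* The device-requested offset becomes the offset within the
     * container, so it keeps its meaning even though every mmap()
     * gets its own RAMBlock. */
    memory_region_add_subregion(&shm->container, shm_offset, map_mr);
    return 0;
}

/* UNMAP request: remove the subregion first, then drop the reference.
 * The MemoryRegion struct itself has to outlive concurrent users, so
 * freeing it would need to be deferred (e.g. via RCU) in a real
 * implementation. */
static void shmem_unmap(VhostUserShmem *shm, MemoryRegion *map_mr)
{
    memory_region_del_subregion(&shm->container, map_mr);
    object_unparent(OBJECT(map_mr));
}

I guess rejecting overlapping maps (or a full region) would then be a
check against the existing subregions before
memory_region_add_subregion().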

And thanks for the explanation!

BR,
Albert.

>
> --
> Cheers,
>
> David / dhildenb
>

