On Wed, Nov 06, 2024 at 03:59:23PM -0500, Steven Sistare wrote:
> On 11/6/2024 3:41 PM, Peter Xu wrote:
> > On Wed, Nov 06, 2024 at 03:12:20PM -0500, Steven Sistare wrote:
> > > On 11/4/2024 4:36 PM, David Hildenbrand wrote:
> > > > On 04.11.24 21:56, Steven Sistare wrote:
> > > > > On 11/4/2024 3:15 PM, David Hildenbrand wrote:
> > > > > > On 04.11.24 20:51, David Hildenbrand wrote:
> > > > > > > On 04.11.24 18:38, Steven Sistare wrote:
> > > > > > > > On 11/4/2024 5:39 AM, David Hildenbrand wrote:
> > > > > > > > > On 01.11.24 14:47, Steve Sistare wrote:
> > > > > > > > > > Allocate anonymous memory using mmap MAP_ANON or memfd_create
> > > > > > > > > > depending on the value of the anon-alloc machine property.  This
> > > > > > > > > > option applies to memory allocated as a side effect of creating
> > > > > > > > > > various devices.  It does not apply to memory-backend-objects,
> > > > > > > > > > whether explicitly specified on the command line, or implicitly
> > > > > > > > > > created by the -m command line option.
> > > > > > > > > > 
> > > > > > > > > > The memfd option is intended to support new migration modes, in
> > > > > > > > > > which the memory region can be transferred in place to a new QEMU
> > > > > > > > > > process, by sending the memfd file descriptor to the process.
> > > > > > > > > > Memory contents are preserved, and if the mode also transfers
> > > > > > > > > > device descriptors, then pages that are locked in memory for DMA
> > > > > > > > > > remain locked.  This behavior is a prerequisite for supporting
> > > > > > > > > > vfio, vdpa, and iommufd devices with the new modes.
> > > > > > > > > 
> > > > > > > > > A more portable, non-Linux specific variant of this will be 
> > > > > > > > > using shm,
> > > > > > > > > similar to backends/hostmem-shm.c.
> > > > > > > > > 
> > > > > > > > > Likely we should be using that instead of memfd, or try 
> > > > > > > > > hiding the
> > > > > > > > > details. See below.
> > > > > > > > 
> > > > > > > > For this series I would prefer to use memfd and hide the 
> > > > > > > > details.  It's a
> > > > > > > > concise (and well tested) solution albeit linux only.  The code 
> > > > > > > > you supply
> > > > > > > > for posix shm would be a good follow on patch to support other 
> > > > > > > > unices.
> > > > > > > 
> > > > > > > Unless there is reason to use memfd we should start with the more
> > > > > > > generic POSIX variant that is available even on systems without 
> > > > > > > memfd.
> > > > > > > Factoring stuff out as I drafted does look quite compelling.
> > > > > > > 
> > > > > > > I can help with the rework, and send it out separately, so you 
> > > > > > > can focus
> > > > > > > on the "machine toggle" as part of this series.
> > > > > > > 
> > > > > > > Of course, if we find out we need the memfd internally instead 
> > > > > > > under
> > > > > > > Linux for whatever reason later, we can use that instead.
> > > > > > > 
> > > > > > > But IIUC, the main selling point for memfd are additional features
> > > > > > > (hugetlb, memory sealing) that you aren't even using.
> > > > > > 
> > > > > > FWIW, I'm looking into some details, and one difference is that 
> > > > > > shm_open() under Linux (glibc) seems to go to /dev/shm and 
> > > > > > memfd/SYSV go to the internal tmpfs mount. There is not a big 
> > > > > > difference, but there can be some difference (e.g., sizing of the 
> > > > > > /dev/shm mount).
> > > > > 
> > > > > Sizing is a non-trivial difference.  One can by default allocate all 
> > > > > memory using memfd_create.
> > > > > To do so using shm_open requires configuration on the mount.  One 
> > > > > step harder to use.
> > > > 
> > > > Yes.
> > > > 
> > > > > 
> > > > > This is a real issue for memory-backend-ram, and becomes an issue for 
> > > > > the internal RAM
> > > > > if memory-backend-ram has hogged all the memory.
> > > > > 
> > > > > > Regarding memory-backend-ram,share=on, I assume we can use memfd if 
> > > > > > available, but then fallback to shm_open().
> > > > > 
> > > > > Yes, and if that is a good idea, then the same should be done for 
> > > > > internal RAM
> > > > > -- memfd if available and fallback to shm_open.
> > > > 
> > > > Yes.
> > > > 
> > > > > 
> > > > > > I'm hoping we can find a way where it just all is rather intuitive, 
> > > > > > like
> > > > > > 
> > > > > > "default-ram-share=on": behave for internal RAM just like 
> > > > > > "memory-backend-ram,share=on"
> > > > > > 
> > > > > > "memory-backend-ram,share=on": use whatever mechanism we have to 
> > > > > > give us "anonymous" memory that can be shared using an fd with 
> > > > > > another process.
> > > > > > 
> > > > > > Thoughts?
> > > > > 
> > > > > Agreed, though I thought I had already landed at the intuitive 
> > > > > specification in my patch.
> > > > > The user must explicitly configure memory-backend-* to be usable with 
> > > > > CPR, and anon-alloc
> > > > > controls everything else.  Now we're just riffing on the details: 
> > > > > memfd vs shm_open, spelling
> > > > > of options and words to describe them.
> > > > 
> > > > Well, yes, and making it all a bit more consistent and the "machine 
> > > > option" behave just like "memory-backend-ram,share=on".
> > > 
> > > Hi David and Peter,
> > > 
> > > I have implemented and tested the following, for both qemu_memfd_create
> > > and qemu_shm_alloc.  This is pseudo-code, with error conditions omitted
> > > for simplicity.
> > 
> > I'm ok with either shm or memfd, as this feature only applies to Linux
> > anyway.  I'll leave that part to you and David to decide.
> > 
> > > 
> > > Any comments before I submit a complete patch?
> > > 
> > > ----
> > > qemu-options.hx:
> > >      ``aux-ram-share=on|off``
> > >          Allocate auxiliary guest RAM as an anonymous file that is
> > >          shareable with an external process.  This option applies to
> > >          memory allocated as a side effect of creating various devices.
> > >          It does not apply to memory-backend-objects, whether explicitly
> > >          specified on the command line, or implicitly created by the -m
> > >          command line option.
> > > 
> > >          Some migration modes require aux-ram-share=on.
> > > 
> > > qapi/migration.json:
> > >      @cpr-transfer:
> > >           ...
> > >           Memory-backend objects must have the share=on attribute, but
> > >           memory-backend-epc is not supported.  The VM must be started
> > >           with the '-machine aux-ram-share=on' option.
> > > 
> > > Define RAM_PRIVATE
> > > 
> > > Define qemu_shm_alloc(), from David's tmp patch
> > > 
> > > ram_backend_memory_alloc()
> > >      ram_flags = backend->share ? RAM_SHARED : RAM_PRIVATE;
> > >      memory_region_init_ram_flags_nomigrate(ram_flags)
> > 
> > Looks all good until here.
> > 
> > > 
> > > qemu_ram_alloc_internal()
> > >      ...
> > >      if (!host && !(ram_flags & RAM_PRIVATE) && current_machine->aux_ram_share)
> > 
> > Nitpick: could rely on flags-only, rather than testing "!host", AFAICT
> > that's equal to RAM_PREALLOC.
> 
> IMO testing host is clearer and more future proof, regardless of how flags
> are currently used.  If the caller passes host, then we should not allocate
> memory here, full stop.
> 
> > Meanwhile I slightly prefer we don't touch
> > anything if SHARED|PRIVATE is set.
> 
> OK, if SHARED is already set I will not set it again.
> 
> > All combined, it could be:
> > 
> >      if (!(ram_flags & (RAM_PREALLOC | RAM_PRIVATE | RAM_SHARED))) {
> >          // ramblock to be allocated, with no share/private request, aka,
> >          // aux memory chunk...
> >      }
> > 
> > >          new_block->flags |= RAM_SHARED;
> > > 
> > >      if (!host && (new_block->flags & RAM_SHARED)) {
> > >          qemu_ram_alloc_shared(new_block);
> > 
> > I'm not sure whether this needs its own helper.
> 
> Reserve judgement until you see the full patch.  The helper is a
> non-trivial subroutine and IMO it improves readability.  Also the
> cpr find/save hooks are confined to the subroutine.

I thought we could use the same code path to process "aux ramblocks" and
all kinds of other RAM_SHARED ramblocks.  IIUC the cpr find/save hooks
should apply there too, but maybe I missed something.

> 
> > Should we fallback to
> > ram_block_add() below, just like a RAM_SHARED?
> 
> I thought we all discussed and agreed that the allocation should be performed
> above ram_block_add.  David's suggested patch does it here also.

I have not closely followed all the discussions, so I could indeed have
missed something.

One thing I want to double check is that cpr will still make things like
the below work, right?

  -object memory-backend-ram,share=on [1]

IIUC with the old code this won't create an fd, so to make cpr work (and
this is also what I was trying to say in the previous email) we could
silently start to create memfds for these.  That means we first need to
teach qemu_anon_ram_alloc() to create a memfd for RAM_SHARED and cache
those fds (which should hopefully keep the same behavior as before).

Then for aux ramblocks like ROMs, as long as RAM_SHARED is set properly in
qemu_ram_alloc_internal() (but only when aux-ram-share=on, for sure), the
rest of the code path in ram_block_add() should be the same as when the
user specified share=on in [1].

Anyway, if both of you agreed on it, I am happy to wait and read the whole
patch.

Side note: I'll still need a few days for other things, but I'll get back
to reading this whole series before next week.  Btw, this series does not
depend on the precreate phase now, am I right?

> 
> - Steve
> 
> > IIUC, we could start to create RAM_SHARED in qemu_anon_ram_alloc() and
> > always cache the fd (even if we don't do that before)?
> > 
> > >      } else {
> > >          new_block->fd = -1;
> > >          new_block->host = host;
> > >      }
> > >      ram_block_add(new_block);
> > > 
> > > qemu_ram_alloc_shared()
> > >      if qemu_memfd_check()
> > >          new_block->fd = qemu_memfd_create()
> > >      else
> > >          new_block->fd = qemu_shm_alloc()
> > >      new_block->host = file_ram_alloc(new_block->fd)
> > > ----
> > > 
> > > - Steve
> > > 
> > 
> 

-- 
Peter Xu

