On 5/14/2021 7:28 AM, Stefan Hajnoczi wrote:
> On Thu, May 13, 2021 at 04:21:02PM -0400, Steven Sistare wrote:
>> On 5/12/2021 12:19 PM, Stefan Hajnoczi wrote:
>>> On Fri, May 07, 2021 at 05:25:05AM -0700, Steve Sistare wrote:
>>>> To use the restart mode, qemu must be started with the memfd-alloc machine
>>>> option.  The memfd's are saved to the environment and kept open across exec,
>>>> after which they are found from the environment and re-mmap'd.  Hence guest
>>>> ram is preserved in place, albeit with new virtual addresses in the qemu
>>>> process.  The caller resumes the guest by calling cprload, which loads
>>>> state from the file.  If the VM was running at cprsave time, then VM
>>>> execution resumes.  cprsave supports any type of guest image and block
>>>> device, but the caller must not modify guest block devices between cprsave
>>>> and cprload.
>>>
>>> Does QEMU's existing -object memory-backend-file on tmpfs or hugetlbfs
>>> achieve the same thing?
>>
>> Not quite.  Various secondary anonymous memory objects are allocated via
>> ram_block_add and must be preserved, such as these on x86_64:
>>   vga.vram
>>   pc.ram
>>   pc.bios
>>   pc.rom
>>   vga.rom
>>   rom@etc/acpi/tables
>>   rom@etc/table-loader
>>   rom@etc/acpi/rsdp
>>
>> Even the read-only areas must be preserved rather than recreated from
>> files in the updated qemu, as their contents may have changed.
>
> Migration knows how to save/load these RAM blocks.  Only pc.ram is
> significant in size so I'm not sure it's worth special-casing the
> others?
Some of these are mapped for vfio dma as a consequence of the normal
memory-region callbacks to consumer code.  If those regions are recreated
and remapped in the new process, we get conflict errors against the
existing vfio mappings.  The memfd option is a simple and robust solution
to that issue.

- Steve