As described in https://lore.kernel.org/r/20201116104216.439650-1-david.edmond...@oracle.com, I'd like to reduce the amount of memory consumed by QEMU when mapping UEFI images on aarch64.
To recap:

> Currently ARM UEFI images are typically built as 2MB/768kB flash
> images for code and variables respectively. These images are both
> then padded out to 64MB before being loaded by QEMU.
>
> Because the images are 64MB each, QEMU allocates 128MB of memory to
> read them, and then proceeds to read all 128MB from disk (dirtying
> the memory). Of this 128MB less than 3MB is useful - the rest is
> zero padding.
>
> On a machine with 100 VMs this wastes over 12GB of memory.

There were objections to my previous patch because it changed the size
of the regions reported to the guest via the memory map (the reported
size depended on the size of the image).

This is a smaller patch, which changes the memory region covering the
entire flash extent to be IO rather than RAM, and loads the flash image
into a smaller sub-region of the more traditional mixed IO/ROMD type.
All read/write operations to areas outside of the underlying block
device are handled directly (reads return 0, writes fail or are
discarded); a standalone sketch of this access policy appears below,
after the cover letter.

This reduces the memory consumption for the AAVMF code image from 64MiB
to around 2MiB, and that for the AAVMF vars image from 64MiB to 768KiB
(presuming that the UEFI images are adjusted accordingly).

For read-only devices (such as the AAVMF code) this seems completely
safe.

For writable devices there is a change in behaviour - previously it was
possible to write anywhere in the extent of the flash device, read back
the data written and have that data persist through a restart of QEMU.
This is no longer the case - writes outside of the extent of the
underlying backing block device will be discarded. That is, a read
after a write will *not* return the written data, either immediately or
after a QEMU restart - it will return zeros.

Looking at the AAVMF implementation, it seems to me that if the initial
VARS image is prepared as being 768KiB in size (which it is), none of
AAVMF itself will attempt to write outside of that extent, so I believe
that this is an acceptable compromise.

It would be relatively straightforward to allow writes outside of the
backing device to persist for the lifetime of a particular QEMU
instance by allocating memory on demand (i.e. when there is a write to
the relevant region). This would allow a read to return the relevant
data, but only until a QEMU restart, at which point the data would be
lost.

There was a suggestion in a previous thread that perhaps the pflash
driver could be re-worked to use the block IO interfaces to access the
underlying device "on demand" rather than reading in the entire image
at startup (at least, that's how I understood the comment). I looked at
implementing this and struggled to get it to work for all of the
required use cases. Specifically, there are several code paths that
expect to retrieve a pointer to the flat memory image of the pflash
device and manipulate it directly (examples include the Malta board and
encrypted memory support on x86), or that write the entire image to
storage (after migration). My implementation was based around mapping
the flash region only for IO, which meant that every read or write had
to be handled directly by the pflash driver (there was no ROMD style
operation). This also made booting an aarch64 VM noticeably slower -
getting through the firmware went from under 1 second to around 10
seconds.

v2:
 - Unify the approach for both read-only and writable devices, saving
   another 63MiB per QEMU instance.
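For illustration only, here is a minimal, self-contained sketch of the
access policy described above. It is not the actual pflash_cfi01 code,
and all names (SketchFlash, sketch_flash_read, sketch_flash_write) are
hypothetical: reads beyond the backing device's extent return 0, writes
beyond it are discarded, and accesses within the extent behave as
before.

/*
 * Standalone sketch (not the QEMU patch itself) of the behaviour
 * described above: the device advertises a large extent but only the
 * first backing_size bytes are backed by the block device.  Reads
 * beyond the backing return 0 and writes beyond it are discarded.
 * All names here are hypothetical.
 */
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    uint8_t *storage;      /* bytes read from the backing block device */
    uint64_t backing_size; /* e.g. 768KiB for the AAVMF vars image */
    uint64_t total_size;   /* e.g. 64MiB, the extent seen by the guest */
} SketchFlash;

static uint64_t sketch_flash_read(SketchFlash *f, uint64_t addr)
{
    if (addr >= f->backing_size) {
        return 0;           /* outside the backing device: zeros */
    }
    return f->storage[addr];
}

static void sketch_flash_write(SketchFlash *f, uint64_t addr, uint8_t val)
{
    if (addr >= f->backing_size) {
        return;             /* outside the backing device: discarded */
    }
    f->storage[addr] = val;
}

int main(void)
{
    SketchFlash f = {
        .backing_size = 768 * 1024,
        .total_size = 64 * 1024 * 1024,
    };

    f.storage = calloc(f.backing_size, 1);
    if (!f.storage) {
        return 1;
    }

    sketch_flash_write(&f, 0x100, 0xab);            /* persists */
    sketch_flash_write(&f, 4 * 1024 * 1024, 0xcd);  /* discarded */

    printf("in range:     0x%02" PRIx64 "\n", sketch_flash_read(&f, 0x100));
    printf("out of range: 0x%02" PRIx64 "\n",
           sketch_flash_read(&f, 4 * 1024 * 1024));

    free(f.storage);
    return 0;
}

In the actual series the in-extent sub-region remains the usual mixed
IO/ROMD region, so in-extent accesses keep their existing path; the
sketch only models the out-of-extent read/write policy.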
David Edmondson (3):
  hw/pflash_cfi*: Replace DPRINTF with trace events
  hw/pflash_cfi01: Correct the type of PFlashCFI01.ro
  hw/pflash_cfi01: Allow devices to have a smaller backing device

 hw/block/pflash_cfi01.c | 190 +++++++++++++++++++++++++--------------
 hw/block/pflash_cfi02.c |  75 ++++++----------
 hw/block/trace-events   |  42 +++++++--
 3 files changed, 179 insertions(+), 128 deletions(-)

-- 
2.30.0