On 22.10.24 23:35, William Roche wrote:
From: William Roche <william.ro...@oracle.com>
When the VM reboots, a memory reset is performed calling
qemu_ram_remap() on all hwpoisoned pages.
While we take the recorded page sizes into account to repair the
memory locations, a large page also needs a hole punched in the
backend file to regenerate usable memory, clearing the HW
poisoned section. This is mandatory for the hugetlbfs case, for example.
Signed-off-by: William Roche <william.ro...@oracle.com>
---
system/physmem.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/system/physmem.c b/system/physmem.c
index 3757428336..3f6024a92d 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -2211,6 +2211,14 @@ void qemu_ram_remap(ram_addr_t addr, ram_addr_t length)
prot = PROT_READ;
prot |= block->flags & RAM_READONLY ? 0 : PROT_WRITE;
if (block->fd >= 0) {
+ if (length > TARGET_PAGE_SIZE && fallocate(block->fd,
+ FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE,
+ offset + block->fd_offset, length) != 0) {
+ error_report("Could not recreate the file hole for "
+ "addr: " RAM_ADDR_FMT "@" RAM_ADDR_FMT "",
+ length, addr);
+ exit(1);
+ }
area = mmap(vaddr, length, prot, flags, block->fd,
offset + block->fd_offset);
} else {
Ah! Just what I commented on patch #3; we should be using
ram_discard_range(). It might be better to avoid the mmap() completely
if ram_discard_range() worked.
And as raised, there are the problems of memory preallocation (where we
should fail if it doesn't work) and of ram discards being disabled because
something relies on long-term page pinning ...
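Just to illustrate that second point, a rough sketch of the kind of guard I
mean (ram_block_discard_is_disabled() is the existing helper; the
re-preallocation step is only hinted at in the comment):

    if (ram_block_discard_is_disabled()) {
        /*
         * Something (e.g. vfio) relies on long-term page pinning, so the
         * poisoned page cannot simply be dropped and replaced behind its
         * back.
         */
        error_report("Cannot remap poisoned RAM: discards are disabled");
        exit(1);
    }
    /*
     * And for a preallocated, file-backed block, the discarded range would
     * have to be preallocated again afterwards, failing hard if that does
     * not succeed.
     */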
--
Cheers,
David / dhildenb