From: William Roche
A memory page poisoned from the hypervisor level is no longer readable.
The migration of a VM will crash Qemu when it tries to read the
memory address space and stumbles on the poisoned page, with a stack
trace similar to:
Program terminated with signal SIGBUS, Bus error.
#0
From: William Roche
Problem:
A Qemu VM can survive a memory error, as qemu can relay the error to the
VM kernel which could also deal with it -- poisoning/off-lining the impacted
page. This situation creates a hole in the VM memory address space (an
unreadable page or set of pages).
A
From: William Roche
The SIGBUS signal siginfo reporting a HW memory error
provides a si_addr_lsb field with an indication of the
impacted memory page size.
This information should be used to track the hwpoisoned
page sizes.
Signed-off-by: William Roche
---
accel/kvm/kvm-all.c | 6
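As an illustration only (not the patch code), a SIGBUS handler could derive
the poisoned page size and its start address from si_addr_lsb roughly as
below; the handler name and the tracking step are placeholders:

#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stddef.h>

/* Sketch: si_addr_lsb is the least significant bit of the reported
 * address, i.e. log2 of the size of the poisoned page. */
static void sigbus_handler(int sig, siginfo_t *si, void *ctx)
{
    if (si->si_code == BUS_MCEERR_AR || si->si_code == BUS_MCEERR_AO) {
        size_t page_size = (size_t)1 << si->si_addr_lsb;
        void *page_start = (void *)((uintptr_t)si->si_addr & ~(page_size - 1));
        /* record page_start and page_size in the hwpoison tracking list */
        (void)page_start;
    }
}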
From: William Roche
When the VM reboots, a memory reset is performed calling
qemu_ram_remap() on all hwpoisoned pages.
We take into account the recorded page size to adjust the
size and location of the memory hole.
In case a large page is used, we also need to punch a hole
in the backend file to
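For illustration, under the assumption of a hugetlbfs file-backed RAM block,
the hole punch plus remap could look like the sketch below; fd, offset, addr
and size are placeholders for the RAM block values, not the actual
qemu_ram_remap() code:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <stdio.h>

/* Sketch: drop the poisoned hugetlbfs pages and map fresh ones in place. */
static int remap_poisoned_range(int fd, off_t offset, void *addr, size_t size)
{
    /* Punch a hole so clean pages get allocated on the next fault. */
    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  offset, size) < 0) {
        perror("fallocate");
        return -1;
    }
    /* Re-establish the mapping at the same host address. */
    if (mmap(addr, size, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_FIXED, fd, offset) == MAP_FAILED) {
        perror("mmap");
        return -1;
    }
    return 0;
}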
From: William Roche
We need to deal with hugetlbfs memory large pages facing HW errors,
to increase the probability of surviving a memory poisoning.
When an error is detected, the platform kernel marks the entire
hugetlbfs large page as "poisoned" and reports the event to all
potential u
From: William Roche
Hello,
This is a Qemu RFC to introduce the possibility to deal with hardware
memory errors impacting hugetlbfs memory backed VMs. When using
hugetlbfs large pages, any large page location being impacted by an
HW memory error results in poisoning the entire page, suddenly
From: William Roche
madvise MADV_HWPOISON can generate a SIGBUS when called, so the listener
thread (the caller) needs to deal with this signal.
The signal handler recognizes a thread-specific variable allowing it to
exit directly when the signal is generated from this thread.
Signed-off-by: William Roche
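A simplified stand-alone sketch of that mechanism; the patch has the handler
exit directly when the thread-specific variable is set, while this
illustration uses sigsetjmp/siglongjmp and made-up names:

#define _GNU_SOURCE
#include <setjmp.h>
#include <signal.h>
#include <stddef.h>
#include <sys/mman.h>

/* Sketch: the listener thread poisons a page on behalf of the guest and
 * must survive the SIGBUS that madvise(MADV_HWPOISON) may raise. */
static __thread sigjmp_buf hwpoison_env;
static __thread volatile sig_atomic_t in_hwpoison_inject;

static void sigbus_handler(int sig, siginfo_t *si, void *ctx)
{
    if (in_hwpoison_inject) {
        /* Raised by our own madvise(): leave the injection path. */
        siglongjmp(hwpoison_env, 1);
    }
    /* otherwise: regular guest memory-error handling ... */
}

static int inject_hwpoison(void *page, size_t size)
{
    int ret = 0;

    in_hwpoison_inject = 1;
    if (sigsetjmp(hwpoison_env, 1) == 0) {
        ret = madvise(page, size, MADV_HWPOISON);  /* may deliver SIGBUS */
    }
    in_hwpoison_inject = 0;
    return ret;
}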
From: William Roche
In case the SIGBUS handler is triggered by a BUS_MCEERR_AO signal
and this handler needs to exit to let the VM pause during the memory
mapping change, this SIGBUS won't be regenerated when the VM resumes.
In this case we take note of this signal before exiting the handl
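A rough sketch of what "taking note" of the AO event could look like; the
array and function names are invented for illustration, the real patch
records it in Qemu's own structures:

#define _GNU_SOURCE
#include <signal.h>

/* Sketch: remember BUS_MCEERR_AO hits that cannot be handled right away,
 * so they can be re-injected into the guest once the VM has resumed.
 * A fixed array keeps the SIGBUS handler async-signal-safe. */
#define MAX_PENDING_AO 16

static struct {
    void *host_addr;
    short addr_lsb;                  /* page-size shift from si_addr_lsb */
} pending_ao[MAX_PENDING_AO];
static volatile sig_atomic_t pending_ao_count;

static void note_pending_ao(const siginfo_t *si)
{
    if (pending_ao_count < MAX_PENDING_AO) {
        pending_ao[pending_ao_count].host_addr = si->si_addr;
        pending_ao[pending_ao_count].addr_lsb = si->si_addr_lsb;
        pending_ao_count++;
    }
}
/* After the VM resumes, the main loop walks pending_ao[] and replays each
 * entry through the normal memory error injection path. */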
From: William Roche
Add the page size information to the hwpoison_page_list elements.
Signed-off-by: William Roche
---
accel/kvm/kvm-all.c | 11 +++
include/sysemu/kvm.h | 3 ++-
include/sysemu/kvm_int.h | 3 ++-
target/arm/kvm.c | 5 +++--
target/i386/kvm/kvm.c
From: William Roche
madvise MADV_HWPOISON can generate a SIGBUS when called, so the listener
thread (the caller) needs to deal with this signal.
The signal handler recognizes a thread-specific variable allowing it to
exit directly when the signal is generated from this thread.
Signed-off-by: William Roche
From: William Roche
We need to deal with hugetlbfs memory large pages facing HW errors,
to increase the probability of surviving a memory poisoning.
When an error is detected, the platform kernel marks the entire
hugetlbfs large page as "poisoned" and reports the event to all
potential u
From: William Roche
When the VM reboots, a memory reset is performed calling
qemu_ram_remap() on all hwpoisoned pages.
We take into account the recorded page size to adjust the
size and location of the memory hole.
In case a large page is used, we also need to punch a hole
in the backend file to
From: William Roche
Add the page size information to the hwpoison_page_list elements.
Signed-off-by: William Roche
---
accel/kvm/kvm-all.c | 11 +++
include/sysemu/kvm.h | 3 ++-
include/sysemu/kvm_int.h | 3 ++-
target/arm/kvm.c | 5 +++--
target/i386/kvm/kvm.c
From: William Roche
In case the SIGBUS handler is triggered by a BUS_MCEERR_AO signal
and this handler needs to exit to let the VM pause during the memory
mapping change, this SIGBUS won't be regenerated when the VM resumes.
In this case we take note of this signal before exiting the handl
From: William Roche
The SIGBUS signal siginfo reporting a HW memory error
provides a si_addr_lsb field with an indication of the
impacted memory page size.
This information should be used to track the hwpoisoned
page sizes.
Signed-off-by: William Roche
---
accel/kvm/kvm-all.c | 6
From: William Roche
Apologies for the noise; resending as I missed CC'ing the maintainers of the
changed files
Hello,
This is a Qemu RFC to introduce the possibility to deal with hardware
memory errors impacting hugetlbfs memory backed VMs. When using
hugetlbfs large pages, any large
On 9/10/24 13:36, David Hildenbrand wrote:
On 10.09.24 12:02, William Roche wrote:
From: William Roche
Hi,
Apologies for the noise; resending as I missed CC'ing the maintainers
of the
changed files
Hello,
This is a Qemu RFC to introduce the possibility to deal with hardware
m
On 9/12/24 00:07, David Hildenbrand wrote:
Hi again,
This is a Qemu RFC to introduce the possibility to deal with hardware
memory errors impacting hugetlbfs memory backed VMs. When using
hugetlbfs large pages, any large page location being impacted by an
HW memory error results in poisoning th
From: William Roche
A memory page poisoned from the hypervisor level is no longer readable.
Thus, it is now treated as a zero-page for the ram saving migration phase.
The migration of a VM will crash Qemu when it tries to read the
memory address space and stumbles on the poisoned page with a
From: William Roche
A Qemu VM can survive a memory error, as qemu can relay the error to the
VM kernel which could also deal with it -- poisoning/off-lining the impacted
page.
This situation creates a hole in the VM memory address space that the VM kernel
knows about (an unreadable page or set
On 9/15/23 05:13, Zhijian Li (Fujitsu) wrote:
I'm okay with "RDMA isn't touched".
BTW, could you share your reproducing program/hack to poison the page, so
that I am able to take a look at the RDMA part later when I'm free.
Not sure it's suitable to acknowledge an untouched part. Anyway
Acke
Hi John,
I'd like to emphasize that ignoring the SRAO error for a VM is a real
problem, at least for a specific (rare) case I'm currently working on:
VM migration.
Context:
- In the case of a poisoned page in the VM address space, the migration
can't read it and will skip
Thank you Zhijian for your feedback.
So I'll try to push this change today.
Cheers,
William.
On 9/20/23 12:04, Zhijian Li (Fujitsu) wrote:
On 15/09/2023 19:31, William Roche wrote:
On 9/15/23 05:13, Zhijian Li (Fujitsu) wrote:
I'm okay with "RDMA isn't touched"
From: William Roche
A Qemu VM can survive a memory error, as qemu can relay the error to the
VM kernel which could also deal with it -- poisoning/off-lining the impacted
page.
This situation creates a hole in the VM memory address space that the VM kernel
knows about (an unreadable page or set
From: William Roche
A memory page poisoned from the hypervisor level is no longer readable.
Thus, it is now treated as a zero-page for the ram saving migration phase.
The migration of a VM will crash Qemu when it tries to read the
memory address space and stumbles on the poisoned page with a
On 9/21/23 19:41, Yazen Ghannam wrote:
On 9/20/23 7:13 AM, Joao Martins wrote:
On 18/09/2023 23:00, William Roche wrote:
[...]
So it looks like the mechanism works fine... unless the VM has migrated
between the SRAO error and the first time it really touches the poisoned
page to get an SRAR
On 9/22/23 16:30, Yazen Ghannam wrote:
On 9/22/23 4:36 AM, William Roche wrote:
On 9/21/23 19:41, Yazen Ghannam wrote:
[...]
Also, during page migration, does the data flow through the CPU core?
Sorry for the basic question. I haven't done a lot with virtualization.
Yes, in most cases
From: William Roche
A memory page poisoned from the hypervisor level is no longer readable.
Thus, it is now treated as a zero-page for the ram saving migration phase.
The migration of a VM will crash Qemu when it tries to read the
memory address space and stumbles on the poisoned page with a
From: William Roche
A Qemu VM can survive a memory error, as qemu can relay the error to the
VM kernel which could also deal with it -- poisoning/off-lining the impacted
page.
This situation creates a hole in the VM memory address space that the VM kernel
knows about (an unreadable page or set
On 9/6/23 17:16, Peter Xu wrote:
Just a note..
Probably fine for now to reuse block page size, but IIUC the right thing to
do is to fetch it from the signal info (in QEMU's sigbus_handler()) of
kernel_siginfo.si_addr_lsb.
At least for x86 I think that stores the "shift" of covered poisoned pag
On 9/7/23 13:12, Gupta, Pankaj wrote:
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 5fce74aac5..4d42d3ed4c 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -604,6 +604,10 @@ static void kvm_mce_inject(X86CPU *cpu, hwaddr paddr, int code)
mcg_
From: William Roche
A memory page poisoned from the hypervisor level is no longer readable.
Thus, it is now treated as a zero-page for the ram saving migration phase.
The migration of a VM will crash Qemu when it tries to read the
memory address space and stumbles on the poisoned page with a
From: William Roche
A Qemu VM can survive a memory error, as qemu can relay the error to the
VM kernel which could also deal with it -- poisoning/off-lining the impacted
page.
This situation creates a hole in the VM memory address space that the VM kernel
knows about (an unreadable page or set
From: William Roche
Migrating a poisoned page as a zero-page can only be done when the
running guest kernel knows about this poison, so that it marks this
page as inaccessible and any access in the VM would fail.
But if the poison information is not relayed to the VM, the kernel
does not prevent
o the kvm_hwpoison_page_add function in kvm_arch_on_sigbus_vcpu with:
kvm_hwpoison_page_add(ram_addr, (code == BUS_MCEERR_AR));
Of course we'll have to wait for the above patch to be integrated first.
HTH,
William.
On 9/19/23 00:00, William Roche wrote:
> Hi John,
>
>
On 10/16/23 18:48, Peter Xu wrote:
On Fri, Oct 13, 2023 at 03:08:39PM +, William Roche wrote:
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index 5e95c496bb..e8db6380c1 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -1158,7 +1158,6 @@ void kvm_arch_on_sigbus_vcpu
On 10/17/23 17:13, Peter Xu wrote:
On Tue, Oct 17, 2023 at 02:38:48AM +0200, William Roche wrote:
On 10/16/23 18:48, Peter Xu wrote:
On Fri, Oct 13, 2023 at 03:08:39PM +, William Roche wrote:
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index 5e95c496bb..e8db6380c1 100644
--- a
From: William Roche
Note about ARM specificities:
This code has a small part impacting more specifically ARM machines;
that's the reason why I added qemu-...@nongnu.org -- see description.
A Qemu VM can survive a memory error, as qemu can relay the error to the
VM kernel which could also
From: William Roche
Migrating a poisoned page as a zero-page can only be done when the
running guest kernel knows about this poison, so that it marks this
page as inaccessible and any access in the VM would fail.
But if the poison information is not relayed to the VM, the kernel
does not prevent
From: William Roche
A memory page poisoned from the hypervisor level is no longer readable.
Thus, it is now treated as a zero-page for the ram saving migration phase.
The migration of a VM will crash Qemu when it tries to read the
memory address space and stumbles on the poisoned page with a
On 11/8/23 22:45, Peter Xu wrote:
On Mon, Nov 06, 2023 at 10:38:14PM +0100, William Roche wrote:
But it implies a lot of other changes:
- The source has to flag the error pages to indicate a poison
(new flag in the exchange protocol)
- The destination has to be able to deal
d suggest to add a 3rd patch implementing this AMD specific filter:
commit bf8cc74df3fcc7bf958a7c42b876e9c059fe4d06
Author: William Roche
Date: Thu Aug 31 18:54:57 2023 +
i386: Explicitly ignore unsupported BUS_MCEERR_AO MCE on AMD guest
AMD guests can't currently deal with
platforms.
Reported-by: William Roche
Signed-off-by: John Allen
---
target/i386/helper.c | 4
target/i386/kvm/kvm.c | 17 +++--
2 files changed, 15 insertions(+), 6 deletions(-)
diff --git a/target/i386/helper.c b/target/i386/helper.c
index 533b29cb91..a6523858e0 100644
On 10/9/24 17:45, Peter Xu wrote:
On Thu, Sep 19, 2024 at 06:52:37PM +0200, William Roche wrote:
Hello David,
I hope my email from last week answered your questions about:
- retrieving the valid data from the lost hugepage
- the need of smaller pages to replace a failed large page
From: William Roche
Add the page size information to the hwpoison_page_list elements.
As the kernel doesn't always report the actual poisoned page size,
we adjust this size from the backend real page size.
We take into account the recorded page size to adjust the size
and location of the m
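Illustrative only: rounding the reported address down to the backend page so
that a single list entry covers the whole huge page might look like this
(types and names are placeholders, not the exact kvm-all.c code):

#include <glib.h>
#include <stdint.h>

/* Sketch: keep one list entry per poisoned *backend* page, so a single
 * hugetlbfs page hit by several errors is only recorded once.
 * ram_addr_t-style addresses are shown as uint64_t here. */
typedef struct HWPoisonPage {
    uint64_t ram_addr;     /* start of the poisoned backend page      */
    size_t   page_size;    /* backend page size (4k, 2M, 1G, ...)     */
} HWPoisonPage;

static GList *hwpoison_page_list;

static void hwpoison_page_add(uint64_t ram_addr, size_t backend_page_size)
{
    uint64_t start = ram_addr & ~((uint64_t)backend_page_size - 1);

    for (GList *l = hwpoison_page_list; l; l = l->next) {
        HWPoisonPage *p = l->data;
        if (p->ram_addr == start) {
            return;                  /* whole page already recorded */
        }
    }
    HWPoisonPage *page = g_new0(HWPoisonPage, 1);
    page->ram_addr = start;
    page->page_size = backend_page_size;
    hwpoison_page_list = g_list_append(hwpoison_page_list, page);
}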
From: William Roche
The SIGBUS signal siginfo reporting a HW memory error
provides a si_addr_lsb field with an indication of the
impacted memory page size.
This information should be used to track the hwpoisoned
page sizes.
Signed-off-by: William Roche
---
accel/kvm/kvm-all.c | 6
From: William Roche
This set of patches fixes several problems with hardware memory errors
impacting hugetlbfs memory backed VMs. When using hugetlbfs large
pages, any large page location being impacted by an HW memory error
results in poisoning the entire page, suddenly making a large chunk of
From: William Roche
On HW memory error, we need to better report what the impact of this
error is. So when an entire large page is impacted by an error (like the
hugetlbfs case), we give a warning message when this page is first hit:
Memory error: Losing a large page (size: X) at QEMU addr Y
From: William Roche
When the VM reboots, a memory reset is performed calling
qemu_ram_remap() on all hwpoisoned pages.
While we take into account the recorded page sizes to repair the
memory locations, a large page also requires punching a hole in the
backend file to regenerate usable memory
On 10/28/24 18:01, David Hildenbrand wrote:
On 26.10.24 01:27, William Roche wrote:
On 10/23/24 09:30, David Hildenbrand wrote:
On 22.10.24 23:35, William Roche wrote:
From: William Roche
When the VM reboots, a memory reset is performed calling
qemu_ram_remap() on all hwpoisoned pages
On 10/28/24 17:42, David Hildenbrand wrote:
On 26.10.24 01:27, William Roche wrote:
On 10/23/24 09:28, David Hildenbrand wrote:
On 22.10.24 23:35, William Roche wrote:
From: William Roche
Add the page size information to the hwpoison_page_list elements.
As the kernel doesn't always r
On 11/12/24 14:45, David Hildenbrand wrote:
On 07.11.24 11:21, William Roche wrote:
From: David Hildenbrand
Let's register a RAM block notifier and react on remap notifications.
Simply re-apply the settings. Warn only when something goes wrong.
Note: qemu_ram_remap() will not remap
On 11/12/24 11:30, David Hildenbrand wrote:
On 07.11.24 11:21, William Roche wrote:
From: William Roche
When a memory page is added to the hwpoison_page_list, include
the page size information. This size is the backend real page
size. To better deal with hugepages, we create a single entry
On 11/12/24 12:07, David Hildenbrand wrote:
On 07.11.24 11:21, William Roche wrote:
From: William Roche
We take into account the recorded page sizes to repair the
memory locations, calling ram_block_discard_range() to punch a hole
in the backend file when necessary and regenerate a usable
On 11/12/24 12:13, David Hildenbrand wrote:
On 07.11.24 11:21, William Roche wrote:
From: William Roche
When an entire large page is impacted by an error (hugetlbfs case),
better report the size and location of this large memory hole, so
give a warning message when this page is first hit
Hello David,
I hope my email from last week answered your questions about:
- retrieving the valid data from the lost hugepage
- the need of smaller pages to replace a failed large page
- the interaction of memory error and VM migration
- the non-symmetrical access to a poisoned me
On 10/23/24 09:28, David Hildenbrand wrote:
On 22.10.24 23:35, William Roche wrote:
From: William Roche
Add the page size information to the hwpoison_page_list elements.
As the kernel doesn't always report the actual poisoned page size,
we adjust this size from the backend real page
On 10/23/24 09:30, David Hildenbrand wrote:
On 22.10.24 23:35, William Roche wrote:
From: William Roche
When the VM reboots, a memory reset is performed calling
qemu_ram_remap() on all hwpoisoned pages.
While we take into account the recorded page sizes to repair the
memory locations, a
On 10/23/24 09:28, David Hildenbrand wrote:
On 22.10.24 23:35, William Roche wrote:
From: William Roche
Add the page size information to the hwpoison_page_list elements.
As the kernel doesn't always report the actual poisoned page size,
we adjust this size from the backend real page siz
From: William Roche
Hi David,
Here is an updated description of the patch set:
---
This set of patches fixes several problems with hardware memory errors
impacting hugetlbfs memory backed VMs. When using hugetlbfs large
pages, any large page location being impacted by an HW memory error
From: David Hildenbrand
Notify registered listeners about the remap at the end of
qemu_ram_remap() so e.g., a memory backend can re-apply its
settings correctly.
Signed-off-by: David Hildenbrand
Signed-off-by: William Roche
---
hw/core/numa.c | 11 +++
include/exec/ramlist.h
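As an illustration of how a backend might consume such a notification; the
callback name and struct below follow QEMU's RAMBlockNotifier pattern but
are assumptions for this sketch, not the exact interface added by the patch:

#include <stddef.h>

/* Sketch: a remap hook in the RAM block notifier style. */
typedef struct RAMBlockNotifier RAMBlockNotifier;
struct RAMBlockNotifier {
    void (*ram_block_remapped)(RAMBlockNotifier *n, void *host,
                               size_t offset, size_t size);
    /* ... existing added/removed/resized hooks ... */
};

static void backend_ram_remapped(RAMBlockNotifier *n, void *host,
                                 size_t offset, size_t size)
{
    /* Re-apply the backend settings (merge, dump, policy, prealloc)
     * to the freshly mapped range [host + offset, host + offset + size). */
}

static RAMBlockNotifier remap_notifier = {
    .ram_block_remapped = backend_ram_remapped,
};
/* Registration would go through the backend's init path, e.g. a call
 * equivalent to ram_block_notifier_add(&remap_notifier). */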
From: William Roche
When an entire large page is impacted by an error (hugetlbfs case),
better report the size and location of this large memory hole, so
give a warning message when this page is first hit:
Memory error: Losing a large page (size: X) at QEMU addr Y and GUEST addr Z
Signed-off
From: William Roche
Merging and dump settings are handled by the remap notification
in addition to memory policy and preallocation.
If preallocation is set on a memory block, a qemu_prealloc_mem()
call is also needed after a ram_block_discard_range() use for
this block.
Signed-off-by: William
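Illustration of the point above: after a discard, a prealloc=on range has to
be populated again. The real code goes through qemu_prealloc_mem(); the
sketch below only shows the underlying idea, using MADV_POPULATE_WRITE
(Linux >= 5.14) with a crude touch fallback:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <string.h>
#include <stddef.h>

/* Sketch: repopulate a freshly discarded range so the "no runtime page
 * fault" guarantee of preallocation is preserved. */
static int repopulate_range(void *host, size_t size)
{
#ifdef MADV_POPULATE_WRITE
    return madvise(host, size, MADV_POPULATE_WRITE);
#else
    memset(host, 0, size);   /* crude fallback: touch (and zero) the range */
    return 0;
#endif
}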
Signed-off-by: David Hildenbrand
Signed-off-by: William Roche
---
backends/hostmem.c | 29 +
include/sysemu/hostmem.h | 1 +
2 files changed, 30 insertions(+)
diff --git a/backends/hostmem.c b/backends/hostmem.c
index bf85d716e5..fbd8708664 100644
--- a/bac
From: William Roche
We take into account the recorded page sizes to repair the
memory locations, calling ram_block_discard_range() to punch a hole
in the backend file when necessary and regenerate usable memory.
Fall back to unmap/remap the memory location(s) if the kernel doesn't
suppor
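A stand-alone sketch of the anonymous-memory case and its fallback
(placeholder helper, not the ram_block_discard_range() implementation
itself):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <errno.h>
#include <stddef.h>

/* Sketch: discard the poisoned anonymous range so the kernel backs it with
 * fresh zero pages, falling back to an anonymous MAP_FIXED remap when the
 * madvise call is not supported. */
static int repair_anon_range(void *addr, size_t size)
{
    /* MADV_DONTNEED drops the poisoned anonymous pages; the next access
     * faults in clean zero-filled pages. */
    if (madvise(addr, size, MADV_DONTNEED) == 0) {
        return 0;
    }
    if (errno != EINVAL && errno != ENOSYS) {
        return -1;
    }
    /* Fallback: replace the mapping entirely. */
    if (mmap(addr, size, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED) {
        return -1;
    }
    return 0;
}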
From: David Hildenbrand
We want to reuse the functionality when remapping or resizing RAM.
Signed-off-by: David Hildenbrand
Signed-off-by: William Roche
---
backends/hostmem.c | 155 -
1 file changed, 82 insertions(+), 73 deletions(-)
diff --git a
From: William Roche
When a memory page is added to the hwpoison_page_list, include
the page size information. This size is the backend real page
size. To better deal with hugepages, we create a single entry
for the entire page.
Signed-off-by: William Roche
---
accel/kvm/kvm-all.c | 8
On 12.11.24 19:17, William Roche wrote:
On 11/12/24 12:13, David Hildenbrand wrote:
On 07.11.24 11:21, William Roche wrote:
From: William Roche
When an entire large page is impacted by an error (hugetlbfs case),
better report the size and location of this large memory hole, so
give a wa
Hello David,
I've finally tested many page mapping possibilities and tried to
identify the error injection reaction on these pages to see if mmap()
can be used to recover the impacted area.
I'm using the latest upstream kernel I have for that:
6.12.0-rc7.master.20241117.ol9.x86_64
But I also g
On 12/2/24 17:00, David Hildenbrand wrote:
On 02.12.24 16:41, William Roche wrote:
Hello David,
Hi,
sorry for not reviewing yet, I was rather sick the last 1.5 weeks.
I hope you get well soon!
I've finally tested many page mapping possibilities and tried to
identify the error inje
From: David Hildenbrand
Notify registered listeners about the remap at the end of
qemu_ram_remap() so e.g., a memory backend can re-apply its
settings correctly.
Signed-off-by: David Hildenbrand
Signed-off-by: William Roche
---
hw/core/numa.c | 11 +++
include/exec/ramlist.h
From: William Roche
Hi David,
Here is a new version of our code, but I still need to double check
the mmap behavior in case of a memory error impact on:
- a clean page of an empty file or populated file
- already mapped using MAP_SHARED or MAP_PRIVATE
to see if mmap() can recover the area or
From: William Roche
The list of hwpoison pages used to remap the memory on reset
is based on the backend real page size. When dealing with
hugepages, we create a single entry for the entire page.
Co-developed-by: David Hildenbrand
Signed-off-by: William Roche
---
accel/kvm/kvm-all.c
From: David Hildenbrand
We want to reuse the functionality when remapping or resizing RAM.
Signed-off-by: David Hildenbrand
Signed-off-by: William Roche
---
backends/hostmem.c | 155 -
1 file changed, 82 insertions(+), 73 deletions(-)
diff --git a
From: William Roche
Merging and dump settings are handled by the remap notification
in addition to memory policy and preallocation.
If preallocation is set on a memory block, a qemu_prealloc_mem()
call is also needed after a ram_block_discard_range() use for
this block.
Signed-off-by: William
Signed-off-by: David Hildenbrand
Signed-off-by: William Roche
---
backends/hostmem.c | 34 ++
include/sysemu/hostmem.h | 1 +
2 files changed, 35 insertions(+)
diff --git a/backends/hostmem.c b/backends/hostmem.c
index bf85d716e5..863f6da11d 100644
--- a/bac
From: William Roche
Repair memory locations, calling ram_block_discard_range(),
punching a hole in the backend file when necessary and regenerating
usable memory.
Fall back to unmap/remap the memory location(s) if the kernel doesn't
support the madvise calls used by ram_block_discard_
From: William Roche
In case of a large page impacted by a memory error, complete
the existing Qemu error message to indicate that the error is
injected in the VM. Also add a similar message to the ARM
platform.
Only in the case of an impacted large page do we now report:
...Memory Error at QEMU
On 12/3/24 15:08, David Hildenbrand wrote:
[...]
Let me take a look at your tool below if I can find an explanation of
what is happening, because it's weird :)
[...]
At the end of this email, I included the source code of a simplistic
test case that shows that the page is replaced in the c
On 2/4/25 18:01, Peter Xu wrote:
On Sat, Feb 01, 2025 at 09:57:23AM +, William Roche wrote:
From: William Roche
In case of a large page impacted by a memory error, provide
information about the impacted large page before the memory
error injection message.
This message would also
On 2/5/25 18:07, Peter Xu wrote:
On Wed, Feb 05, 2025 at 05:27:13PM +0100, William Roche wrote:
[...]
The HMP command "info ramblock" is implemented with the ram_block_format()
function which returns a message buffer built with a string for each
ramblock (protected by the RCU_READ_
On 2/4/25 18:09, Peter Xu wrote:
On Sat, Feb 01, 2025 at 09:57:22AM +, William Roche wrote:
From: William Roche
Repair poisoned memory location(s), calling ram_block_discard_range():
punching a hole in the backend file when necessary and regenerating
usable memory.
If the kernel
On 2/4/25 21:16, Peter Xu wrote:
On Tue, Feb 04, 2025 at 07:55:52PM +0100, David Hildenbrand wrote:
Ah, and now I remember where these 3 patches originate from: virtio-mem
handling.
For virtio-mem I want to register also a remap handler, for example, to
perform the custom preallocation handling
On 2/10/25 17:48, Peter Xu wrote:
On Fri, Feb 07, 2025 at 07:02:22PM +0100, William Roche wrote:
[...]
So the main reason is a KVM "weakness" with kvm_send_hwpoison_signal(), and
the second reason is to have richer error messages.
This seems true, and I also remember something whe
From: William Roche
Repair poisoned memory location(s), calling ram_block_discard_range():
punching a hole in the backend file when necessary and regenerating
usable memory,
If the kernel doesn't support the madvise calls used by this function
and we are dealing with anonymous memory,
From: William Roche
Generate an error injection message similar to x86's on RAS-enabled ARM
platforms.
ARM qemu only deals with action-required memory errors signaled with
SIGBUS/BUS_MCEERR_AR, and will report a message on every memory error
relayed to the VM. A message like:
Guest Memory Error at
From: William Roche
The list of hwpoison pages used to remap the memory on reset
is based on the backend real page size.
To correctly handle hugetlb, we must mmap(MAP_FIXED) a complete
hugetlb page; hugetlb pages cannot be partially mapped.
Signed-off-by: William Roche
Co-developed-by: David
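For illustration, remapping the complete hugetlb page could be sketched as
below; the anonymous MAP_HUGETLB case is shown, a file-backed block would
pass its hugetlbfs fd and offset instead of -1/0:

#define _GNU_SOURCE
#include <stdint.h>
#include <sys/mman.h>

/* Sketch: hugetlb mappings can only be replaced in whole huge-page units,
 * so align the poisoned address down to the hugepage boundary before the
 * MAP_FIXED remap. */
static void *remap_full_hugepage(void *poisoned_addr, size_t hugepage_size)
{
    uintptr_t start = (uintptr_t)poisoned_addr & ~(hugepage_size - 1);

    return mmap((void *)start, hugepage_size, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_FIXED,
                -1, 0);
}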
From: William Roche
Here is a very simplified version of my fix only dealing with the
recovery of huge pages on VM reset.
---
This set of patches fixes an existing bug with hardware memory errors
impacting hugetlbfs memory backed VMs and its recovery on VM reset.
When using hugetlbfs large
From: William Roche
Let's register a RAM block notifier and react on remap notifications.
Simply re-apply the settings. Exit if something goes wrong.
Merging and dump settings are handled by the remap notification
in addition to memory policy and preallocation.
Co-developed-by:
From: William Roche
The list of hwpoison pages used to remap the memory on reset
is based on the backend real page size.
To correctly handle hugetlb, we must mmap(MAP_FIXED) a complete
hugetlb page; hugetlb pages cannot be partially mapped.
Signed-off-by: William Roche
Co-developed-by: David
From: David Hildenbrand
Notify registered listeners about the remap at the end of
qemu_ram_remap() so e.g., a memory backend can re-apply its
settings correctly.
Signed-off-by: David Hildenbrand
Signed-off-by: William Roche
---
hw/core/numa.c | 11 +++
include/exec/ramlist.h
On 1/30/25 18:02, David Hildenbrand wrote:
On 27.01.25 22:31, William Roche wrote:
From: William Roche
In case of a large page impacted by a memory error, provide
information about the impacted large page before the memory
error injection message.
This message would also appear on ras
From: William Roche
Hello David,
Here is the version with the small nits corrected.
And the 'Acked-by' entries you gave me for patches 1 and 2.
---
This set of patches fixes several problems with hardware memory errors
impacting hugetlbfs memory backed VMs and the generic memory reco
From: David Hildenbrand
We want to reuse the functionality when remapping RAM.
Signed-off-by: David Hildenbrand
Signed-off-by: William Roche
---
backends/hostmem.c | 155 -
1 file changed, 82 insertions(+), 73 deletions(-)
diff --git a/backends
From: William Roche
In case of a large page impacted by a memory error, provide
information about the impacted large page before the memory
error injection message.
This message would also appear on RAS-enabled ARM platforms, with
the introduction of an error injection message similar to x86's
From: William Roche
Repair poisoned memory location(s), calling ram_block_discard_range():
punching a hole in the backend file when necessary and regenerating
usable memory.
If the kernel doesn't support the madvise calls used by this function
and we are dealing with anonymous memory,
From: William Roche
Merging and dump settings are handled by the remap notification
in addition to memory policy and preallocation.
Signed-off-by: William Roche
---
system/physmem.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/system/physmem.c b/system/physmem.c
index 9fc74a5699
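A minimal sketch of re-applying merge/dump settings on the remapped range
(the flags mirror the madvise calls behind Qemu's qemu_madvise() wrapper;
the helper itself is illustrative, not the patch):

#define _GNU_SOURCE
#include <stdbool.h>
#include <stddef.h>
#include <sys/mman.h>

/* Sketch: re-apply merge/dump settings on a remapped range, roughly what
 * the remap notification handler does for a backend's merge/dump knobs. */
static void reapply_merge_dump(void *host, size_t size, bool merge, bool dump)
{
    madvise(host, size, merge ? MADV_MERGEABLE : MADV_UNMERGEABLE);
    madvise(host, size, dump ? MADV_DODUMP : MADV_DONTDUMP);
}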
From: David Hildenbrand
Notify registered listeners about the remap at the end of
qemu_ram_remap() so e.g., a memory backend can re-apply its
settings correctly.
Signed-off-by: David Hildenbrand
Signed-off-by: William Roche
---
hw/core/numa.c | 11 +++
include/exec/ramlist.h
Signed-off-by: David Hildenbrand
Signed-off-by: William Roche
---
backends/hostmem.c | 34 ++
include/sysemu/hostmem.h | 1 +
2 files changed, 35 insertions(+)
diff --git a/backends/hostmem.c b/backends/hostmem.c
index bf85d716e5..863f6da11d 100644
--- a/bac
From: William Roche
Repair poisoned memory location(s), calling ram_block_discard_range():
punching a hole in the backend file when necessary and regenerating
usable memory.
If the kernel doesn't support the madvise calls used by this function
and we are dealing with anonymous memory,