On Fri, Feb 07, 2025 at 07:02:22PM +0100, William Roche wrote:
> On 2/5/25 18:07, Peter Xu wrote:
> > On Wed, Feb 05, 2025 at 05:27:13PM +0100, William Roche wrote:
> > > [...]
> > > The HMP command "info ramblock" is implemented with the ram_block_format()
> > > function, which returns a message buffer built with a string for each
> > > ramblock (protected by the RCU_READ_LOCK_GUARD). Our new function copies a
> > > struct with the necessary information.
> > > 
> > > Relying on the buffer format to retrieve the information doesn't seem
> > > reasonable, and more importantly, this buffer doesn't provide all the
> > > needed data, like fd and fd_offset.
> > > 
> > > I would say that ram_block_format() and qemu_ram_block_info_from_addr()
> > > serve 2 different goals.
> > > 
> > > (A reimplementation of ram_block_format() with an adapted version of
> > > qemu_ram_block_info_from_addr() taking the extra information needed could
> > > be doable, for example, but may not be worth doing for now.)
> > 
> > IIUC the admin should be aware of fd_offset because the admin should be
> > fully aware of the start offset of FDs to specify in qemu cmdlines, or in
> > Libvirt. But yes, we can always add fd_offset into ram_block_format() if
> > it's helpful.
> > 
> > Besides, the existing issues on this patch:
> > 
> >   - From the outcome of this patch, it introduces one ramblock API (which
> >     is ok to me, so far) to do some error_report()s. It looks pretty much
> >     like debugging rather than something serious (e.g. reporting via QMP
> >     queries, QMP events, etc.). From a debug POV, I still don't see why
> >     this is needed.. as discussed above.
> 
> The reason why I want to inform the user of a large memory failure more
> specifically than a standard sized page loss is because of the significant
> behavior difference: our current implementation can transparently handle
> many situations without necessarily leading the VM to a crash. But when it
> comes to large pages, there is no mechanism to inform the VM of a large
> memory loss, and usually this situation leads the VM to crash; it can also
> generate some weird situations, like qemu itself crashing or a loop of
> errors, for example.
> 
> So having a message informing of such a memory loss can help to understand
> a more radical VM or qemu behavior -- it increases the diagnosability of
> our code.
> 
> To verify that a SIGBUS appeared because of a large page loss, we currently
> need to verify the targeted memory block backend page_size.
> We should usually get this information from the SIGBUS siginfo data (with a
> si_addr_lsb field giving an indication of the page size), but a KVM
> weakness with a hardcoded si_addr_lsb=PAGE_SHIFT value in the SIGBUS
> siginfo returned from the kernel prevents that: see the
> kvm_send_hwpoison_signal() function.
> 
> So I first wrote a small API addition called qemu_ram_pagesize_from_addr()
> to retrieve only this page_size value from the impacted address; later on,
> this function turned into the richer qemu_ram_block_info_from_addr()
> function so that the generated messages match the existing memory messages,
> as rightly requested by David.
> 
> So the main reason is a KVM "weakness" with kvm_send_hwpoison_signal(), and
> the second reason is to have richer error messages.
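
For context, the userspace side of the problem described above would ideally
look something like the sketch below: read si_addr_lsb from the SIGBUS
siginfo, and fall back to QEMU's own RAMBlock metadata when that value cannot
be trusted. This is only an illustration, not the actual patch;
qemu_ram_pagesize_from_addr() is the helper proposed above and its exact
signature is assumed here, as is the simplified handler shape.

/*
 * Sketch only (not the actual patch): what a SIGBUS consumer would ideally
 * do with si_addr_lsb, and why a lookup in QEMU's own RAMBlock metadata is
 * needed instead.  qemu_ram_pagesize_from_addr() is the helper proposed
 * above; its exact signature is assumed here.
 */
#include <signal.h>
#include <stddef.h>

/* Assumed prototype: return the backend page size covering host_addr. */
size_t qemu_ram_pagesize_from_addr(void *host_addr);

static void sigbus_handler(int sig, siginfo_t *si, void *ctx)
{
    (void)sig;
    (void)ctx;

    if (si->si_code == BUS_MCEERR_AR || si->si_code == BUS_MCEERR_AO) {
        /* What the kernel claims was poisoned: 1 << si_addr_lsb bytes. */
        size_t reported = (size_t)1 << si->si_addr_lsb;

        /*
         * On x86, KVM hardcodes si_addr_lsb to PAGE_SHIFT in
         * kvm_send_hwpoison_signal(), so for vCPU-context faults
         * 'reported' is always the base page size even when a hugetlb
         * page was lost.  Hence the lookup against the RAMBlock backend
         * page size:
         */
        size_t backend = qemu_ram_pagesize_from_addr(si->si_addr);

        if (backend > reported) {
            /* A large page was poisoned, not just a base page. */
        }
    }
}
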
This seems true, and I also remember noticing something about this when I
looked at it previously, but maybe nobody tried to fix it. ARM seems to be
correct on that field, OTOH. Is it possible to fix KVM on x86?
kvm_handle_error_pfn() has the fault context, so IIUC it should be able to
figure that out too, like what ARM does (with get_vma_page_shift()). A rough,
untested sketch of this idea is appended at the end of this mail.

> >   - From a merge POV, this patch isn't a pure memory change, so I'll need
> >     to get an ack from other maintainers; at least that should be how it
> >     works..
> 
> I agree :)
> 
> > 
> > I feel like when hwpoison becomes a serious topic, we need some more
> > serious reporting facility than error reports, so that we could have this
> > as a separate topic to be revisited. It might speed up your prior patches
> > from not being blocked on this.
> 
> I explained why I think that error messages are important, but I don't want
> to get blocked on fixing the hugepage memory recovery because of that.

What is the major benefit of reporting in QEMU's stderr in this case? For
example, how should we consume the error reports that this patch introduces?
Is it still for debugging purposes?

I agree it's always better to dump something in QEMU when such an error
happens, but IIUC what I mentioned above (monitoring QEMU's ramblock setup,
and monitoring the host dmesg for any vaddr reported as hwpoisoned) should
also allow anyone to deduce the page size of the affected vaddr, especially
if it's for debugging purposes. However, I could have missed the goal here..

> If you think that not displaying a specific message for large page loss can
> help to get the recovery fixed, then I can change my proposal to do so.
> 
> Early next week, I'll send a simplified version of my first 3 patches
> without these specific messages and without the preallocation handling in
> all remap cases, so you can evaluate this possibility.

Yes, IMHO it'll always be helpful to separate it if possible.

Thanks,

-- 
Peter Xu
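
For illustration, roughly what such an x86-side change (as suggested earlier
in this mail) could look like, mirroring arm64's behaviour of deriving the
SIGBUS lsb from the backing VMA rather than hardcoding PAGE_SHIFT. This is an
untested sketch: the exact place to do the VMA walk, the locking context, and
the hugetlb-only handling are assumptions, not a real patch.

#include <linux/kvm_host.h>
#include <linux/hugetlb.h>
#include <linux/mm.h>

/*
 * Untested sketch: have x86's kvm_send_hwpoison_signal() report the real
 * mapping size instead of hardcoding PAGE_SHIFT, similar in spirit to the
 * arm64 path that derives the shift from the backing VMA.  Only hugetlbfs
 * mappings (what QEMU hugepage backends use) are handled here.
 */
static void kvm_send_hwpoison_signal(struct kvm_memory_slot *slot, gfn_t gfn)
{
	unsigned long hva = gfn_to_hva_memslot(slot, gfn);
	struct vm_area_struct *vma;
	short lsb = PAGE_SHIFT;

	/* Find the VMA backing the faulting hva to get its page granularity. */
	mmap_read_lock(current->mm);
	vma = vma_lookup(current->mm, hva);
	if (vma && is_vm_hugetlb_page(vma))
		lsb = huge_page_shift(hstate_vma(vma));
	mmap_read_unlock(current->mm);

	/* si_addr_lsb now reflects the actual poisoned mapping size. */
	send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva, lsb, current);
}
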