Add zhangyi

-----Original Message-----
From: Marc-André Lureau [mailto:marcandre.lur...@gmail.com]
Sent: Wednesday, September 19, 2018 6:30 PM
To: Dr. David Alan Gilbert <dgilb...@redhat.com>
Cc: He, Junyan <junyan...@intel.com>; Laszlo Ersek <ler...@redhat.com>; Eduardo Habkost <ehabk...@redhat.com>; Michael S. Tsirkin <m...@redhat.com>; Stefan Berger <stef...@linux.vnet.ibm.com>; QEMU <qemu-devel@nongnu.org>; Paolo Bonzini <pbonz...@redhat.com>; Igor Mammedov <imamm...@redhat.com>; Richard Henderson <r...@twiddle.net>; alex.william...@redhat.com
Subject: Re: [Qemu-devel] [PATCH v10 6/6] tpm: add ACPI memory clear interface

Hi

On Tue, Sep 18, 2018 at 7:49 PM Dr. David Alan Gilbert <dgilb...@redhat.com> wrote:
>
> * Marc-André Lureau (marcandre.lur...@gmail.com) wrote:
> > Hi
> >
> > On Tue, Sep 11, 2018 at 6:19 PM Laszlo Ersek <ler...@redhat.com> wrote:
> > >
> > > +Alex, due to mention of 21e00fa55f3fd
> > >
> > > On 09/10/18 15:03, Marc-André Lureau wrote:
> > > > Hi
> > > >
> > > > On Mon, Sep 10, 2018 at 2:44 PM Dr. David Alan Gilbert
> > > > <dgilb...@redhat.com> wrote:
> > > >> (I didn't know about guest_phys_block* and would have probably
> > > >> just used qemu_ram_foreach_block )
> > > >>
> > > >
> > > > guest_phys_block*() seems to fit, as it lists only the blocks
> > > > actually used, and already skips the device RAM.
> > > >
> > > > Laszlo, you wrote the functions
> > > > (https://git.qemu.org/?p=qemu.git;a=commit;h=c5d7f60f0614250bd925071e25220ce5958f75d0),
> > > > do you think it's appropriate for listing the memory to clear,
> > > > or should we rather use qemu_ram_foreach_block() ?
> > >
> > > Originally, I would have said, "use either, doesn't matter".
> > > Namely, when I introduced the guest_phys_block*() functions, the
> > > original purpose was not related to RAM *contents*, but to RAM
> > > *addresses* (GPAs). This is evident if you look at the direct
> > > child commit of c5d7f60f0614, namely 56c4bfb3f07f, which put
> > > GuestPhysBlockList to use.
> > > And, for your use case (= wiping RAM), GPAs don't matter, only
> > > contents matter.
> > >
> > > However, with the commits I mentioned previously, namely
> > > e4dc3f5909ab9 and 21e00fa55f3fd, we now filter out some RAM blocks
> > > from the dumping based on contents / backing as well. I think? So
> > > I believe we should honor that for the wiping too. I guess I'd
> > > (vaguely) suggest using guest_phys_block*().
> > >
> > > (And then, as Dave suggests, maybe extend the filter to consider
> > > pmem too, separately.)
> >
> > I looked a bit into skipping pmem memory. The issue is that RAMBlock
> > and MemoryRegion have no idea they are actually used for nvram (you
> > could rely on the hostmem.pmem flag, but it is optional), and I don't
> > see a clear way to figure this out.
>
> I think the pmem flag is what we should use; the problem though is we

That would be much simpler. But what if you set up an nvdimm backend with non-pmem memory? Will it always be cleared? What about platforms that do not support libpmem?

> have two different pieces of semantics:
> a) PMEM - needs special flush instructions/calls
> b) PMEM - my data is persistent, please don't clear me
>
> Do those always go together?
>
> (Copying in Junyan He who added the RAM_PMEM flag)
>
> > I can imagine retrieving the MemoryRegion from a guest phys address,
> > then checking that the owner is TYPE_NVDIMM, for example. Is this a
> > good solution?
>
> No, I think it's up to whatever creates the region to set a flag
> somewhere properly - there's no telling whether it'll always be NVDIMM
> or some other object.

We could make the owner object set a flag on the MemoryRegion, or implement a common NV interface.

> > There is memory_region_from_host(), is there a memory_region_from_guest() ?
> >
> > thanks
> >
> > --
> > Marc-André Lureau
>
> Dave
>
> --
> Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK

--
Marc-André Lureau