----- Original Message -----
> From: "Eduardo Habkost" <ehabk...@redhat.com>
> To: "Igor Mammedov" <imamm...@redhat.com>
> Cc: "Greg Kurz" <gr...@kaod.org>, "Thomas Huth" <th...@redhat.com>, 
> qemu-devel@nongnu.org, pbonz...@redhat.com
> Sent: Tuesday, August 2, 2016 8:00:06 PM
> Subject: Re: [Qemu-devel] [PATCH] numa: set the memory backend "is_mapped" 
> field
> 
> On Tue, Jul 19, 2016 at 12:07:53PM +0200, Igor Mammedov wrote:
> > On Tue, 19 Jul 2016 10:28:35 +0200
> > Greg Kurz <gr...@kaod.org> wrote:
> > 
> > > Commit 2aece63 "hostmem: detect host backend memory is being used
> > > properly" added a way to know whether a memory backend is busy or
> > > available for use. It caused a slight regression when the same
> > > backend is passed to both a NUMA node and a pc-dimm device:
> > > 
> > > -m 1G,slots=2,maxmem=2G \
> > > -object memory-backend-ram,size=1G,id=mem-mem1 \
> > > -device pc-dimm,id=dimm-mem1,memdev=mem-mem1 \
> > > -numa node,nodeid=0,memdev=mem-mem1
> > > 
> > > Before commit 2aece63, this would cause QEMU to print an error
> > > message and exit gracefully:
> > > 
> > > qemu-system-ppc64: -device pc-dimm,id=dimm-mem1,memdev=mem-mem1:
> > >     can't use already busy memdev: mem-mem1
> > > 
> > > Since commit 2aece63, QEMU hits an assertion in the memory code:
> > > 
> > > qemu-system-ppc64: memory.c:1934: memory_region_add_subregion_common:
> > >     Assertion `!subregion->container' failed.
> > > Aborted
> > > 
> > > This happens because pc-dimm devices no longer use
> > > memory_region_is_mapped() and so cannot detect that the backend is
> > > already in use by a NUMA node.
> > > 
> > > Let's restore the previous behavior by having the NUMA code also
> > > call host_memory_backend_set_mapped() when it uses a backend
> > > (both sides are sketched below).
> > > 
> > > Fixes: 2aece63c8a9d2c3a8ff41d2febc4cdeff2633331
> > > Signed-off-by: Greg Kurz <gr...@kaod.org>
> > Reviewed-by: Igor Mammedov <imamm...@redhat.com>
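
For context, a minimal sketch of the two sides involved, assuming the
QEMU code paths of that time (function names from commit 2aece63 and
the numa.c allocation path; the exact context may differ from the
applied patch). Since 2aece63, pc-dimm rejects a busy backend in
pc_dimm_realize() roughly like this:

    if (host_memory_backend_is_mapped(dimm->hostmem)) {
        char *path =
            object_get_canonical_path_component(OBJECT(dimm->hostmem));
        error_setg(errp, "can't use already busy memdev: %s", path);
        g_free(path);
        return;
    }

and the fix has the NUMA code flip the same flag when it maps a
backend's memory region, e.g. in memory_region_allocate_system_memory():

    /* for each node: backend = numa_info[i].node_memdev */
    seg = host_memory_backend_get_memory(backend, &error_fatal);
    memory_region_add_subregion(mr, addr, seg);
    /* Mark the backend as in use so that a later pc-dimm with the
     * same memdev fails cleanly in pc_dimm_realize() instead of
     * tripping the !subregion->container assertion in memory.c.
     */
    host_memory_backend_set_mapped(backend, true);
    addr += memory_region_size(seg);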
> 
> I have just noticed that this fell through the cracks, sorry.
> Applied to numa-next. Thanks!

Actually, I am just finishing testing a pull request that includes it
(since I was the one who caused the regression).  Okay for me to just
handle this one patch?

Paolo
