On Tue, 17 Oct 2017 17:09:26 +0100
"Daniel P. Berrange" <berra...@redhat.com> wrote:
> On Tue, Oct 17, 2017 at 06:06:35PM +0200, Igor Mammedov wrote:
> > On Tue, 17 Oct 2017 16:07:59 +0100
> > "Daniel P. Berrange" <berra...@redhat.com> wrote:
> > > On Tue, Oct 17, 2017 at 09:27:02AM +0200, Igor Mammedov wrote:
> > > > On Mon, 16 Oct 2017 17:36:36 +0100
> > > > "Daniel P. Berrange" <berra...@redhat.com> wrote:
> > > > > On Mon, Oct 16, 2017 at 06:22:50PM +0200, Igor Mammedov wrote:
> > > > > > This series allows the NUMA mapping to be configured at runtime
> > > > > > via the QMP/HMP interface. For that to happen it introduces a
> > > > > > new '-paused' CLI option, which pauses QEMU before
> > > > > > machine_init() is run, and adds new set-numa-node HMP/QMP
> > > > > > commands which, in conjunction with
> > > > > > info hotpluggable-cpus/query-hotpluggable-cpus, allow the NUMA
> > > > > > mapping for CPUs to be configured.
> > > > >
> > > > > What's the problem we're seeking to solve here compared to what
> > > > > we currently do for NUMA configuration?
> > > >
> > > > From RHBZ1382425:
> > > > "The current -numa CLI interface is quite limited in how it maps
> > > > CPUs to NUMA nodes, as it requires cpu_index values that are
> > > > non-obvious and depend on the machine/arch. As a result libvirt
> > > > has to assume/re-implement the cpu_index allocation logic to
> > > > provide valid values for the -numa cpus=... QEMU CLI option."
> > >
> > > In broad terms, this problem applies to every device/object libvirt
> > > asks QEMU to create. For everything else libvirt is able to assign
> > > an "id" string, which it can then use to identify the thing later.
> > > The CPU stuff is different because libvirt isn't able to provide
> > > 'id' strings for each CPU - QEMU generates a pseudo-id internally
> > > which libvirt has to infer. The latter is the same problem we had
> > > with devices before '-device' was introduced, allowing 'id' naming.
> > > IMHO we should take the same approach with CPUs and start modelling
> > > the individual CPUs as something we can explicitly create with
> > > -object or -device. That way libvirt can assign names and does not
> > > have to care about CPU index values, and it all works the same way
> > > as any other device/object we create.
> > >
> > > i.e. instead of:
> > >
> > >   -smp 8,sockets=4,cores=2,threads=1
> > >   -numa node,nodeid=0,cpus=0-3
> > >   -numa node,nodeid=1,cpus=4-7
> > >
> > > we could do:
> > >
> > >   -object numa-node,id=numa0
> > >   -object numa-node,id=numa1
> > >   -object cpu,id=cpu0,node=numa0,socket=0,core=0,thread=0
> > >   -object cpu,id=cpu1,node=numa0,socket=0,core=1,thread=0
> > >   -object cpu,id=cpu2,node=numa0,socket=1,core=0,thread=0
> > >   -object cpu,id=cpu3,node=numa0,socket=1,core=1,thread=0
> > >   -object cpu,id=cpu4,node=numa1,socket=2,core=0,thread=0
> > >   -object cpu,id=cpu5,node=numa1,socket=2,core=1,thread=0
> > >   -object cpu,id=cpu6,node=numa1,socket=3,core=0,thread=0
> > >   -object cpu,id=cpu7,node=numa1,socket=3,core=1,thread=0
> >
> > the follow-up question would be where "socket=3,core=1,thread=0"
> > comes from; currently these options are a function of
> > (-M foo -smp ...) and can be queried via query-hotpluggable-cpus at
> > runtime after QEMU parses the -M and -smp options.
>
> The sockets/cores/threads topology of CPUs is something that comes
> from the libvirt guest XML config.

In this case the things for libvirt to implement would be to know the
following details:
 1: which machine/machine version supports which set of attributes
 2: the valid values for these properties, depending on machine/machine
    version/CPU type

> Regards,
> Daniel
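For reference, the runtime flow the series proposes (start QEMU with '-paused', inspect query-hotpluggable-cpus, issue set-numa-node per CPU slot, then resume) could be driven over QMP roughly as sketched below. This is only an illustration: query-hotpluggable-cpus and cont are existing QMP commands, but set-numa-node is the command proposed by this series, and the exact argument names ("node-id" plus the topology properties) are assumptions, not a merged interface.

```python
import json

def qmp_cmd(name, **args):
    """Serialize a QMP command as a single JSON line, as a QMP
    client would write it to QEMU's monitor socket."""
    cmd = {"execute": name}
    if args:
        cmd["arguments"] = args
    return json.dumps(cmd, sort_keys=True)

# 1. Ask QEMU (started with -paused) which CPU slots exist for the
#    chosen -M/-smp combination; the reply carries the topology
#    properties (socket/core/thread) libvirt would otherwise have
#    to guess.
query = qmp_cmd("query-hotpluggable-cpus")

# 2. Map each reported slot to a NUMA node. The assignments below are
#    invented for illustration: two sockets split across two nodes.
assignments = [
    {"node-id": 0, "socket-id": 0, "core-id": 0, "thread-id": 0},
    {"node-id": 1, "socket-id": 1, "core-id": 0, "thread-id": 0},
]
mapping_cmds = [qmp_cmd("set-numa-node", **a) for a in assignments]

# 3. Resume the paused machine so machine_init() runs with the
#    completed NUMA mapping.
resume = qmp_cmd("cont")

print(query)
for c in mapping_cmds:
    print(c)
print(resume)
```

The point of the sketch is the ordering: the mapping is derived from what the machine itself reports at runtime, instead of from cpu_index values that libvirt must re-derive from QEMU internals.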