On Tue, 17 Oct 2017 16:35:15 +0100 "Daniel P. Berrange" <berra...@redhat.com> wrote:
> On Tue, Oct 17, 2017 at 05:21:13PM +0200, Laszlo Ersek wrote:
> > On 10/17/17 16:48, Daniel P. Berrange wrote:
> > > On Mon, Oct 16, 2017 at 07:01:01PM +0200, Paolo Bonzini wrote:
> > >> On 16/10/2017 18:59, Eduardo Habkost wrote:
> > >>>> +DEF("paused", HAS_ARG, QEMU_OPTION_paused, \
> > >>>> +    "-paused [state=]postconf|preconf\n"
> > >>>> +    "                postconf: pause QEMU after machine is initialized\n"
> > >>>> +    "                preconf: pause QEMU before machine is initialized\n",
> > >>>> +    QEMU_ARCH_ALL)
> > >>> I would like to allow pausing before machine-type is selected, so
> > >>> management could run query-machines before choosing a
> > >>> machine-type. Would that need a third "-pause" mode, or will we
> > >>> be able to change "preconf" to pause before select_machine() is
> > >>> called?
> > >>>
> > >>> The same probably applies to other things initialized before
> > >>> machine_run_board_init() that could be configurable using QMP,
> > >>> including but not limited to:
> > >>> * Accelerator configuration
> > >>> * Registering global properties
> > >>> * RAM size
> > >>> * SMP/CPU configuration
> > >>
> > >> Should (or could) "-M none" be changed in a backwards-compatible way to
> > >> allow such preconfiguration? For example
> > >>
> > >>   qemu -M none -monitor stdio
> > >>   (qemu) machine-set-options pc,accel=kvm
> > >>   (qemu) c
> > >
> > > Going down this route has pretty major implications for the way libvirt
> > > manages QEMU, and for its support and debugging. When you look at the
> > > QEMU command line libvirt uses, it will be almost devoid of any useful
> > > info, so it will be a more involved job to figure out just how QEMU is
> > > configured. This also means it is difficult to replicate the config
> > > that libvirt has used outside of libvirt, for the sake of debugging.
> > >
> > > I also think it will have pretty significant performance implications
> > > for QEMU startup.
> > > Configuring a guest via the monitor is going to
> > > require a huge number of monitor commands to be executed to replicate
> > > what we traditionally configured via ARGV. While each monitor command
> > > is not massively slow, the round-trip time of each command will quickly
> > > add up to several hundred milliseconds, perhaps even seconds in the
> > > case of very large configs.
> > >
> > > Maybe we ultimately have no choice and this is inevitable, but I am
> > > pretty wary of going in the direction of launching bare QEMU and
> > > configuring everything via a huge number of monitor calls.
> >
> > Where's the sweet spot between
> > - configuring everything dynamically, over QMP,
> > - and invoking QEMU separately, for querying capabilities etc?
>
> The key with the way we currently invoke & query QEMU over QMP to detect
> capabilities is that this is not tied to a specific VM launch process.
> We can query capabilities and cache them until such time as we detect
> a QEMU binary change, so this never impacts the startup performance
> of individual VMs. The caching is critical, because querying capabilities
> is already quite time intensive, taking many seconds to query
> capabilities across all the different target binaries we have.

Is there another alternative for the use case where the values of one
option (-numa cpu) depend on the values of other options (-M + -smp + -cpu)?
So far we have two options on the table:
 1: do configuration at runtime, as in this series
 2: start QEMU twice: the first run queries the CPU layout, and the second
    run adds -numa options built from the data gathered in the first step

> Regards,
> Daniel
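For illustration, option 2 could look roughly like the sketch below. The embedded JSON is only an invented sample of the shape a query-hotpluggable-cpus reply might take for a 2-socket/2-core -smp layout (the real reply depends on the machine type and binary), and the socket-to-node mapping is an arbitrary example policy, not anything QEMU prescribes:

```python
# Sketch of option 2: a first QEMU run (with -S or similar) answers
# query-hotpluggable-cpus over QMP; management then derives -numa cpu
# arguments for the second, real launch.
import json

# Illustrative sample reply only -- real output varies by -M/-smp/-cpu.
sample_qmp_reply = json.loads("""
{"return": [
  {"type": "qemu64-x86_64-cpu", "vcpus-count": 1,
   "props": {"socket-id": 0, "core-id": 0, "thread-id": 0}},
  {"type": "qemu64-x86_64-cpu", "vcpus-count": 1,
   "props": {"socket-id": 0, "core-id": 1, "thread-id": 0}},
  {"type": "qemu64-x86_64-cpu", "vcpus-count": 1,
   "props": {"socket-id": 1, "core-id": 0, "thread-id": 0}},
  {"type": "qemu64-x86_64-cpu", "vcpus-count": 1,
   "props": {"socket-id": 1, "core-id": 1, "thread-id": 0}}
]}
""")

def numa_cpu_args(cpus, node_for_socket):
    """Map each hotpluggable CPU slot to a NUMA node by its socket-id."""
    args = []
    for cpu in cpus:
        p = cpu["props"]
        node = node_for_socket[p["socket-id"]]
        args.append("-numa cpu,node-id=%d,socket-id=%d,core-id=%d,thread-id=%d"
                    % (node, p["socket-id"], p["core-id"], p["thread-id"]))
    return args

# Example policy: socket 0 -> node 0, socket 1 -> node 1.
args = numa_cpu_args(sample_qmp_reply["return"], {0: 0, 1: 1})
```

The obvious downside, as discussed above, is paying QEMU startup cost twice per VM, which is what option 1 (runtime configuration in a paused/preconfig state) is trying to avoid.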