On Tue, Oct 17, 2017 at 05:42:19PM +0200, Laszlo Ersek wrote:
> On 10/17/17 17:35, Daniel P. Berrange wrote:
> > On Tue, Oct 17, 2017 at 05:21:13PM +0200, Laszlo Ersek wrote:
> >> On 10/17/17 16:48, Daniel P. Berrange wrote:
> >>> On Mon, Oct 16, 2017 at 07:01:01PM +0200, Paolo Bonzini wrote:
> >>>> On 16/10/2017 18:59, Eduardo Habkost wrote:
> >>>>>> +DEF("paused", HAS_ARG, QEMU_OPTION_paused, \
> >>>>>> +    "-paused [state=]postconf|preconf\n"
> >>>>>> +    "                postconf: pause QEMU after machine is initialized\n"
> >>>>>> +    "                preconf: pause QEMU before machine is initialized\n",
> >>>>>> +    QEMU_ARCH_ALL)
> >>>>> I would like to allow pausing before machine-type is selected, so
> >>>>> management could run query-machines before choosing a
> >>>>> machine-type. Would that need a third "-pause" mode, or will we
> >>>>> be able to change "preconf" to pause before select_machine() is
> >>>>> called?
> >>>>>
> >>>>> The same probably applies to other things initialized before
> >>>>> machine_run_board_init() that could be configurable using QMP,
> >>>>> including but not limited to:
> >>>>> * Accelerator configuration
> >>>>> * Registering global properties
> >>>>> * RAM size
> >>>>> * SMP/CPU configuration
> >>>>
> >>>> Should (or could) "-M none" be changed in a backwards-compatible way to
> >>>> allow such preconfiguration? For example
> >>>>
> >>>>   qemu -M none -monitor stdio
> >>>>   (qemu) machine-set-options pc,accel=kvm
> >>>>   (qemu) c
> >>>
> >>> Going down this route has pretty major implications for the way libvirt
> >>> manages QEMU, and for support / debugging of it. When you look at the
> >>> QEMU command line libvirt uses, it will be almost devoid of any useful
> >>> info, so it will be a more involved job to figure out just how QEMU is
> >>> configured. This also means it is difficult to replicate the config
> >>> that libvirt has used, outside of libvirt, for the sake of debugging.
> >>>
> >>> I also think it will have pretty significant performance implications
> >>> for QEMU startup. Configuring a guest via the monitor is going to
> >>> require a huge number of monitor commands to be executed to replicate
> >>> what we traditionally configured via ARGV. While each monitor command
> >>> is not massively slow, the round-trip time of each command will quickly
> >>> add up to several hundred milliseconds, perhaps even seconds in the
> >>> case of very large configs.
> >>>
> >>> Maybe we ultimately have no choice and this is inevitable, but I am
> >>> pretty wary of going in the direction of launching bare QEMU and
> >>> configuring everything via a huge number of monitor calls.
> >>
> >> Where's the sweet spot between
> >> - configuring everything dynamically, over QMP,
> >> - and invoking QEMU separately, for querying capabilities etc?
> >
> > The key with the way we currently invoke & query QEMU over QMP to detect
> > capabilities is that this is not tied to a specific VM launch process.
> > We can query capabilities and cache them until such time as we detect
> > a QEMU binary change, so this never impacts the startup performance
> > of individual VMs. The caching is critical, because querying
> > capabilities is already quite time intensive, taking many seconds to
> > query capabilities on all the different target binaries we have.
>
> (Sorry about hijacking the thread, but I can't stop asking :) )
>
> This looks very smart -- for my own education, how does libvirtd detect
> a QEMU binary change? Based on executable mtime, size, checksum? Are
> perhaps the <emulator> elements of individual domains involved?
We store the capabilities info in an XML file in /var, and this contains
the ctime of libvirtd and/or QEMU, as well as the libvirt version number.
If any of those change, the cache is invalidated.

Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
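[Editor's note: the invalidation scheme Daniel describes -- record the ctime of the binaries and the libvirt version alongside the cached capabilities, and treat any mismatch as a stale cache -- can be sketched as follows. This is an illustrative simplification, not libvirt's actual code; the cache file path, element names, and version constant are assumptions.]

```python
import os
import xml.etree.ElementTree as ET

# Hypothetical stand-in for the running daemon's version string.
LIBVIRT_VERSION = "3.8.0"

def cache_is_valid(cache_path, libvirtd_path, qemu_path):
    """Return True if a cached capabilities XML file is still usable.

    The cache records the ctime of libvirtd and of the QEMU binary,
    plus the libvirt version; a change in any of them invalidates it.
    (The element names below are illustrative, not a real schema.)
    """
    if not os.path.exists(cache_path):
        return False
    root = ET.parse(cache_path).getroot()
    recorded = (
        root.findtext("selfctime"),
        root.findtext("qemuctime"),
        root.findtext("libvirtversion"),
    )
    current = (
        str(int(os.stat(libvirtd_path).st_ctime)),
        str(int(os.stat(qemu_path).st_ctime)),
        LIBVIRT_VERSION,
    )
    return recorded == current
```

The point of keying on ctime rather than, say, a checksum is that a ctime comparison is a single stat() call, so the validity check itself stays cheap enough to run on every VM start without eating into the time the cache was meant to save.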