On Fri, Feb 24, 2017 at 01:48:46PM +0000, Peter Maydell wrote:
> On 22 February 2017 at 18:39, Eduardo Habkost <ehabk...@redhat.com> wrote:
> > Changes v1 -> v2:
> > * Fix build without CONFIG_KVM at lmce_supported()
> > * Rebased on top of my x86-next branch:
> >   https://github.com/ehabkost/qemu x86-next
> >
> > Git branch for testing:
> > https://github.com/ehabkost/qemu-hacks work/x86-cpu-max-tcg
> >
> > libvirt code to use the new feature already exist, and were
> > submitted to libvir-list, at:
> > https://www.mail-archive.com/libvir-list@redhat.com/msg142168.html
> >
> > ---
> >
> > This is a replacement for the previous series that enabled the
> > "host" CPU model on TCG. Now a new "max" CPU is being added,
> > while keeping "host" KVM-specific.
> >
> > In addition to simply adding "max" as a copy of the existing
> > "host" CPU model, additional patches change it to not use any
> > host CPUID information when in TCG mode.
>
> I had a look at implementing this for ARM, and ran into problems
> because of how we've done '-cpu host'. For us the "host" CPU
> type is registered dynamically when kvm_init() is called,
> because (a) it only exists if -enable-kvm and (b) it probes
> the kernel to find out what's available. So I could easily
> add 'max' in the same place; but then where should I add the
> type definition of 'max' for the non-KVM case?
I recommend registering the type unconditionally, moving KVM-specific
initialization to ->instance_init() and/or ->realize() (being careful to
make instance_init() not crash if KVM is disabled), and making ->realize()
fail if KVM is disabled. IIRC, we did exactly that on x86 a while ago.

--
Eduardo
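[Editor's note: a minimal sketch of the pattern Eduardo describes, applied to
the ARM case Peter asks about. The type name "max-arm-cpu", the function
names, and the TCG fallback are placeholders for illustration, not the patch
that was actually written: the type is registered unconditionally via
type_init(), and instance_init() only touches KVM when kvm_enabled() is true.]

    #include "qemu/osdep.h"
    #include "cpu.h"
    #include "kvm_arm.h"
    #include "sysemu/kvm.h"

    static void arm_max_cpu_initfn(Object *obj)
    {
        ARMCPU *cpu = ARM_CPU(obj);

        if (kvm_enabled()) {
            /* KVM case: probe the host kernel for supported features. */
            kvm_arm_set_cpu_features_from_host(cpu);
        } else {
            /*
             * TCG case: instance_init() must not touch KVM at all;
             * fill in a fixed "everything TCG supports" feature set here.
             */
        }
    }

    static const TypeInfo arm_max_cpu_type_info = {
        .name          = "max-" TYPE_ARM_CPU,
        .parent        = TYPE_ARM_CPU,
        .instance_init = arm_max_cpu_initfn,
    };

    static void arm_max_cpu_register_types(void)
    {
        /* Registered unconditionally, not from kvm_init(). */
        type_register_static(&arm_max_cpu_type_info);
    }

    type_init(arm_max_cpu_register_types)

[For a KVM-only model such as "host", ->realize() would additionally report
an error with error_setg() when kvm_enabled() is false, as the reply
suggests.]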