On Fri, Dec 14, 2012 at 06:47:20PM +0100, Igor Mammedov wrote:
> On Fri, 14 Dec 2012 15:36:22 -0200
> Eduardo Habkost <ehabk...@redhat.com> wrote:
>
> > On Fri, Dec 14, 2012 at 06:20:41PM +0100, Andreas Färber wrote:
> > > On 14.12.2012 17:52, Eduardo Habkost wrote:
> > > > On Fri, Dec 14, 2012 at 04:14:32PM +0100, Andreas Färber wrote:
> > > >> On 12.12.2012 23:22, Eduardo Habkost wrote:
> [...]
> > >
> > > >> The clock code using first_cpu looks solvable; what about CR4 and MSR
> > > >> helpers, how performance-sensitive are they? (if they're not yet using
> > > >> X86CPU for something else)
> > > >
> > > > I guess any CPU-state code inside QEMU is not performance-sensitive, as
> > > > it would already require switching between KVM kernelspace and QEMU
> > > > userspace.
> > >
> > > I mean target-i386/[misc_]helper.c and thus TCG, IIUC. :)
> >
> > Oh, right. I wonder how much performance impact it would have, if people
> > are already using TCG.
> >
> > Anyway, would this really have any impact at all? I mean:
> > ENV_GET_CPU(env) is basically subtracting a constant offset from 'env'.
> > So I expect similar code to be generated, just using a different offset
> > from 'env' to get the cpuid_features field.
> ENV_GET_CPU(env) does dynamic_cast which is expensive.
Oh, I didn't notice that. So the alternatives I see are:

 - Use ENV_GET_CPU() and risk performance problems;
 - Write a FAST_ENV_GET_CPU() macro for performance-sensitive code that
   doesn't use dynamic_cast (see the sketch at the end of this message);
 - Keep the fields in CPUX86State and move them only after we change the
   TCG code to use the QOM CPU object.

Personally, I prefer the third option. Moving fields out of CPUArchState
before most code is converted to use the CPU QOM objects instead of
CPUArchState sounds like a painful task.

-- 
Eduardo
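
For reference, a minimal sketch of what a FAST_ENV_GET_CPU() along the lines
of the second option could look like: plain container_of-style pointer
arithmetic, so the compiler resolves the offset at build time instead of
going through a checked QOM cast. The struct layouts and field names below
are simplified stand-ins chosen for illustration, not the actual QEMU
definitions; only the idea of X86CPU embedding the CPUX86State as 'env' is
taken from the discussion above.

/*
 * Sketch only, not QEMU code: shows why a container_of-style
 * FAST_ENV_GET_CPU() boils down to subtracting a constant offset from
 * 'env', with no run-time type check involved.
 */
#include <stddef.h>
#include <stdio.h>

typedef struct CPUX86State {
    unsigned int cpuid_features;      /* field the TCG helpers want to read */
} CPUX86State;

typedef struct X86CPU {
    int parent_obj_placeholder;       /* stands in for the QOM parent object */
    CPUX86State env;                  /* embedded architecture state */
} X86CPU;

/* Pure pointer arithmetic: the offset is a compile-time constant. */
#define FAST_ENV_GET_CPU(e) \
    ((X86CPU *)((char *)(e) - offsetof(X86CPU, env)))

int main(void)
{
    X86CPU cpu = { .env.cpuid_features = 0x80000001 };
    CPUX86State *env = &cpu.env;

    /* Recover the containing X86CPU from the env pointer. */
    X86CPU *x = FAST_ENV_GET_CPU(env);
    printf("cpuid_features = 0x%x\n", x->env.cpuid_features);
    return 0;
}

The point being that this generates the same kind of code as reading the
field directly from CPUX86State with a different offset, whereas a checked
cast (Igor's dynamic_cast point) has to consult type information at run time
on every call.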