cpufeat_mask() yields an unsigned integer constant.  As a result, its complement is zero extended rather than sign extended when it is widened to the 64-bit mask value.
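[ Illustration, not part of the patch: a minimal standalone sketch of the promotion behaviour described above.  It assumes cpufeat_mask() expands to an unsigned 32-bit constant along the lines of (1u << bit), with OSXSAVE sitting at bit 27 of the low half of the combined mask; demo_cpufeat_mask() below is a stand-in, not Xen's actual macro. ]

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for cpufeat_mask(): an unsigned 32-bit constant (assumption). */
    #define demo_cpufeat_mask(bit) (1u << (bit))

    int main(void)
    {
        uint64_t val = UINT64_MAX;              /* combined 1d:1c mask, all bits set */

        /*
         * ~(1u << 27) is the 32-bit value 0xf7ffffff.  For the AND it is
         * zero extended to 0x00000000f7ffffff, so the upper 32 bits of
         * val (the 1d features) are cleared as well.
         */
        uint64_t buggy = val & ~demo_cpufeat_mask(27);

        /* Widening before the complement keeps the upper 32 bits intact. */
        uint64_t fixed = val & ~(uint64_t)demo_cpufeat_mask(27);

        printf("buggy: %016" PRIx64 "\n", buggy);   /* 00000000f7ffffff */
        printf("fixed: %016" PRIx64 "\n", fixed);   /* fffffffff7ffffff */
        return 0;
    }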
The result is that, when a guest OS has OSXSAVE disabled, all features in 1d (CPUID leaf 1 %edx) are hidden from native CPUID.  Amongst other things, this causes the early code in Linux to find no LAPIC, but for everything to appear fine later when userspace is up and running.

Signed-off-by: Andrew Cooper <andrew.coop...@citrix.com>
---
CC: Jan Beulich <jbeul...@suse.com>

I wonder whether a better fix might be to put an explicit (int) cast in
cpufeat_mask() to yield a signed constant?  (A sketch of that alternative
follows after the patch.)
---
 xen/arch/x86/cpu/intel.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index a9355cbf..7b60aaa 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -192,7 +192,7 @@ static void intel_ctxt_switch_levelling(const struct vcpu *next)
 	 */
 	if (next && is_pv_vcpu(next) && !is_idle_vcpu(next) &&
 	    !(next->arch.pv_vcpu.ctrlreg[4] & X86_CR4_OSXSAVE))
-		val &= ~cpufeat_mask(X86_FEATURE_OSXSAVE);
+		val &= ~(uint64_t)cpufeat_mask(X86_FEATURE_OSXSAVE);
 
 	if (unlikely(these_masks->_1cd != val)) {
 		wrmsrl(msr_basic, val);
-- 
2.1.4
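[ Also an illustration, not part of the patch: a sketch of the alternative asked about above, i.e. making the mask macro yield a signed constant so that the complement effectively sign extends.  demo_cpufeat_mask_signed() is a hypothetical variant, not Xen's actual macro. ]

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical variant carrying the explicit (int) cast. */
    #define demo_cpufeat_mask_signed(bit) ((int)(1u << (bit)))

    int main(void)
    {
        uint64_t val = UINT64_MAX;              /* combined 1d:1c mask, all bits set */

        /*
         * ~(int)0x08000000 is a negative int with bit pattern 0xf7ffffff.
         * Converting it to uint64_t for the AND yields 0xfffffffff7ffffff
         * (modular conversion, equivalent to sign extension here), so only
         * the intended bit is cleared.
         */
        val &= ~demo_cpufeat_mask_signed(27);

        printf("%016" PRIx64 "\n", val);        /* fffffffff7ffffff */
        return 0;
    }

One wrinkle with the cast approach: for a feature in bit 31 of a word, converting 1u << 31 to int is implementation-defined, so complementing after widening may still be the safer spelling.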