On 11.03.2026 10:25, Alejandro Vallejo wrote:
> On Wed Mar 11, 2026 at 9:35 AM CET, Jan Beulich wrote:
>> On 13.02.2026 12:42, Alejandro Vallejo wrote:
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -3832,69 +3832,47 @@ int hvm_descriptor_access_intercept(uint64_t exit_info,
>>>      return X86EMUL_OKAY;
>>>  }
>>>  
>>> -static bool cf_check is_cross_vendor(
>>> -    const struct x86_emulate_state *state, const struct x86_emulate_ctxt *ctxt)
>>> -{
>>> -    switch ( ctxt->opcode )
>>> -    {
>>> -    case X86EMUL_OPC(0x0f, 0x05): /* syscall */
>>> -    case X86EMUL_OPC(0x0f, 0x34): /* sysenter */
>>> -    case X86EMUL_OPC(0x0f, 0x35): /* sysexit */
>>> -        return true;
>>> -    }
>>> -
>>> -    return false;
>>> -}
>>> -
>>>  void hvm_ud_intercept(struct cpu_user_regs *regs)
>>>  {
>>>      struct vcpu *cur = current;
>>> -    bool should_emulate =
>>> -        cur->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor;
>>>      struct hvm_emulate_ctxt ctxt;
>>> +    const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
>>> +    uint32_t walk = PFEC_insn_fetch;
>>> +    unsigned long addr;
>>> +    char sig[5]; /* ud2; .ascii "xen" */
>>>  
>>> -    hvm_emulate_init_once(&ctxt, opt_hvm_fep ? NULL : is_cross_vendor, regs);
>>> +    if ( !opt_hvm_fep )
>>> +        goto reinject;
>>
>> Is this possible at all, i.e. shouldn't there be ASSERT_UNREACHABLE() in
>> addition if already the check is kept?
> 
> It isn't.
> 
> v2 used to BUG_ON() at VMEXIT when !HVM_FEP and compile out this handler
> altogether, but Andrew was unhappy with it because he uses it occasionally
> and it'd be more of a PITA to undo the removal or force an HVM_FEP-enabled
> hypervisor for the #UD handler to be present at all.
> 
> I have no strong views on the ASSERT. It's not expected to happen, but I
> don't expect the existing conditions to change either, and if they do that
> will warrant a change in the handler too.
> 
> If you want it I can add it, but if we're not killing the handler in release I
> don't think it's very helpful to assert/bug_on.

I see two options: Drop the if() or add ASSERT_UNREACHABLE() to its body.

>>> -    if ( opt_hvm_fep )
>>> -    {
>>> -        const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
>>> -        uint32_t walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
>>> -                         ? PFEC_user_mode : 0) | PFEC_insn_fetch;
>>
>> Why is this initializer not retained?
> 
> It is, it's just that the diff is terrible. An unfortunate side effect of the
> removal of the braces. The scope collapsing forces it on top of the function,
> before the emulation context is initialised.
> 
> It's set up in steps. walk is unconditionally initialised as insn_fetch, and
> later (after emulate_init_once()), OR'd with PFEC_user_mode for DPL == 3.
> See...
> 
>>
>>> -        unsigned long addr;
>>> -        char sig[5]; /* ud2; .ascii "xen" */
>>> -
>>> -        if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
>>> -                                        sizeof(sig), hvm_access_insn_fetch,
>>> -                                        cs, &addr) &&
>>> -             (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
>>> -                                         walk, NULL) == HVMTRANS_okay) &&
>>> -             (memcmp(sig, "\xf\xb" "xen", sizeof(sig)) == 0) )
>>> -        {
>>> -            regs->rip += sizeof(sig);
>>> -            regs->eflags &= ~X86_EFLAGS_RF;
>>> +    hvm_emulate_init_once(&ctxt, NULL, regs);
>>>  
>>> -            /* Zero the upper 32 bits of %rip if not in 64bit mode. */
>>> -            if ( !(hvm_long_mode_active(cur) && cs->l) )
>>> -                regs->rip = (uint32_t)regs->rip;
>>> +    if ( ctxt.seg_reg[x86_seg_ss].dpl == 3 )
>>> +        walk |= PFEC_user_mode;
> 
> ... here.

But that's the point of my question: Why did you split it? All you mean to
do is re-indent.

Jan
