On 13/01/2026 4:12 pm, Alejandro Vallejo wrote:
>>> --- a/xen/arch/x86/cpu/microcode/intel.c
>>> +++ b/xen/arch/x86/cpu/microcode/intel.c
>>> @@ -408,17 +408,20 @@ static const char __initconst intel_cpio_path[] =
>>>      "kernel/x86/microcode/GenuineIntel.bin";
>>>  
>>>  static const struct microcode_ops __initconst_cf_clobber intel_ucode_ops = {
>>> -    .cpu_request_microcode            = cpu_request_microcode,
>>> +    .cpu_request_microcode            = MICROCODE_OP(cpu_request_microcode),
>>>      .collect_cpu_info                 = collect_cpu_info,
>>> -    .apply_microcode                  = apply_microcode,
>>> -    .compare                          = intel_compare,
>>> -    .cpio_path                        = intel_cpio_path,
>>> +    .apply_microcode                  = MICROCODE_OP(apply_microcode),
>>> +    .compare                          = MICROCODE_OP(intel_compare),
>>> +    .cpio_path                        = MICROCODE_OP(intel_cpio_path),
>>>  };
>> While I appreciate the intention with MICROCODE_OP(), I'm not really happy
>> with function pointer members left in place just for them to be NULL
>> everywhere. What if a call site remains unguarded? With PV guests that
>> would be a privilege escalation XSA.
> I see where you're coming from, but these are already NULL if microcode
> loading is not exposed by the underlying hypervisor (if any), or is blocked by
> hardware on Intel, so arguably that worry is orthogonal to this.

Also, because they're cf_clobber, the calls are turned into UDs.  We
won't follow a function pointer to 0.

~Andrew
