On 02.01.2025 14:38, Jürgen Groß wrote:
> On 02.01.25 13:53, David Woodhouse wrote:
>> On Thu, 2025-01-02 at 13:07 +0100, Jürgen Groß wrote:
>>> On 23.12.24 15:24, David Woodhouse wrote:
>>>> On Tue, 2024-12-17 at 12:18 +0000, Xen.org security team wrote:
>>>>>                Xen Security Advisory CVE-2024-53241 / XSA-466
>>>>>                                   version 3
>>>>>
>>>>>            Xen hypercall page unsafe against speculative attacks
>>>>>
>>>>> UPDATES IN VERSION 3
>>>>> ====================
>>>>>
>>>>> Update of patch 5, public release.
>>>>
>>>> Can't we even use the hypercall page early in boot? Surely we have to
>>>> know whether we're running on an Intel or AMD CPU before we get to the
>>>> point where we can enable any of the new control-flow integrity
>>>> support? Do we need to jump through those hoops to do that early
>>>> detection and setup?
>>>
>>> The downside of this approach would be having yet another variant for
>>> doing hypercalls. So you'd have to replace the variant able to use AMD-
>>> or Intel-specific instructions with a function doing the hypercall via
>>> the hypercall page.
>>
>> You'd probably start with the hypercall functions just jumping directly
>> into the temporary hypercall page during early boot, and then you'd
>> update them to use the natively prepared vmcall/vmmcall version later.
>>
>> All the complexity of patching and CPU detection in early boot seems to
>> be somewhat gratuitous and even counter-productive given the change it
>> introduces to 64-bit latching.
>>
>> And even if the 64-bit latch does happen when HVM_PARAM_CALLBACK_IRQ is
>> set, isn't that potentially a lot later in boot? Xen will be treating
>> this guest as 32-bit until then, so won't all the vcpu_info and
>> runstate structures be wrong even as the secondary CPUs are already up
>> and running?
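
(To illustrate why the latched width matters for these structures: with the
i386 ABI uint64_t is only 4-byte aligned, so the layout Xen writes for a
guest it still treats as 32-bit differs from what a 64-bit guest expects.
A standalone sketch, with field names following the public
vcpu_runstate_info; the 32-bit layout is mimicked via packed/aligned so the
example compiles as a 64-bit userspace program, and everything else is made
up.)

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* 64-bit guest view: uint64_t is naturally 8-byte aligned. */
struct vcpu_runstate_info_64 {
    int32_t  state;
    uint64_t state_entry_time;
    uint64_t time[4];
};

/* 32-bit guest view: the i386 ABI aligns uint64_t to 4 bytes only. */
struct __attribute__((packed, aligned(4))) vcpu_runstate_info_32 {
    int32_t  state;
    uint64_t state_entry_time;
    uint64_t time[4];
};

int main(void)
{
    printf("64-bit ABI: state_entry_time at offset %zu, size %zu\n",
           offsetof(struct vcpu_runstate_info_64, state_entry_time),
           sizeof(struct vcpu_runstate_info_64));
    printf("32-bit ABI: state_entry_time at offset %zu, size %zu\n",
           offsetof(struct vcpu_runstate_info_32, state_entry_time),
           sizeof(struct vcpu_runstate_info_32));
    return 0;
}

With the 32-bit layout everything past the state field is shifted by four
bytes, so a 64-bit guest would read garbage times; vcpu_info is affected
similarly through its unsigned-long-sized fields.
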
> 
> What I don't get is why this latching isn't done when the shared info
> page is mapped into the guest via the XENMAPSPACE_shared_info hypercall
> or maybe additionally when VCPUOP_register_runstate_memory_area is being
> used by the guest.

The respective commit (6c13b7b80f02) lacking details, my guess is that,
because at that point neither of the operations you mention had any
HVM-specific logic (yet), the first HVM-specific operation used by the PV
("unmodified") drivers was chosen instead. pv-ops (having a different init
sequence) appeared only later, and was then (seemingly) sufficiently
covered by the latching done when the hypercall page was initialized
(which was added a few months after said commit).

Jan
