On 06/02/2019 10:54, Jan Beulich wrote:
>>>> On 06.02.19 at 10:57, <andrew.coop...@citrix.com> wrote:
>> On 06/02/2019 07:40, Jan Beulich wrote:
>>> Benign exceptions, no matter whether they're first or second, will never
>>> cause #DF (a benign exception being second can basically only be #AC, as
>>> in the XSA-156 scenario).
>> #DB can happen as well, but I'm not sure if this example is relevant
>> here.  Both of those loops were repeated exceptions caused by trying to
>> deliver the first exception, rather than an issue of priority amongst
>> simultaneous exceptions during the execution of an instruction.
> No, I don't think #DB falls into this category (I had it here before
> and then removed it): If a data breakpoint hits while delivering an
> exception or interrupt, that - being a trap - will be delivered _after_
> the original event, i.e. on the first instruction of the other handler.

First of all, be careful to separate #DB faults from traps.  They behave
differently.

With specific reference to Intel Table 6-2:

The two #DB faulting cases are instruction breakpoints and general
detect.  Instruction breakpoints (when not masked by RF) have their own
priority, 7, and occur as a discrete step of checking %rip (this is why
they don't match in the middle of an instruction). General detect is
priority 10, and occurs as part of executing a mov %dr instruction.

Exceptions raised at either of these two priorities discard further work
while processing the instruction, which is why they have fault properties
(specifically, no writeback of %rip and other state).
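
For reference, the priority numbers used throughout this mail are the
priority classes from SDM Section 6.9 / Table 6-2.  Roughly, as an
illustrative enum (my paraphrase, not anything in the Xen tree):

enum x86_event_priority {
    PRIO_1_RESET_MC      =  1, /* Hardware reset, machine check            */
    PRIO_2_TASK_TRAP     =  2, /* Trap on task switch (T bit in TSS)       */
    PRIO_3_EXTERNAL      =  3, /* External interventions: SMI, INIT, etc.  */
    PRIO_4_PREV_TRAPS    =  4, /* Traps on previous insn: #DB data/BS bits */
    PRIO_5_NMI           =  5, /* Nonmaskable interrupt                    */
    PRIO_6_INTR          =  6, /* Maskable external interrupts             */
    PRIO_7_CODE_BP       =  7, /* Code breakpoint fault (#DB)              */
    PRIO_8_FETCH_FAULTS  =  8, /* Faults fetching the next insn            */
    PRIO_9_DECODE_FAULTS =  9, /* Faults decoding the next insn (#UD etc.) */
    PRIO_10_EXEC_FAULTS  = 10, /* Faults executing the insn (#GP, #PF, #AC,
                                  general detect #DB, ...)                 */
};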


The internals of %dr6 maintenance are exposed by VT-x, in the PENDING_DBG
control.  Across an instruction, all matching bits accumulate.

Data breakpoints accumulate from all memory operands in the load/store
queue tagged with the same ISA instruction.  This includes all memory
actions microcode performs on behalf of the instruction, as they need
ordering architecturally WRT other ISA instructions.  Task trap is
accumulated as an explicit side effect of the task switch microcode,
while singlestep is accumulated at instruction retirement (with some
as-yet-undocumented instruction-specific behaviour, part of which
was XSA-204).

Across the instruction writeback boundary (priority 10 around to 1),
PENDING_DBG persists, and feeds into exception considerations at
priorities 2 and 4.  This is why they manifest as traps, but they can
equally be thought of as faults before the fetch of the next instruction.
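
To make that concrete, here is a rough sketch of accumulating a
trap-class #DB condition on VT-x instead of injecting it (not a patch -
the helper name is invented, and I'm assuming the usual
GUEST_PENDING_DBG_EXCEPTIONS field with the __vmread()/__vmwrite()
accessors):

/* Pending debug exception bits, matching the architectural %dr6 layout. */
#define PDBG_Bn(n)  (1UL << (n))        /* B0-B3: data breakpoint n hit  */
#define PDBG_BS     (1UL << 14)         /* single step                   */

static void accumulate_guest_dbg_trap(unsigned long dr6_bits)
{
    unsigned long pending;

    __vmread(GUEST_PENDING_DBG_EXCEPTIONS, &pending);
    pending |= dr6_bits & (PDBG_Bn(0) | PDBG_Bn(1) | PDBG_Bn(2) |
                           PDBG_Bn(3) | PDBG_BS);
    __vmwrite(GUEST_PENDING_DBG_EXCEPTIONS, pending);

    /*
     * Hardware now delivers the #DB trap itself, at priority 4 on the
     * next instruction boundary, behind anything of higher priority,
     * rather than us injecting it ahead of events it should lose to.
     */
}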

> That's also the way the XSA-156 advisory describes it.

XSA-156 was written at a time when we both had far less authority to
comment on the specific details.  Certainly as far as I am concerned,
the past couple of years have made a massive difference, not least the
several months spent debugging the STI/singlestep issue with Gil.

The mistake in XSA-156 is the description of "upon completion of the
delivery of the first exception".

The infinite loop occurs because delivery of the first #DB is never
considered complete (#DB is still considered pending even once the
exception frame has been written, because it was triggered in the
process of delivering the exception), so the priority never moves from
4 to 5, which would allow an NMI to break the cycle.

Similarly, in the #AC case, the priority never moves from 10 back to 1,
because delivery of the first #AC is never seen to have completed.

>> For VT-x, we're supposed to fill in the PENDING_DBG control rather than
>> raise #DB directly, explicitly to allow the priority of interrupts to be
>> expressed correctly.  There is a very large quantity of untangling
>> work to do, and it's very clear I'm not going to have time to fix even
>> the STI/singlestep issue in the 4.12 timeframe.
> Are you saying there need to be any vendor specific adjustments
> to this function then? I would very much hope that the code here
> could remain vendor independent, with vendor specific adjustments
> done in vendor specific code, and independently of the proposed
> change here.

I think the better course of action is for {vmx,svm}_inject_event() to
gain adjustments to their #DB handling.

The PENDING_DBG control is specifically an accumulation of various
debugging conditions, which will cause #DB traps to be delivered at the
appropriate point early in the next instruction cycle.

Of course (as none of this was designed with the effects of XSA-156 in
mind), the complication is that the later delivery will then hit the
#DB intercept, and needs to be injected properly as #DB rather than fed
back into PENDING_DBG (to avoid a livelock).
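
i.e. something shaped vaguely like this (illustrative only - deciding
that a #DB "came back via the intercept" is the hard part, and the real
logic would live inside vmx_inject_event() rather than a standalone
helper):

static void vmx_inject_or_queue_db(unsigned long dr6_bits,
                                   bool from_db_intercept)
{
    if ( !from_db_intercept )
    {
        /* First sighting: let hardware re-derive the priority. */
        accumulate_guest_dbg_trap(dr6_bits);  /* sketch from earlier */
        return;
    }

    /*
     * The queued #DB has already bounced through the #DB intercept.
     * Inject it for real this time, or we just refill PENDING_DBG and
     * livelock.  Intr-info encoding per the SDM: vector 1, type 3
     * (hardware exception), valid bit set.
     */
    __vmwrite(VM_ENTRY_INTR_INFO,
              (1u << 31) | (3u << 8) | 1 /* #DB */);
}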


The lack of PENDING_DBG on SVM probably means we need to emulate the
injection of the first benign exception and queue the #DB up in the
EVENTINJ field.  The current emulation misses the #AC case (which can be
argued as a bug or a feature, but ultimately is not in line with real
CPU behaviour).  That said, I'm not sure if we can actually provide
architectural behaviour on SVM, as we have no way of causing #DB traps
to be considered at the correct priority.
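
If we did go that route, the shape would be roughly the below
(hand-waving - emulate_exception_delivery() is hypothetical, and the bit
layout is just the architectural EVENTINJ format from the APM rather
than Xen's vmcb definitions):

/* EVENTINJ, per the APM: vector[7:0], type[10:8], EV[11], V[31], EC[63:32]. */
#define EVENTINJ_TYPE_EXCEPTION  (3ULL << 8)
#define EVENTINJ_VALID           (1ULL << 31)

static void svm_emulate_exc_then_queue_db(struct vcpu *v,
                                          const struct x86_event *first,
                                          uint64_t *eventinj)
{
    /*
     * Deliver the first (benign) exception by hand, writing the frame
     * and redirecting %rip exactly as the IDT would have done.
     */
    emulate_exception_delivery(v, first);     /* hypothetical helper */

    /* ... and leave #DB (vector 1, no error code) queued for VMRUN. */
    *eventinj = 1 | EVENTINJ_TYPE_EXCEPTION | EVENTINJ_VALID;
}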

>
>>> Sadly neither AMD nor Intel really define what happens with two benign
>>> exceptions - the term "sequentially" used by both is poisoned by how the
>>> combining of benign and non-benign exceptions is described. Since NMI,
>>> #MC, and hardware interrupts are all benign and (perhaps with the
>>> exception of #MC) can't occur second, favor the first in order to not
>>> lose it.
>> #MC has the highest priority so should only be recognised immediately
>> after an instruction boundary.
> Are you sure? What about an issue with one of the memory
> accesses involved in delivering a previously raised exception?

#MC is an abort, and is imprecise.  It bears no relationship to the content
of the iret frame it is observed with.  This is why details of the issue
are stored in MSRs and not on the stack - the CPU can be hundreds of
instructions further on before an L3 cache/IIO #MC propagates back
through the uncore.

Despite being classified as an exception, #MC behaves much more like an
NMI from the OS point of view, except that getting two of them
back-to-back is totally fatal.
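
For the avoidance of doubt about "stored in MSRs": a #MC handler walks
the architectural banks, roughly as below, and the exception frame tells
it nothing about the error itself (sketch only, using the architectural
MCG_CAP/MCi_STATUS MSRs):

static void dump_mc_banks(void)
{
    uint64_t cap, status;
    unsigned int i, nr_banks;

    rdmsrl(MSR_IA32_MCG_CAP, cap);
    nr_banks = cap & 0xff;                    /* MCG_CAP.Count */

    for ( i = 0; i < nr_banks; i++ )
    {
        rdmsrl(MSR_IA32_MC0_STATUS + i * 4, status);
        if ( status & (1ULL << 63) )          /* MCi_STATUS.VAL */
            printk("  Bank %u: status %016"PRIx64"\n", i, status);
    }
}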

>
>> I don't however see a way of stacking #AC, because you can't know that
>> one has occurred until later in the instruction cycle than all other
>> sources.  What would happen is that you'd raise #AC from previous
>> instruction, and then recognise #MC while starting to execute the #AC
>> entry point.  (I think)
> Well - see XSA-156 for what hardware does in that situation.

The details of XSA-156 are inaccurate.

#AC can stack, but the problem only manifests when Xen emulates an
injection, which is restricted to SVM for the moment.

That said, I'm considering moving it back to being common code, to
provide architectural behaviour despite the silicon issue which causes
XSA-170.

> Besides eliminating that security issue, I don't think this is a
> very important case to deal with correctly, unless you're aware
> of OSes which allow handling #AC in ring 3.
>
> Irrespective of all of the above - what am I to take from your
> response? I.e. what adjustments (within the scope of this patch)
> do you see necessary for the change to become acceptable?

By and large, I think the actual code changes are ok.  They are a
general improvement, but I am not comfortable with the factual
inaccuracies in the commit message and the comments.

I will see if I can rephrase it suitably.

~Andrew
