The justification for dropping p2m_mmio_dm from p2m_is_valid() was wrong
for two of the shadow mode uses.

In _sh_propagate() we want to create special L1 entries for p2m_mmio_dm
pages. Hence we need to make sure we don't bail early for that type.
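
For illustration, this is the pre-patch check that the regression trips
over (taken from the hunk removed below); once p2m_mmio_dm is no longer
part of p2m_is_valid(), the first half of the condition becomes true for
that type and the special L1 entry is never created:

    /* Pre-patch _sh_propagate(), simplified: */
    if ( (!p2m_is_valid(p2mt) && !p2m_is_grant(p2mt)) /* true for p2m_mmio_dm */
         || !gfn_valid(d, target_gfn) )
    {
        *sp = shadow_l1e_empty();  /* bails before the mmio_dm special-casing */
        goto done;
    }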

In sh_page_fault() we want to handle p2m_mmio_dm by forwarding to
(internal or external) emulation. Pull the !p2m_is_mmio() check out of
the || expression (as otherwise it would need adding to the lhs as
well).
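
To spell out the transformation (old condition as per the removed lines,
new one as per the hunk below): for p2m_mmio_dm the old lhs
(!p2m_is_valid(p2mt) && !p2m_is_grant(p2mt)) evaluates to true, so the
fault is rejected before emulation is ever reached. Merely replacing
that lhs with !p2m_is_any_ram() wouldn't help either (p2m_mmio_dm isn't
a RAM type), hence !p2m_is_mmio() needs to guard the whole expression:

    /* Old: */
    (!p2m_is_valid(p2mt) && !p2m_is_grant(p2mt)) ||
    (!p2m_is_mmio(p2mt) && !mfn_valid(gmfn))

    /* New: */
    !p2m_is_mmio(p2mt) &&
    (!p2m_is_any_ram(p2mt) || !mfn_valid(gmfn))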

In both cases, p2m_is_valid() in combination with p2m_is_grant() still
doesn't cover foreign mappings. Hence use p2m_is_any_ram() plus (as
necessary) p2m_mmio_* instead.
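
For reference, p2m_is_any_ram() is the union of the RAM, grant, and
foreign type masks, i.e. roughly (paraphrased; see asm/p2m.h for the
authoritative definition)

    #define p2m_is_any_ram(_t) (p2m_to_mask(_t) &                  \
                                (P2M_RAM_TYPES | P2M_GRANT_TYPES | \
                                 p2m_to_mask(p2m_map_foreign)))

which is why the two call sites below additionally need the
p2m_mmio_direct / p2m_is_mmio() terms to keep covering MMIO.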

Fixes: be59cceb2dbb ("x86/P2M: don't include MMIO_DM in p2m_is_valid()")
Reported-by: Luca Fancellu <luca.fance...@arm.com>
Signed-off-by: Jan Beulich <jbeul...@suse.com>
---
This still leaves the p2m_mmio_dm vs p2m_invalid distinction unaddressed.

While propagating #PF to the guest based on P2M type isn't quite right,
not doing so in sh_page_fault() would leave the guest unable to make any
forward progress: if we put a non-present shadow PTE in place, another
#PF will occur right after exiting to the guest. Doing so is only okay
as a transient measure, e.g. while paging back in a paged-out page
(where the respective type is included in P2M_RAM_TYPES).

There could likely be at least one more Fixes: tag (to cover the lack of
handling foreign mappings); the one supplied is merely to indicate the
connection to the recent regression.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -471,9 +471,7 @@ _sh_propagate(struct vcpu *v,
     /* We don't shadow PAE l3s */
     ASSERT(GUEST_PAGING_LEVELS > 3 || level != 3);
 
-    /* Check there's something for the shadows to map to */
-    if ( (!p2m_is_valid(p2mt) && !p2m_is_grant(p2mt))
-         || !gfn_valid(d, target_gfn) )
+    if ( !gfn_valid(d, target_gfn) )
     {
         *sp = shadow_l1e_empty();
         goto done;
@@ -503,6 +501,13 @@ _sh_propagate(struct vcpu *v,
         goto done;
     }
 
+    /* Check there's something for the shadows to map to */
+    if ( !p2m_is_any_ram(p2mt) && p2mt != p2m_mmio_direct )
+    {
+        *sp = shadow_l1e_empty();
+        goto done;
+    }
+
     // Must have a valid target_mfn unless this is a prefetch or an l1
     // pointing at MMIO space.  In the case of a prefetch, an invalid
     // mfn means that we can not usefully shadow anything, and so we
@@ -2366,8 +2371,8 @@ static int cf_check sh_page_fault(
     gmfn = get_gfn(d, gfn, &p2mt);
 
     if ( shadow_mode_refcounts(d) &&
-         ((!p2m_is_valid(p2mt) && !p2m_is_grant(p2mt)) ||
-          (!p2m_is_mmio(p2mt) && !mfn_valid(gmfn))) )
+         !p2m_is_mmio(p2mt) &&
+         (!p2m_is_any_ram(p2mt) || !mfn_valid(gmfn)) )
     {
         perfc_incr(shadow_fault_bail_bad_gfn);
         SHADOW_PRINTK("BAD gfn=%"SH_PRI_gfn" gmfn=%"PRI_mfn"\n",
