While PGT_pae_xen_l2 will be zapped once the type refcount of an L2
page reaches zero, it will be retained for as long as the type refcount
is non-zero. Hence any checking against the requested type needs to
either zap the bit from the type or include it in the mask used for the
comparison.
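
Purely as an illustration (not part of the patch; it only reuses the
identifiers visible in the hunk below), the "zap the bit" alternative
for the assertion being adjusted could look roughly like this:

    /*
     * Sketch of the alternative approach: strip PGT_pae_xen_l2 from the
     * requested type before the comparison, instead of widening the mask
     * applied to the page's current type info (x).
     */
    ASSERT((x & (PGT_type_mask | PGT_count_mask)) ==
           ((type & ~PGT_pae_xen_l2) | 1));

The patch below takes the other option and includes the bit in the mask.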

Fixes: 9186e96b199e ("x86/pv: Clean up _get_page_type()")
Signed-off-by: Jan Beulich <jbeul...@suse.com>
---
The check around the TLB flush which was moved for XSA-401 also looks
to needlessly trigger a flush when "type" has the bit set (while "x"
doesn't). That's no different from the original behavior, but it still
looks inefficient.
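
For reference only (the hunk below doesn't touch that code, and the
surrounding logic is elided), the guard in question is roughly of the
following shape. With the typeref at 0 the page has already had
PGT_pae_xen_l2 zapped, so a requested "type" carrying the bit makes the
comparison come out unequal even when the base type is unchanged:

    /* Sketch of the flush guard in the typeref 0 -> 1 path (details elided). */
    if ( (x & PGT_type_mask) != type )
    {
        /* ... compute the dirty-CPU mask and flush_tlb_mask() here ... */
    }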

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2956,7 +2956,8 @@ static int _get_page_type(struct page_in
              * The page is in one of two states (depending on PGT_partial),
              * and should have exactly one reference.
              */
-            ASSERT((x & (PGT_type_mask | PGT_count_mask)) == (type | 1));
+            ASSERT((x & (PGT_type_mask | PGT_pae_xen_l2 | PGT_count_mask)) ==
+                   (type | 1));
 
             if ( !(x & PGT_partial) )
             {
