On Wed, 2018-01-17 at 18:26 +0100, David Woodhouse wrote:
> 
> > In both switching to idle, and back to the vCPU, we should hit this
> > case and not the 'else' case that does the IBPB:
> > 
> > 1710     if ( (per_cpu(curr_vcpu, cpu) == next) ||
> > 1711          (is_idle_domain(nextd) && cpu_online(cpu)) )
> > 1712     {
> > 1713         local_irq_enable();
> > 1714     }
> 
> I tested that; it doesn't seem to be the case. We end up here with prev
> being the idle thread, next being the actual vCPU, and
> per_cpu(curr_vcpu, cpu) being the idle thread too. So we still do the
> IBPB even when we've just switched from a given vCPU to idle and back
> again.
> 
> That's not actually tested on Xen master, but the code here looks very
> much the same as what I actually did test.

This appears to make the excessive IBPBs go away. There might be better
approaches.

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 04e9902..b8a9d54 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -68,6 +68,7 @@
 #include <asm/pv/mm.h>
 #include <asm/spec_ctrl.h>
 
+DEFINE_PER_CPU(struct vcpu *, last_vcpu); /* Last non-idle vCPU */
 DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 
 static void default_idle(void);
@@ -1745,8 +1746,14 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
 
         ctxt_switch_levelling(next);
 
-        if ( opt_ibpb )
+        /* IBPB on switching to a non-idle vCPU, if that vCPU was not
+         * the last one scheduled on this pCPU. */
+        if ( opt_ibpb && !is_idle_vcpu(next) &&
+             per_cpu(last_vcpu, cpu) != next )
+        {
+            per_cpu(last_vcpu, cpu) = next;
             wrmsrl(MSR_PRED_CMD, PRED_CMD_IBPB);
+        }
     }
 
     context_saved(prev);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
