On 24/12/2019 13:26, Roger Pau Monne wrote:
> Use Xen's L0 HVMOP_flush_tlbs hypercall when available in order to
> perform flushes. This greatly increases the performance of tlb flushes
> when running with a high amount of vCPUs as a Xen guest, and is
> specially important when running in shim mode.
>
> The following figures are from a PV guest running `make -j342 xen` in
> shim mode with 32 vCPUs.
>
> Using x2APIC and ALLBUT shorthand:
> real    4m35.973s
> user    4m35.110s
> sys     36m24.117s
>
> Using L0 assisted flush:
> real    1m17.391s
> user    4m42.413s
> sys     6m20.773s
Nice stats.

>
> Signed-off-by: Roger Pau Monné <roger....@citrix.com>
> ---
>  xen/arch/x86/guest/xen/xen.c    | 11 +++++++++++
>  xen/arch/x86/smp.c              |  6 ++++++
>  xen/include/asm-x86/guest/xen.h |  7 +++++++
>  3 files changed, 24 insertions(+)
>
> diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
> index 6dbc5f953f..e6493caecf 100644
> --- a/xen/arch/x86/guest/xen/xen.c
> +++ b/xen/arch/x86/guest/xen/xen.c
> @@ -281,6 +281,17 @@ int xg_free_unused_page(mfn_t mfn)
>      return rangeset_remove_range(mem, mfn_x(mfn), mfn_x(mfn));
>  }
>
> +int xg_flush_tlbs(void)
> +{
> +    int rc;
> +
> +    do {
> +        rc = xen_hypercall_hvm_op(HVMOP_flush_tlbs, NULL);
> +    } while ( rc == -ERESTART );

ERESTART should never manifest like this, because it is taken care of
within the hypercall_page[] stub.  Anything else is a bug which needs
fixing at L0.

Have you actually seen one appear?

~Andrew
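P.S. Just to illustrate the point, not something to copy verbatim: if
-ERESTART really is absorbed by the hypercall_page[] stub, I'd expect the
helper to collapse to a single call, roughly along these lines.  The
ASSERT() is mine and purely hypothetical.

int xg_flush_tlbs(void)
{
    /*
     * Sketch only: assumes the hypercall_page[] stub has already dealt
     * with any continuation, so -ERESTART should never be seen here.
     */
    int rc = xen_hypercall_hvm_op(HVMOP_flush_tlbs, NULL);

    /* Purely illustrative check of the assumption above. */
    ASSERT(rc != -ERESTART);

    return rc;
}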