On 02/09/2025 11:56 am, Frediano Ziglio wrote:
> Try to allocate larger order pages.
> With some test memory program stressing TLB (many small random
> memory accesses) you can get 15% performance improves.
> On the first memory iteration the sender is currently sending
> memory in 4mb aligned chunks which allows the receiver to
> allocate most pages as 2mb superpages instead of single 4kb pages.
> This works even for HVM where the first 2mb contains some holes.
> This change does not handle 1gb superpages as this will require
> change in the protocol to preallocate space.
>
> Signed-off-by: Frediano Ziglio <frediano.zig...@cloud.com>
This was given a release ack, which should have been retained.

> ---
> Changes since v1:
> - updated commit message and subject;
> - change the implementation detecting possible 2mb pages inside
>   the packet sent allowing more 2mb superpages.
> ---
>  tools/libs/guest/xg_sr_restore.c | 77 ++++++++++++++++++++++++++++++++
>  1 file changed, 77 insertions(+)
>
> diff --git a/tools/libs/guest/xg_sr_restore.c
> b/tools/libs/guest/xg_sr_restore.c
> index 06231ca826..f2018299a7 100644
> --- a/tools/libs/guest/xg_sr_restore.c
> +++ b/tools/libs/guest/xg_sr_restore.c
> @@ -129,6 +129,80 @@ static int pfn_set_populated(struct xc_sr_context *ctx,
> xen_pfn_t pfn)
>      return 0;
>  }
>
> +#if defined(__i386__) || defined(__x86_64__)
> +/* Order of the smallest superpage */
> +#define SMALL_SUPERPAGE_ORDER 9
> +#else
> +#error Define SMALL_SUPERPAGE_ORDER for this platform
> +#endif
> +
> +static unsigned int populate_order(struct xc_sr_context *ctx,
> +                                   unsigned int original_count,
> +                                   xen_pfn_t *pfns, xen_pfn_t *mfns,
> +                                   int order)
> +{
> +    size_t i = original_count, num_superpages;
> +    xen_pfn_t prev = 0, order_mask = ~((~(xen_pfn_t)0) << order);
> +    xen_pfn_t *const indexes_end = mfns + original_count;
> +    xen_pfn_t *indexes = indexes_end;
> +    unsigned int count = 0;
> +
> +    while ( i > 0 )
> +    {
> +        --i;
> +        ++count;
> +        if ( pfns[i] != prev - 1 )
> +            count = 1;
> +
> +        /*
> +         * Is this the start of a contiguous and aligned number
> +         * of pages ?
> +         */
> +        if ( (pfns[i] & order_mask) == 0 && count > order_mask )
> +            *--indexes = i;

Consider receiving a PAGE_DATA packet formed of {some 4k, 2M, more 4k},
which can occur from the 2nd pass onwards.  You do not know that the mfn
at the end of the input list was part of a superpage, and therefore safe
to clobber.

I expect this works in practice because the first pass is always aligned,
and subsequent passes are astronomically unlikely to have a full 2M be
dirty.

> +
> +        prev = pfns[i];
> +    }
> +
> +    count = original_count;
> +
> +    /* No superpages found */
> +    if ( indexes == indexes_end )
> +        return count;
> +    num_superpages = indexes_end - indexes;
> +
> +    /* Build list of PFNs that will be updated with MFNs */
> +    mfns = indexes - num_superpages;
> +    for ( i = 0; i < num_superpages; ++i )
> +        mfns[i] = pfns[indexes[i]];
> +
> +    /* Try to allocate, fallback to single pages */
> +    if ( xc_domain_populate_physmap_exact(
> +             ctx->xch, ctx->domid, num_superpages, order, 0, mfns) )
> +        return count;
> +
> +    /* Scan all MFNs allocated */
> +    for ( i = 0; i < num_superpages; ++i )
> +    {
> +        const xen_pfn_t mfn = mfns[i];
> +        const xen_pfn_t pfn = pfns[indexes[i]];
> +
> +        /* Check valid */
> +        if ( mfn == INVALID_MFN )
> +            continue;
> +
> +        /* Update PFNs using callback */
> +        for ( size_t j = 0; j <= order_mask; ++j )
> +            ctx->restore.ops.set_gfn(ctx, pfn + j, mfn + j);
> +
> +        /* remove from 4kb pages list */
> +        count -= order_mask + 1;
> +        memmove(pfns + indexes[i], pfns + indexes[i] + order_mask + 1,
> +                sizeof(*pfns) * (count - indexes[i]));

This in particular is horrible to follow, and is double processing the
data that ...

> +    }
> +    return count;
> +}
> +
>  /*
>   * Given a set of pfns, obtain memory from Xen to fill the physmap for the
>   * unpopulated subset. If types is NULL, no page type checking is performed
> @@ -163,6 +237,9 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int
> count,

... was set up just up here.

Have this loop scan forwards to pick out superpages, and deal with them
without putting them into the 4k list.
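Roughly the shape I have in mind, as an untested sketch rather than a
drop-in change (the helper name, the local typedef and the two output
arrays are invented for illustration; the real thing would feed the
existing pfns/mfns handling in populate_pfns()):

#include <stddef.h>
#include <stdint.h>

typedef uint64_t xen_pfn_t;   /* stand-in for the real xen_pfn_t */

/*
 * Walk the pfn list once, front to back.  Whenever an entry is aligned
 * to 1 << order and followed by enough consecutive pfns to cover a full
 * superpage, record it as a superpage candidate and skip past the run.
 * Everything else goes into the plain 4k list.  The caller provides
 * sp_starts (at least count >> order entries) and small_pfns (at least
 * count entries).  Returns the number of 4k pfns collected.
 */
static size_t split_superpages(const xen_pfn_t *pfns, size_t count,
                               unsigned int order,
                               xen_pfn_t *sp_starts, size_t *nr_sp,
                               xen_pfn_t *small_pfns)
{
    const xen_pfn_t span = (xen_pfn_t)1 << order;
    size_t i = 0, nr_small = 0;

    *nr_sp = 0;

    while ( i < count )
    {
        if ( (pfns[i] & (span - 1)) == 0 && count - i >= span )
        {
            size_t j;

            /* Check the next 'span' pfns really are contiguous. */
            for ( j = 1; j < span && pfns[i + j] == pfns[i] + j; ++j )
                ;

            if ( j == span )
            {
                sp_starts[(*nr_sp)++] = pfns[i];
                i += span;
                continue;
            }
        }

        /* Not part of a complete superpage: keep it as a 4k page. */
        small_pfns[nr_small++] = pfns[i++];
    }

    return nr_small;
}

Superpage candidates never enter the 4k list in the first place, so there
is no memmove() compaction to undo afterwards, and nothing in the caller's
mfns array gets clobbered.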
Don't even worry about trying to collapse 2 hypercalls into 1; that's a
marginal optimisation and the flamegraphs showed that these hypercalls
didn't even register compared to the other overheads.

It will be a simpler patch, and easier to follow.

~Andrew