On 28.04.2024 18:52, Petr Beneš wrote:
> From: Petr Beneš <w1be...@gmail.com>
>
> This change anticipates scenarios where `max_altp2m` is set to its maximum
> supported value (i.e., 512), ensuring sufficient memory is allocated upfront
> to accommodate all altp2m tables without initialization failure.
And guests with fewer or even no altp2m-s still need the same bump? You
know the number of altp2m-s upon domain creation, so why bump by any more
than what's strictly needed for that?

> The necessity for this increase arises from the current mechanism where
> altp2m tables are allocated at initialization, requiring one page from
> the mempool for each altp2m view.

So that's the p2m_alloc_table() out of hap_enable()? If you're permitting
up to 512 altp2m-s, I think it needs considering to not waste up to 2MB
without knowing how many of the altp2m-s are actually going to be used.
How complicated on-demand allocation would be I can't tell though, I have
to admit.

> --- a/tools/tests/paging-mempool/test-paging-mempool.c
> +++ b/tools/tests/paging-mempool/test-paging-mempool.c
> @@ -35,7 +35,7 @@ static struct xen_domctl_createdomain create = {
>  
>  static uint64_t default_mempool_size_bytes =
>  #if defined(__x86_64__) || defined(__i386__)
> -    256 << 12; /* Only x86 HAP for now. x86 Shadow needs more work. */
> +    1024 << 12; /* Only x86 HAP for now. x86 Shadow needs more work. */

I also can't derive from the description why we'd need to go from 256 to
1024 here and ...

> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -468,7 +468,7 @@ int hap_enable(struct domain *d, u32 mode)
>      if ( old_pages == 0 )
>      {
>          paging_lock(d);
> -        rv = hap_set_allocation(d, 256, NULL);
> +        rv = hap_set_allocation(d, 1024, NULL);

... here. You talk of (up to) 512 pages there only.

Also isn't there at least one more place where the tool stack (libxl I
think) would need changing, where Dom0 ballooning needs are calculated?
And/or doesn't the pool size have a default calculation in the tool
stack, too?

Jan
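P.S. To make the sizing point concrete: with one 4KiB page per view, the
worst case of 512 altp2m-s costs 512 << 12 = 2MB on top of today's
256-page baseline, which is where the "waste up to 2MB" above comes from.
A minimal sketch of sizing the initial allocation by the count known at
domain creation, instead of hard-coding 1024, might look like this -
"nr_altp2m" is a placeholder name for whatever field the patch ends up
recording on struct domain, and the error handling only loosely follows
the shape of the existing hap_enable() code:

    /* Sketch only: size the initial HAP pool by the number of altp2m
     * views actually requested for this domain, rather than reserving
     * for the maximum of 512 unconditionally. */
    if ( old_pages == 0 )
    {
        /* 256 pages baseline, plus one page per requested altp2m view.
         * A guest with no altp2m-s keeps today's 256-page default. */
        unsigned int pages = 256 + d->nr_altp2m;

        paging_lock(d);
        rv = hap_set_allocation(d, pages, NULL);
        paging_unlock(d);

        if ( rv != 0 )
            goto out;
    }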