On 02.07.2025 12:30, Oleksii Kurochko wrote:
> 
> On 7/1/25 3:04 PM, Jan Beulich wrote:
>> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>>> @@ -113,3 +117,58 @@ int p2m_init(struct domain *d)
>>>   
>>>       return 0;
>>>   }
>>> +
>>> +/*
>>> + * Set the pool of pages to the required number of pages.
>>> + * Returns 0 for success, non-zero for failure.
>>> + * Call with d->arch.paging.lock held.
>>> + */
>>> +int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
>>> +{
>>> +    struct page_info *pg;
>>> +
>>> +    ASSERT(spin_is_locked(&d->arch.paging.lock));
>>> +
>>> +    for ( ; ; )
>>> +    {
>>> +        if ( d->arch.paging.p2m_total_pages < pages )
>>> +        {
>>> +            /* Need to allocate more memory from domheap */
>>> +            pg = alloc_domheap_page(d, MEMF_no_owner);
>>> +            if ( pg == NULL )
>>> +            {
>>> +                printk(XENLOG_ERR "Failed to allocate P2M pages.\n");
>>> +                return -ENOMEM;
>>> +            }
>>> +            ACCESS_ONCE(d->arch.paging.p2m_total_pages)++;
>>> +            page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
>>> +        }
>>> +        else if ( d->arch.paging.p2m_total_pages > pages )
>>> +        {
>>> +            /* Need to return memory to domheap */
>>> +            pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
>>> +            if ( pg )
>>> +            {
>>> +                ACCESS_ONCE(d->arch.paging.p2m_total_pages)--;
>>> +                free_domheap_page(pg);
>>> +            }
>>> +            else
>>> +            {
>>> +                printk(XENLOG_ERR
>>> +                       "Failed to free P2M pages, P2M freelist is empty.\n");
>>> +                return -ENOMEM;
>>> +            }
>>> +        }
>>> +        else
>>> +            break;
>>> +
>>> +        /* Check to see if we need to yield and try again */
>>> +        if ( preempted && general_preempt_check() )
>>> +        {
>>> +            *preempted = true;
>>> +            return -ERESTART;
>>> +        }
>>> +    }
>>> +
>>> +    return 0;
>>> +}
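
(Purely illustrative aside: the contract above, i.e. lock held by the caller
and *preempted / -ERESTART signalling a needed continuation, would be driven
by a caller roughly like the sketch below. The helper name is invented and
not taken from this series.)

    /* Illustration only; helper name made up, not part of the series. */
    static int p2m_pool_set(struct domain *d, unsigned long pages)
    {
        bool preempted = false;
        int rc;

        spin_lock(&d->arch.paging.lock);
        rc = p2m_set_allocation(d, pages, &preempted);
        spin_unlock(&d->arch.paging.lock);

        /*
         * On preemption p2m_set_allocation() returns -ERESTART; a real
         * caller would arrange for the operation to be continued, e.g.
         * via a hypercall continuation.
         */
        if ( preempted )
            ASSERT(rc == -ERESTART);

        return rc;
    }
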
>> Btw, with the order-2 requirement for the root page table, you may want to
>> consider an alternative approach: Here you could allocate some order-2
>> pages (possibly up to as many as a domain might need, which right now
>> would be exactly one), put them on a separate list, and consume the root
>> table(s) from there. If you run out of pages on the order-0 list, you
>> could shatter a page from the order-2 one (as long as that's still non-
>> empty). The difficulty would be with freeing, where a previously shattered
>> order-2 page would be nice to re-combine once all of its constituents are
>> free again. The main benefit would be avoiding the back and forth in patch
>> 6.
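
To make the idea a bit more concrete, a very rough sketch of the allocation
side only (the second list, here called p2m_root_freelist, is invented for
the example; re-combining on free, which as said is the difficult part, is
left out entirely):

    /* Sketch only: field name invented, accounting and locking omitted. */
    static struct page_info *p2m_alloc_table_page(struct domain *d)
    {
        struct page_info *pg =
            page_list_remove_head(&d->arch.paging.p2m_freelist);

        if ( !pg )
        {
            /* Order-0 list empty: shatter an order-2 page, if one is left. */
            struct page_info *root =
                page_list_remove_head(&d->arch.paging.p2m_root_freelist);
            unsigned int i;

            if ( !root )
                return NULL;

            for ( i = 0; i < (1U << 2); i++ )
                page_list_add_tail(root + i, &d->arch.paging.p2m_freelist);

            pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
        }

        return pg;
    }
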
> 
> It is an option.
> 
> But I'm still not 100% sure it's necessary to allocate the root page table
> from the freelist. We could simply allocate the root page table from the
> domheap (as is done for hardware domains) and reserve the freelist for other
> pages.
> The freelist is specific to Dom0less guest domains and is primarily used to
> limit the amount of memory available for the guest—essentially for static
> configurations where you want a clear and fixed limit on p2m allocations.

Is that true? My understanding is that this pre-populated pool is used by
all DomU-s, whether or not under dom0less.

Plus we're aiming to move towards better accounting of the memory used by a
domain (beyond its actual allocation). Allocating the root table from the
domain heap would move us one small step further away from that goal.

Jan
