On 02.07.2025 14:34, Oleksii Kurochko wrote:
> 
> On 7/2/25 1:56 PM, Jan Beulich wrote:
>> On 02.07.2025 13:48, Oleksii Kurochko wrote:
>>> On 7/1/25 3:04 PM, Jan Beulich wrote:
>>>> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>>>>> @@ -113,3 +117,58 @@ int p2m_init(struct domain *d)
>>>>>    
>>>>>        return 0;
>>>>>    }
>>>>> +
>>>>> +/*
>>>>> + * Set the pool of pages to the required number of pages.
>>>>> + * Returns 0 for success, non-zero for failure.
>>>>> + * Call with d->arch.paging.lock held.
>>>>> + */
>>>>> +int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
>>>>> +{
>>>>> +    struct page_info *pg;
>>>>> +
>>>>> +    ASSERT(spin_is_locked(&d->arch.paging.lock));
>>>>> +
>>>>> +    for ( ; ; )
>>>>> +    {
>>>>> +        if ( d->arch.paging.p2m_total_pages < pages )
>>>>> +        {
>>>>> +            /* Need to allocate more memory from domheap */
>>>>> +            pg = alloc_domheap_page(d, MEMF_no_owner);
>>>>> +            if ( pg == NULL )
>>>>> +            {
>>>>> +                printk(XENLOG_ERR "Failed to allocate P2M pages.\n");
>>>>> +                return -ENOMEM;
>>>>> +            }
>>>>> +            ACCESS_ONCE(d->arch.paging.p2m_total_pages)++;
>>>>> +            page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
>>>>> +        }
>>>>> +        else if ( d->arch.paging.p2m_total_pages > pages )
>>>>> +        {
>>>>> +            /* Need to return memory to domheap */
>>>>> +            pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
>>>>> +            if ( pg )
>>>>> +            {
>>>>> +                ACCESS_ONCE(d->arch.paging.p2m_total_pages)--;
>>>>> +                free_domheap_page(pg);
>>>>> +            }
>>>>> +            else
>>>>> +            {
>>>>> +                printk(XENLOG_ERR
>>>>> +                       "Failed to free P2M pages, P2M freelist is 
>>>>> empty.\n");
>>>>> +                return -ENOMEM;
>>>>> +            }
>>>>> +        }
>>>>> +        else
>>>>> +            break;
>>>>> +
>>>>> +        /* Check to see if we need to yield and try again */
>>>>> +        if ( preempted && general_preempt_check() )
>>>>> +        {
>>>>> +            *preempted = true;
>>>>> +            return -ERESTART;
>>>>> +        }
>>>>> +    }
>>>>> +
>>>>> +    return 0;
>>>>> +}
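
For context, a minimal sketch of how a caller might drive this, assuming
a hypothetical wrapper in a domctl-style path (the function name is
illustrative, not from the patch):

int p2m_pool_resize(struct domain *d, unsigned long pages)
{
    bool preempted = false;
    int rc;

    spin_lock(&d->arch.paging.lock);
    rc = p2m_set_allocation(d, pages, &preempted);
    spin_unlock(&d->arch.paging.lock);

    /*
     * -ERESTART with *preempted set means the loop was interrupted by
     * general_preempt_check(); the hypercall is then expected to be
     * continued rather than treated as having failed.
     */
    if ( preempted )
        return -ERESTART;

    return rc;
}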
>>>> Btw, with the order-2 requirement for the root page table, you may want to
>>>> consider an alternative approach: Here you could allocate some order-2
>>>> pages (possibly up to as many as a domain might need, which right now
>>>> would be exactly one), put them on a separate list, and consume the root
>>>> table(s) from there. If you run out of pages on the order-0 list, you
>>>> could shatter a page from the order-2 one (as long as that's still non-
>>>> empty). The difficulty would be with freeing, where a previously shattered
>>>> order-2 page would be nice to re-combine once all of its constituents are
>>>> free again.
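
A rough sketch of the shatter step (the p2m_order2_freelist name and the
helper below are illustrative only, not taken from the patch):

static int p2m_refill_from_order2(struct domain *d)
{
    unsigned int i;
    /* One order-2 page covers 1 << 2 == 4 contiguous order-0 pages. */
    struct page_info *pg =
        page_list_remove_head(&d->arch.paging.p2m_order2_freelist);

    if ( pg == NULL )
        return -ENOMEM;

    /*
     * The frame table is a plain array, so pg + i is the page_info of
     * the i-th constituent order-0 page.
     */
    for ( i = 0; i < (1U << 2); i++ )
        page_list_add_tail(pg + i, &d->arch.paging.p2m_freelist);

    return 0;
}

Re-combining on free would additionally require tracking, per order-2
allocation, how many of its four constituents have returned to the free
list.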
>>> Do we really need to re-combine shattered order-2 pages?
>>> It seems like the only use for this order-2 list is to hold the one order-2
>>> page needed for the root page table. All other pages are 4k pages, so even
>>> if we don't re-combine them, nothing serious will happen.
>> That's true as long as you have only the host-P2M for each domain. Once you
>> have alternative or nested ones, things may change (unless they all have
>> their roots also set up right during domain creation, which would seem
>> wasteful to me).
> 
> I don't know how it is implemented on x86, but I thought that if alternative
> or nested P2Ms are needed, then page tables separate from the host-P2M ones
> (including the root page table) have to be provided.

Correct, hence you will then need to allocate multiple root tables.
Those secondary page tables are nevertheless all allocated from the
single pool that a domain has.
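
Consuming a root table from such a pool might then look roughly like
this (again with a hypothetical order-2 list; the page is not subtracted
from p2m_total_pages, assuming that counter covers free and in-use pages
of the pool alike):

static struct page_info *p2m_alloc_root_table(struct domain *d)
{
    ASSERT(spin_is_locked(&d->arch.paging.lock));

    /*
     * Host, alternative, and nested P2M roots would all come from the
     * same per-domain pool; NULL means the order-2 list is exhausted.
     */
    return page_list_remove_head(&d->arch.paging.p2m_order2_freelist);
}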

Jan
