On 26.07.2022 04:57, Penny Zheng wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeul...@suse.com>
>> Sent: Monday, July 25, 2022 11:36 PM
>>
>> On 20.07.2022 07:46, Penny Zheng wrote:
>>> Today, when a domain unpopulates memory at runtime, it will always
>>> hand the memory back to the heap allocator. That is a problem if the
>>> domain is static.
>>>
>>> Pages serving as guest RAM for a static domain shall be reserved for
>>> that domain only and not be used for any other purpose, so they shall
>>> never go back to the heap allocator.
>>>
>>> This commit puts reserved pages on the new list resv_page_list only
>>> after having taken them off the "normal" list, once the last ref is
>>> dropped.
>>
>> I guess this wording somehow relates to ...
>>
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -2674,10 +2674,14 @@ void free_domstatic_page(struct page_info *page)
>>>
>>>      drop_dom_ref = !domain_adjust_tot_pages(d, -1);
>>>
>>> -    spin_unlock_recursive(&d->page_alloc_lock);
>>> -
>>>      free_staticmem_pages(page, 1, scrub_debug);
>>>
>>> +    /* Add page on the resv_page_list *after* it has been freed. */
>>> +    if ( !drop_dom_ref )
>>> +        page_list_add_tail(page, &d->resv_page_list);
>>
>> ... the conditional used here. I cannot, however, figure out why this
>> conditional is there (and said part of the description also doesn't
>> help me figure it out).
>>
> 
> I was thinking that if drop_dom_ref is true, then later the whole domain
> struct will be released, so there is no need to add the page to
> d->resv_page_list.

If that was the intention, then it should have been spelled out imo. But I
think it's not a good idea to skip the list addition in that one special
case, when keeping things uniform is actually cheaper _and_ more consistent.
In the end I expect sooner or later there'll be a desire to restart DomU-s
which have crashed, which would include re-using their designated static
memory. Then you want to avoid leaking pages, as would happen with your
approach.
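
Something along these lines would do imo (untested, merely to illustrate
the uniform handling; the put_domain() tail and the re-positioned unlock
are my assumption of how the rest of the function ends up looking):

    drop_dom_ref = !domain_adjust_tot_pages(d, -1);

    free_staticmem_pages(page, 1, scrub_debug);

    /* Always queue the page, even when the last ref is about to go away. */
    page_list_add_tail(page, &d->resv_page_list);

    spin_unlock_recursive(&d->page_alloc_lock);

    if ( drop_dom_ref )
        put_domain(d);

That way a future "restart static domain" path could simply take pages
back off d->resv_page_list, with nothing lost in the drop_dom_ref case.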

Jan
