On 03/01/2017 11:27 AM, Jan Beulich wrote:
>>>> On 01.03.17 at 17:14, <boris.ostrov...@oracle.com> wrote:
>> On 03/01/2017 10:48 AM, George Dunlap wrote:
>>> On 27/02/17 17:06, Boris Ostrovsky wrote:
>>>> Since dirty pages are always at the tail of the page lists, we are not
>>>> really searching the lists: as soon as a clean page is found (starting
>>>> from the tail) we can stop.
>>> Sure, having a back and a front won't add significant overhead; but it
>>> does make things a bit strange.  What does it buy us over having two lists?
>> If we implement the dirty heap just like the regular heap (i.e.
>> node/zone/order), that data structure is almost a megabyte under current
>> assumptions (i.e. sizeof(page_list_head) * MAX_NUMNODES * NR_ZONES *
>> (MAX_ORDER+1) = 16 * 41 * 21 * 64 = 881664 bytes).
> Furthermore I'd be afraid for this to move us further away from
> being able to recombine higher order buddies early.

Possibly, although we would still be combining within each of the two heaps.

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
