Christopher Lameter <c...@linux.com> writes:

> On Mon, 7 Aug 2017, Huang, Ying wrote:
>
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -4374,9 +4374,31 @@ void clear_huge_page(struct page *page,
>>      }
>>
>>      might_sleep();
>> -    for (i = 0; i < pages_per_huge_page; i++) {
>> +    VM_BUG_ON(clamp(addr_hint, addr, addr +
>> +                    (pages_per_huge_page << PAGE_SHIFT)) != addr_hint);
>> +    n = (addr_hint - addr) / PAGE_SIZE;
>> +    if (2 * n <= pages_per_huge_page) {
>> +            base = 0;
>> +            l = n;
>> +            for (i = pages_per_huge_page - 1; i >= 2 * n; i--) {
>> +                    cond_resched();
>> +                    clear_user_highpage(page + i, addr + i * PAGE_SIZE);
>> +            }
>
> I really like the idea behind the patch, but this is not clearing from the
> last to the first byte of the huge page.
>
> What seems to be happening here is clearing from the last page to the
> first page, and I would think that within each page the clearing still goes
> from the first byte to the last byte. Maybe more gains can be had by really
> clearing from the last to the first byte of the huge page instead of
> jumping over 4k addresses?

Yes.  That is a good idea.  I will experiment with it by changing the
clearing direction in clear_user_highpage().
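
Something like the following userspace sketch is what I have in mind (just
an illustration, not the kernel helper; the 64-byte chunk size and the
chunked memset loop are assumptions):

/*
 * Userspace sketch only -- not the kernel implementation.  Clears a
 * 4K page a cache line at a time, last line first, so the bytes
 * nearest the faulting address are written last and stay warm.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SUBPAGE_SIZE	4096L
#define CACHE_LINE	64L	/* assumed cache line size */

/* Clear one 4K page from its last cache line toward its first. */
static void clear_page_backwards(void *page)
{
	long off;

	for (off = SUBPAGE_SIZE - CACHE_LINE; off >= 0; off -= CACHE_LINE)
		memset((char *)page + off, 0, CACHE_LINE);
}

int main(void)
{
	char *page = aligned_alloc(SUBPAGE_SIZE, SUBPAGE_SIZE);

	if (!page)
		return 1;
	memset(page, 0xff, SUBPAGE_SIZE);
	clear_page_backwards(page);
	printf("first byte %d, last byte %d\n",
	       page[0], page[SUBPAGE_SIZE - 1]);
	free(page);
	return 0;
}

The expectation is that, combined with the last-page-to-first-page order in
the patch, the cache lines closest to the faulting address would be the very
last ones written and so the most likely to still be hot when the
application resumes.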

Best Regards,
Huang, Ying
