On 06/01/2016 03:26 PM, Michal Hocko wrote:
> On Tue 31-05-16 15:08:03, Vlastimil Babka wrote:
> [...]
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index da3a62a94b4a..9f83259a18a8 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -3367,10 +3367,9 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>>      bool drained = false;
>>  
>>      *did_some_progress = __perform_reclaim(gfp_mask, order, ac);
>> -    if (unlikely(!(*did_some_progress)))
>> -            return NULL;
>>  
>>  retry:
>> +    /* We attempt even when no progress, as kswapd might have done some */
>>      page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
> 
> Is this really likely to happen, though? Sure, we might have the last
> few reclaimable pages on the LRU lists, but I am not sure this would
> make a large difference then.
> 
> That being said, I do not think this is harmful but I find it a bit
> weird to invoke a reclaim and then ignore the feedback... Will leave the
> decision up to you but the original patch seemed neater.

OK, I'll think about it.

>>  
>>      /*
>> @@ -3378,7 +3377,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>>       * pages are pinned on the per-cpu lists or in high alloc reserves.
>>       * Shrink them and try again
>>       */
>> -    if (!page && !drained) {
>> +    if (!page && *did_some_progress && !drained) {
>>              unreserve_highatomic_pageblock(ac);
>>              drain_all_pages(NULL);
>>              drained = true;
> 
> I do not remember this in the previous version.

Because it's a consequence of the new hunk above.

> Why shouldn't we
> unreserve highatomic reserves when there was no progress?

Previously the "return NULL" for no progress would also skip this. So I
wanted to change just the get_page_from_freelist() part. IIUC the
reasoning here is that if there was reclaim progress but we didn't
succeed getting the page, it can mean it's stuck on per-cpu or reserve.
If there was no progress, it's unlikely that anything is stuck there.
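
To make the combined effect easier to see, the patched function would
read roughly like this (a sketch pieced together from the hunks above,
not the exact resulting source; declarations abbreviated):

static inline struct page *
__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
                unsigned int alloc_flags, const struct alloc_context *ac,
                unsigned long *did_some_progress)
{
        struct page *page = NULL;
        bool drained = false;

        *did_some_progress = __perform_reclaim(gfp_mask, order, ac);

retry:
        /* We attempt even when no progress, as kswapd might have done some */
        page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);

        /*
         * If the allocation failed after direct reclaim, the freed pages
         * could be pinned on the per-cpu lists or in the high alloc
         * reserves. Draining only makes sense when our own reclaim made
         * progress, since otherwise nothing of ours can be stuck there.
         */
        if (!page && *did_some_progress && !drained) {
                unreserve_highatomic_pageblock(ac);
                drain_all_pages(NULL);
                drained = true;
                goto retry;
        }

        return page;
}

So with no progress we still make the single opportunistic
get_page_from_freelist() attempt (in case kswapd freed something), but
we skip the unreserve/drain/retry cycle.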
