On 1/31/19 2:55 PM, Vlastimil Babka wrote:
> On 1/18/19 6:51 PM, Mel Gorman wrote:
> ...
> 
>> +    for (order = cc->order - 1;
>> +         order >= PAGE_ALLOC_COSTLY_ORDER && pfn == cc->migrate_pfn && nr_scanned < limit;
>> +         order--) {
>> +            struct free_area *area = &cc->zone->free_area[order];
>> +            struct list_head *freelist;
>> +            unsigned long flags;
>> +            struct page *freepage;
>> +
>> +            if (!area->nr_free)
>> +                    continue;
>> +
>> +            spin_lock_irqsave(&cc->zone->lock, flags);
>> +            freelist = &area->free_list[MIGRATE_MOVABLE];
>> +            list_for_each_entry(freepage, freelist, lru) {
>> +                    unsigned long free_pfn;
>> +
>> +                    nr_scanned++;
>> +                    free_pfn = page_to_pfn(freepage);
>> +                    if (free_pfn < high_pfn) {
>> +                            update_fast_start_pfn(cc, free_pfn);
> 
> Shouldn't this update go below checking the pageblock skip bit? We might be
> caching pageblocks that will be skipped, and also potentially going

Ah, that move happens in the next patch.
