On 07/04/2014 09:57 AM, Joonsoo Kim wrote:
> If the pageblock of a page on the pcp list is isolated, we should free
> that page to the isolate buddy list to prevent future allocation from
> it. But the current code doesn't do this.
> Moreover, the current code has a freepage counting problem. Even though
> the pageblock of a page on the pcp list is isolated, the page could go
> to a normal buddy list, because get_onpcp_migratetype() will return a
> non-isolate migratetype. (get_onpcp_migratetype() is only introduced in
> a later patch.) In this case, we should either add to the freepage
> count or change the migratetype to MIGRATE_ISOLATE, but the current
> code does neither.
I wouldn't say it "does neither". It already limits the freepage
counting to the !MIGRATE_ISOLATE case (and it's not converted to
__mod_zone_freepage_state for some reason). So there's an accounting
mismatch in addition to the buddy list misplacement.
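To make the accounting point concrete, here is a userspace sketch (not
kernel code) of the two counting forms. The names mirror mm/page_alloc.c,
but a "zone" here is just two counters and the migratetypes are a toy
enum; this is a simplified model, not the kernel implementation:

```c
#include <assert.h>
#include <stdbool.h>

enum migratetype { MIGRATE_MOVABLE, MIGRATE_CMA, MIGRATE_ISOLATE };

struct zone_model {
	long nr_free_pages;	/* models NR_FREE_PAGES */
	long nr_free_cma_pages;	/* models NR_FREE_CMA_PAGES */
};

static bool is_migrate_cma(enum migratetype mt)
{
	return mt == MIGRATE_CMA;
}

/*
 * Models __mod_zone_freepage_state(): adjust NR_FREE_PAGES and, for
 * CMA pages, NR_FREE_CMA_PAGES as well.
 */
static void mod_zone_freepage_state(struct zone_model *z, long nr,
				    enum migratetype mt)
{
	z->nr_free_pages += nr;
	if (is_migrate_cma(mt))
		z->nr_free_cma_pages += nr;
}

/*
 * Models the open-coded counting currently in free_pcppages_bulk(),
 * guarded by !is_migrate_isolate_page(page).
 */
static void open_coded_count(struct zone_model *z, enum migratetype mt,
			     bool page_on_isolated_pageblock)
{
	if (!page_on_isolated_pageblock) {
		z->nr_free_pages += 1;
		if (is_migrate_cma(mt))
			z->nr_free_cma_pages += 1;
	}
}
```

For a page whose pageblock is not isolated, the two forms are equivalent
(which is why the open-coded version could be converted). For a page on
an isolated pageblock, nothing is counted at all, even though the page
may still land on a normal freelist, which is exactly the mismatch.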
> This patch fixes these two problems by checking the pageblock
> migratetype before calling __free_one_page(). If we find that the page
> is on an isolated pageblock, we change its migratetype to
> MIGRATE_ISOLATE to prevent future allocation of the page and to fix
> the freepage counting problem.
So although this is not an addition of a new pageblock migratetype check
to the fast path (the check is already there), I would prefer removing
the check :) With the approach of pcplists draining outlined in my reply
to 00/10, we would allow the misplacement to happen (and the page to be
accounted as a freepage), immediately followed by move_freepages_block()
which would place the page onto the isolate freelist with the rest.
Anything newly freed would get the isolate migratetype determined in
free_hot_cold_page() or __free_pages_ok() (which would need moving the
migratetype check under the irq-disabled part) and be placed and
buddy-merged properly.
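The sequence I have in mind can be modeled in a few lines of userspace C
(again a toy model, not kernel code): freelists become per-migratetype
counters, and isolate_block() stands in for move_freepages_block() plus
the freepage-count fix-up that its callers in mm/page_isolation.c apply:

```c
#include <assert.h>

enum migratetype { MIGRATE_MOVABLE, MIGRATE_ISOLATE, NR_MIGRATETYPES };

struct zone_model {
	long free_list[NR_MIGRATETYPES];	/* pages on each buddy freelist */
	long nr_free_pages;			/* models NR_FREE_PAGES */
};

/*
 * A pcp page freed with a stale (non-isolate) migratetype: it is
 * transiently misplaced onto the movable freelist and accounted as a
 * freepage.
 */
static void free_misplaced(struct zone_model *z)
{
	z->free_list[MIGRATE_MOVABLE]++;
	z->nr_free_pages++;
}

/*
 * Models move_freepages_block() as used during isolation: move every
 * free page of the block to the isolate freelist and, as the isolation
 * code does afterwards, drop them from the freepage count. Returns the
 * number of pages moved.
 */
static long isolate_block(struct zone_model *z)
{
	long moved = z->free_list[MIGRATE_MOVABLE];

	z->free_list[MIGRATE_MOVABLE] = 0;
	z->free_list[MIGRATE_ISOLATE] += moved;
	z->nr_free_pages -= moved;
	return moved;
}
```

The end state (page on the isolate freelist, not counted as free) is the
same as what the patch achieves directly in free_pcppages_bulk(); the
difference is only the transient window between the misplaced free and
the move.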
> Signed-off-by: Joonsoo Kim <iamjoonsoo....@lge.com>
> ---
>  mm/page_alloc.c | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index aeb51d1..99c05f7 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -719,15 +719,17 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  			page = list_entry(list->prev, struct page, lru);
>  			/* must delete as __free_one_page list manipulates */
>  			list_del(&page->lru);
> -			mt = get_freepage_migratetype(page);
> +
> +			if (unlikely(is_migrate_isolate_page(page))) {
> +				mt = MIGRATE_ISOLATE;
> +			} else {
> +				mt = get_freepage_migratetype(page);
> +				__mod_zone_freepage_state(zone, 1, mt);
> +			}
> +
>  			/* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
>  			__free_one_page(page, page_to_pfn(page), zone, 0, mt);
>  			trace_mm_page_pcpu_drain(page, 0, mt);
> -			if (likely(!is_migrate_isolate_page(page))) {
> -				__mod_zone_page_state(zone, NR_FREE_PAGES, 1);
> -				if (is_migrate_cma(mt))
> -					__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, 1);
> -			}
>  		} while (--to_free && --batch_free && !list_empty(list));
>  	}
>  	spin_unlock(&zone->lock);