No need to recompute the zone contiguity if the zone is already marked contiguous. We will soon exploit this on the memory removal path, where we will only clear zone->contiguous on zones that intersect with the memory to be removed.
Cc: Andrew Morton <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Alexander Duyck <[email protected]>
Signed-off-by: David Hildenbrand <[email protected]>
---
 mm/page_alloc.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5b799e11fba3..995708e05cde 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1546,6 +1546,9 @@ void set_zone_contiguous(struct zone *zone)
 	unsigned long block_start_pfn = zone->zone_start_pfn;
 	unsigned long block_end_pfn;
 
+	if (zone->contiguous)
+		return;
+
 	block_end_pfn = ALIGN(block_start_pfn + 1, pageblock_nr_pages);
 	for (; block_start_pfn < zone_end_pfn(zone);
 			block_start_pfn = block_end_pfn,
-- 
2.21.0

