On 04/27/2016 04:57 PM, Mel Gorman wrote:
> as the patch "mm, page_alloc: inline the fast path of the zonelist iterator"
> is fine. The nodemask pointer is the same between cpuset retries. If the
> zonelist changes due to ALLOC_NO_WATERMARKS *and* it races with a cpuset
> change then there is a second harmless pass through the page allocator.

True. But I just realized (while working on direct compaction priorities)
that there's another subtle issue with the ALLOC_NO_WATERMARKS part.
According to the comment it should be ignoring mempolicies, but it still
honours ac.nodemask, and your patch is replacing NULL ac.nodemask with the
mempolicy one.

I think it can be easily fixed outside the fast path like this. If
you agree, consider it to have my s-o-b:

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3f052bbca41d..7ccaa6e023f3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3834,6 +3834,11 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
        alloc_mask = memalloc_noio_flags(gfp_mask);
        ac.spread_dirty_pages = false;
 
+       /*
+        * Restore the original nodemask, which might have been replaced with
+        * &cpuset_current_mems_allowed to optimize the fast-path attempt.
+        */
+       ac.nodemask = nodemask;
        page = __alloc_pages_slowpath(alloc_mask, order, &ac);
 
 no_zone:
