Support for the p->numa_policy affinity tracking by the scheduler went missing during the mm/ unification: revive and integrate it properly.
( This in particular fixes NUMA_POLICY_MANYBUDDIES, a bug which caused
  regressions in various workloads such as numa01, and regressed !THP
  workloads in particular. )

Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijls...@chello.nl>
Cc: Andrea Arcangeli <aarca...@redhat.com>
Cc: Rik van Riel <r...@redhat.com>
Cc: Mel Gorman <mgor...@suse.de>
Cc: Hugh Dickins <hu...@google.com>
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 mm/mempolicy.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 2f2095c..6bb9fd0 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -121,8 +121,10 @@ static struct mempolicy default_policy_local = {
 static struct mempolicy *default_policy(void)
 {
 #ifdef CONFIG_NUMA_BALANCING
-	if (task_numa_shared(current) == 1)
-		return &current->numa_policy;
+	struct mempolicy *pol = &current->numa_policy;
+
+	if (task_numa_shared(current) == 1 && nodes_weight(pol->v.nodes) >= 2)
+		return pol;
 #endif
 	return &default_policy_local;
 }
@@ -135,6 +137,11 @@ static struct mempolicy *get_task_policy(struct task_struct *p)
 	int node;
 
 	if (!pol) {
+#ifdef CONFIG_NUMA_BALANCING
+		pol = default_policy();
+		if (pol != &default_policy_local)
+			return pol;
+#endif
 		node = numa_node_id();
 		if (node != -1)
 			pol = &preferred_node_policy[node];
@@ -2367,7 +2374,8 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 			shift = PAGE_SHIFT;
 
 		target_node = interleave_nid(pol, vma, addr, shift);
-		break;
+
+		goto out_keep_page;
 	}
 
 	case MPOL_PREFERRED:
-- 
1.7.11.7