On Wed, 1 Aug 2018, Jeremy Linton wrote:

> diff --git a/mm/slub.c b/mm/slub.c
> index 51258eff4178..e03719bac1e2 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2519,6 +2519,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>               if (unlikely(!node_match(page, searchnode))) {
>                       stat(s, ALLOC_NODE_MISMATCH);
>                       deactivate_slab(s, page, c->freelist, c);
> +                     if (!node_online(searchnode))
> +                             node = NUMA_NO_NODE;
>                       goto new_slab;
>               }
>       }
>

Would it not be better to implement this check in the page allocator?
There is also the issue of how to fall back to the nearest node.
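
For the nearest-node question, something along these lines could work.
nearest_online_node() is a hypothetical helper sketched here for
illustration only (it is not an existing kernel function); node_online(),
for_each_online_node() and node_distance() are the usual topology helpers
from <linux/nodemask.h> and <linux/topology.h>:

	/*
	 * Hypothetical helper (not in the tree): map an offline node to the
	 * closest online node by node_distance() instead of punting to
	 * NUMA_NO_NODE.
	 */
	static int nearest_online_node(int node)
	{
		int n, best = NUMA_NO_NODE;
		int best_dist = INT_MAX;

		if (node == NUMA_NO_NODE || node_online(node))
			return node;

		for_each_online_node(n) {
			int dist = node_distance(node, n);

			if (dist < best_dist) {
				best_dist = dist;
				best = n;
			}
		}
		return best;
	}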

NUMA_NO_NODE should fall back to the current memory allocation policy, but
by inserting the override here it seems you would end up just with the
default node for the processor.
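
For reference, a rough sketch of what a hypothetical variant of the
offline-node fallback could look like if it were placed at the point where
slub asks the page allocator for a new slab, rather than in
___slab_alloc(). This assumes the current shape of alloc_slab_page() in
mm/slub.c, with memcg charging and stat accounting dropped, and the
!node_online() clause is the speculative addition:

	static inline struct page *alloc_slab_page(struct kmem_cache *s,
			gfp_t flags, int node, struct kmem_cache_order_objects oo)
	{
		unsigned int order = oo_order(oo);

		/*
		 * Treat an offline node like NUMA_NO_NODE so the allocation
		 * follows the current memory allocation policy rather than
		 * being pinned to the requesting CPU's node.
		 */
		if (node == NUMA_NO_NODE || !node_online(node))
			return alloc_pages(flags, order);

		return __alloc_pages_node(node, flags, order);
	}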
