We don't need to check whether the node is a memoryless NUMA node before
calling the allocator interface. SLUB (and SLAB, SLOB) relies on the page
allocator to pick a node, and the page allocator deals with memoryless
nodes just fine: it has a zonelist constructed for each possible node and
will automatically fall back to the node closest to the requested one, as
long as __GFP_THISNODE is not enforced.
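
As an illustration (hypothetical call site, not code from this patch), a
node-specific slab allocation without __GFP_THISNODE is transparently
redirected when 'node' has no memory, while one that adds the flag is
pinned to 'node':

        /* falls back via the zonelist if 'node' is memoryless */
        buf = kzalloc_node(size, GFP_KERNEL, node);

        /* pinned to 'node'; can fail when 'node' is memoryless */
        buf = kzalloc_node(size, GFP_KERNEL | __GFP_THISNODE, node);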

The code comment of kmem_cache_alloc_node() in SLAB also states this:
 * Fallback to other node is possible if __GFP_THISNODE is not set.

The blk-mq code doesn't set __GFP_THISNODE, so we can remove the calls to
local_memory_node().
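
For example, the node picked here feeds allocations roughly like the
sketch below (flags simplified and call site illustrative only); since
__GFP_THISNODE is absent, passing a memoryless cpu_to_node(i) is safe:

        /* illustrative; the exact flags in blk-mq differ across versions */
        hctx = kzalloc_node(sizeof(struct blk_mq_hw_ctx), GFP_NOIO, node);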

Fixes: bffed457160ab ("blk-mq: Avoid memoryless numa node encoded in hctx numa_node")

Signed-off-by: Xianting Tian <tian.xiant...@h3c.com>
---
 block/blk-mq-cpumap.c | 2 +-
 block/blk-mq.c        | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
index 0157f2b34..3db84d319 100644
--- a/block/blk-mq-cpumap.c
+++ b/block/blk-mq-cpumap.c
@@ -89,7 +89,7 @@ int blk_mq_hw_queue_to_node(struct blk_mq_queue_map *qmap, unsigned int index)
 
        for_each_possible_cpu(i) {
                if (index == qmap->mq_map[i])
-                       return local_memory_node(cpu_to_node(i));
+                       return cpu_to_node(i);
        }
 
        return NUMA_NO_NODE;
diff --git a/block/blk-mq.c b/block/blk-mq.c
index cdced4aca..48f8366b2 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2737,7 +2737,7 @@ static void blk_mq_init_cpu_queues(struct request_queue *q,
                for (j = 0; j < set->nr_maps; j++) {
                        hctx = blk_mq_map_queue_type(q, j, i);
                        if (nr_hw_queues > 1 && hctx->numa_node == NUMA_NO_NODE)
-                               hctx->numa_node = local_memory_node(cpu_to_node(i));
+                               hctx->numa_node = cpu_to_node(i);
                }
        }
 }
-- 
2.17.1
