From: "Mike Rapoport (Microsoft)" <r...@kernel.org>

Instead of looping over the numa_meminfo array to detect a node's start
and end addresses, use get_pfn_range_for_nid().

This is shorter and makes it easier to lift numa_memblks to generic code.
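
For reference, this is roughly how the generic helper resolves a node's
PFN range from memblock (a sketch, not the exact upstream
implementation), which is why it only works after memblock_set_node()
has annotated the memblock regions with node ids:

	void __init get_pfn_range_for_nid(unsigned int nid,
			unsigned long *start_pfn, unsigned long *end_pfn)
	{
		unsigned long this_start_pfn, this_end_pfn;
		int i;

		*start_pfn = -1UL;
		*end_pfn = 0;

		/* walk only the memblock ranges that belong to @nid */
		for_each_mem_pfn_range(i, nid, &this_start_pfn,
				       &this_end_pfn, NULL) {
			*start_pfn = min(*start_pfn, this_start_pfn);
			*end_pfn = max(*end_pfn, this_end_pfn);
		}

		if (*start_pfn == -1UL)
			*start_pfn = 0;
	}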

Signed-off-by: Mike Rapoport (Microsoft) <r...@kernel.org>
Tested-by: Zi Yan <z...@nvidia.com> # for x86_64 and arm64
Reviewed-by: Jonathan Cameron <jonathan.came...@huawei.com>
Tested-by: Jonathan Cameron <jonathan.came...@huawei.com> [arm64 + CXL via QEMU]
Acked-by: Dan Williams <dan.j.willi...@intel.com>
Acked-by: David Hildenbrand <da...@redhat.com>
---
 arch/x86/mm/numa.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index edfc38803779..30b0ec801b02 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -521,17 +521,14 @@ static int __init numa_register_memblks(struct numa_meminfo *mi)
 
        /* Finally register nodes. */
        for_each_node_mask(nid, node_possible_map) {
-               u64 start = PFN_PHYS(max_pfn);
-               u64 end = 0;
+               unsigned long start_pfn, end_pfn;
 
-               for (i = 0; i < mi->nr_blks; i++) {
-                       if (nid != mi->blk[i].nid)
-                               continue;
-                       start = min(mi->blk[i].start, start);
-                       end = max(mi->blk[i].end, end);
-               }
-
-               if (start >= end)
+               /*
+                * Note, get_pfn_range_for_nid() depends on
+                * memblock_set_node() having already happened
+                */
+               get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
+               if (start_pfn >= end_pfn)
                        continue;
 
                alloc_node_data(nid);
-- 
2.43.0