Hello,

On Mon, Aug 10, 2020 at 3:22 PM Alexander Gordeev <agord...@linux.ibm.com> wrote:
>
> It is currently assumed that each node contains at most
> nr_cpus/nr_nodes CPUs and node CPU ranges do not overlap.
> That assumption is generally incorrect as there are archs
> where a CPU number does not depend on its node number.
>
> This update removes the described assumption by simply calling
> the numa_node_to_cpus() interface and using the returned mask for
> binding CPUs to nodes. It also tightens a cpumask allocation
> failure check a bit.
>
> Cc: Satheesh Rajendran <sathn...@linux.vnet.ibm.com>
> Cc: Srikar Dronamraju <sri...@linux.vnet.ibm.com>
> Cc: Naveen N. Rao <naveen.n....@linux.vnet.ibm.com>
> Cc: Balamuruhan S <bal...@linux.vnet.ibm.com>
> Cc: Peter Zijlstra <pet...@infradead.org>
> Cc: Ingo Molnar <mi...@redhat.com>
> Cc: Arnaldo Carvalho de Melo <a...@kernel.org>
> Cc: Mark Rutland <mark.rutl...@arm.com>
> Cc: Alexander Shishkin <alexander.shish...@linux.intel.com>
> Cc: Jiri Olsa <jo...@redhat.com>
> Cc: Namhyung Kim <namhy...@kernel.org>
> Signed-off-by: Alexander Gordeev <agord...@linux.ibm.com>
> ---
>  tools/perf/bench/numa.c | 27 +++++++++++++--------------
>  1 file changed, 13 insertions(+), 14 deletions(-)
>
> diff --git a/tools/perf/bench/numa.c b/tools/perf/bench/numa.c
> index 5797253..23e224e 100644
> --- a/tools/perf/bench/numa.c
> +++ b/tools/perf/bench/numa.c
> @@ -247,12 +247,13 @@ static int is_node_present(int node)
>   */
>  static bool node_has_cpus(int node)
>  {
> -        struct bitmask *cpu = numa_allocate_cpumask();
> +        struct bitmask *cpumask = numa_allocate_cpumask();
>          unsigned int i;
>
> -        if (cpu && !numa_node_to_cpus(node, cpu)) {
> -                for (i = 0; i < cpu->size; i++) {
> -                        if (numa_bitmask_isbitset(cpu, i))
> +        BUG_ON(!cpumask);
> +        if (!numa_node_to_cpus(node, cpumask)) {
> +                for (i = 0; i < cpumask->size; i++) {
> +                        if (numa_bitmask_isbitset(cpumask, i))
>                                  return true;
>                  }
>          }
> @@ -288,14 +289,10 @@ static cpu_set_t bind_to_cpu(int target_cpu)
>
>  static cpu_set_t bind_to_node(int target_node)
>  {
> -        int cpus_per_node = g->p.nr_cpus / nr_numa_nodes();
>          cpu_set_t orig_mask, mask;
>          int cpu;
>          int ret;
>
> -        BUG_ON(cpus_per_node * nr_numa_nodes() != g->p.nr_cpus);
> -        BUG_ON(!cpus_per_node);
> -
>          ret = sched_getaffinity(0, sizeof(orig_mask), &orig_mask);
>          BUG_ON(ret);
>
> @@ -305,13 +302,15 @@ static cpu_set_t bind_to_node(int target_node)
>                  for (cpu = 0; cpu < g->p.nr_cpus; cpu++)
>                          CPU_SET(cpu, &mask);
>          } else {
> -                int cpu_start = (target_node + 0) * cpus_per_node;
> -                int cpu_stop  = (target_node + 1) * cpus_per_node;
> -
> -                BUG_ON(cpu_stop > g->p.nr_cpus);
> +                struct bitmask *cpumask = numa_allocate_cpumask();
>
> -                for (cpu = cpu_start; cpu < cpu_stop; cpu++)
> -                        CPU_SET(cpu, &mask);
> +                BUG_ON(!cpumask);
> +                if (!numa_node_to_cpus(target_node, cpumask)) {
> +                        for (cpu = 0; cpu < (int)cpumask->size; cpu++) {
> +                                if (numa_bitmask_isbitset(cpumask, cpu))
> +                                        CPU_SET(cpu, &mask);
> +                        }
> +                }
It seems you need to call numa_free_cpumask() for both functions
(see the rough sketch below the quote).

Thanks
Namhyung

>          }
>
>          ret = sched_setaffinity(0, sizeof(mask), &mask);
> --
> 1.8.3.1
>
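For reference, here is a rough and untested sketch of what I have in mind
for node_has_cpus(). It uses a plain assert() instead of the tool's
BUG_ON() only to keep the snippet self-contained; bind_to_node() would
need the same numa_free_cpumask() call after the bits have been copied
into the cpu_set_t.

  #include <assert.h>
  #include <numa.h>
  #include <stdbool.h>

  /* Sketch only: release the cpumask on every exit path. */
  static bool node_has_cpus(int node)
  {
          struct bitmask *cpumask = numa_allocate_cpumask();
          bool ret = false;
          unsigned int i;

          assert(cpumask);        /* stands in for BUG_ON(!cpumask) */

          if (!numa_node_to_cpus(node, cpumask)) {
                  for (i = 0; i < cpumask->size; i++) {
                          if (numa_bitmask_isbitset(cpumask, i)) {
                                  ret = true;
                                  break;
                          }
                  }
          }

          /* previously leaked on both the "found" and "not found" paths */
          numa_free_cpumask(cpumask);

          return ret;
  }

Keeping the early "return true" would be fine too, as long as the free
happens before every return.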