On Wed, 28 Dec 2016 15:08:42 -0200 Eduardo Habkost <ehabk...@redhat.com> wrote:
> On Fri, Nov 18, 2016 at 12:02:54PM +0100, Igor Mammedov wrote:
> > so it won't impose an additional limits on max_cpus limits
> > supported by different targets.
> > 
> > It removes global MAX_CPUMASK_BITS constant and need to
> > bump it up whenever max_cpus is being increased for
> > a target above MAX_CPUMASK_BITS value.
> > 
> > Use runtime max_cpus value instead to allocate sufficiently
> > sized node_cpu bitmasks in numa parser.
> > 
> > Signed-off-by: Igor Mammedov <imamm...@redhat.com>
> > Reviewed-by: Eduardo Habkost <ehabk...@redhat.com>
> 
> As the cpu_index assignment code isn't obviously safe against
> setting cpu_index > max_cpus, I would like to squash this into
> the patch. Is that OK for you?
Go ahead, change looks good to me.

> 
> diff --git a/numa.c b/numa.c
> index 1b6fa78..33f2fd4 100644
> --- a/numa.c
> +++ b/numa.c
> @@ -401,6 +401,7 @@ void numa_post_machine_init(void)
>  
>      CPU_FOREACH(cpu) {
>          for (i = 0; i < nb_numa_nodes; i++) {
> +            assert(cpu->cpu_index < max_cpus);
>              if (test_bit(cpu->cpu_index, numa_info[i].node_cpu)) {
>                  cpu->numa_node = i;
>              }
> @@ -559,6 +560,8 @@ int numa_get_node_for_cpu(int idx)
>  {
>      int i;
>  
> +    assert(idx < max_cpus);
> +
>      for (i = 0; i < nb_numa_nodes; i++) {
>          if (test_bit(idx, numa_info[i].node_cpu)) {
>              break;
> 
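[Editor's note: the sketch below is not QEMU code; it only illustrates the idea being discussed, namely sizing the per-node CPU bitmaps from the runtime max_cpus value (rather than a compile-time MAX_CPUMASK_BITS constant) and guarding lookups with the cpu_index < max_cpus invariant that the squashed asserts enforce. The names node_cpu, bitmap_alloc, MAX_NODES and the whole main() are hypothetical stand-ins; the real code lives in numa.c and uses QEMU's own bitmap helpers.]

#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <stdlib.h>

#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))
#define MAX_NODES 8                        /* hypothetical node limit */

static unsigned long *node_cpu[MAX_NODES]; /* stand-in for numa_info[i].node_cpu */
static unsigned int max_cpus;              /* set from the machine configuration */
static int nb_numa_nodes;

/* Allocate a zero-filled bitmap large enough for nbits bits. */
static unsigned long *bitmap_alloc(unsigned int nbits)
{
    size_t nlongs = (nbits + BITS_PER_LONG - 1) / BITS_PER_LONG;
    return calloc(nlongs, sizeof(unsigned long));
}

static bool bitmap_test(const unsigned long *map, unsigned int bit)
{
    return map[bit / BITS_PER_LONG] & (1UL << (bit % BITS_PER_LONG));
}

/* Size every node's CPU mask from the runtime max_cpus value. */
static void numa_alloc_node_cpu_masks(unsigned int runtime_max_cpus)
{
    max_cpus = runtime_max_cpus;
    for (int i = 0; i < MAX_NODES; i++) {
        node_cpu[i] = bitmap_alloc(max_cpus);
    }
}

/* Rough analogue of numa_get_node_for_cpu(): idx must stay below max_cpus. */
static int get_node_for_cpu(unsigned int idx)
{
    assert(idx < max_cpus);                /* the invariant the asserts protect */
    for (int i = 0; i < nb_numa_nodes; i++) {
        if (bitmap_test(node_cpu[i], idx)) {
            return i;
        }
    }
    return nb_numa_nodes;                  /* no mapping found */
}

int main(void)
{
    numa_alloc_node_cpu_masks(16);         /* e.g. a machine with maxcpus=16 */
    nb_numa_nodes = 2;
    node_cpu[1][0] |= 1UL << 3;            /* pretend CPU 3 belongs to node 1 */
    return get_node_for_cpu(3) == 1 ? 0 : 1;
}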