Srikar Dronamraju writes:
> > Regardless, I have an annoying question :-) Isn't it possible that,
> > while Linux is calling vphn_get_nid() for each logical cpu in sequence,
> > the platform could change a virtual processor's node assignment,
> > potentially causing sibling threads to get different node assignments
> > and [...]
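FWIW, one way to sidestep the race described above is to resolve the
node once per core and reuse it for every sibling thread. A minimal
sketch, assuming vphn_get_nid() from this series; the wrapper name
example_nid_for_cpu() is hypothetical:

#include <asm/cputhreads.h>

/*
 * Hypothetical sketch, not from the patch: derive the node from the
 * first thread of the core so siblings cannot disagree, even if the
 * platform re-homes the virtual processor mid-walk.
 */
static int example_nid_for_cpu(int cpu)
{
	/* Every thread of a core asks on behalf of the first sibling. */
	return vphn_get_nid(cpu_first_thread_sibling(cpu));
}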
> > While here, fix a problem where of_node_put could be called even when
> > of_get_cpu_node was not successful.
>
> of_node_put() handles NULL arguments, so this should not be necessary.
>
Ok
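For reference, of_node_put() really is a no-op for a NULL argument, so
the error path needs no separate guard. A minimal sketch of the
resulting pattern; example_cpu_to_nid() is a made-up name:

#include <linux/of.h>
#include <linux/numa.h>

static int example_cpu_to_nid(int cpu)
{
	struct device_node *dn = of_get_cpu_node(cpu, NULL);
	int nid = NUMA_NO_NODE;

	if (dn)
		nid = of_node_to_nid(dn);

	/* Safe even when of_get_cpu_node() returned NULL. */
	of_node_put(dn);
	return nid;
}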
> > @@ -875,7 +908,7 @@ void __init mem_topology_setup(void)
> >  	reset_numa_cpu_lookup_table();

Hi Srikar,
Srikar Dronamraju writes:
> Currently the kernel detects if it's running on a shared lpar platform
> and requests home node associativity before the scheduler sched_domains
> are set up. However, between the time NUMA setup is initialized and the
> request for home node associativity, workqueue initializes its per-node
> cpumask. [...]
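Putting the thread's pieces together, the intended ordering might look
roughly like the sketch below: ask the platform for the home node
before subsystems such as workqueue snapshot the cpu-to-node map.
vphn_get_nid() is the helper discussed here, and
firmware_has_feature(FW_FEATURE_VPHN) is the usual powerpc check;
example_numa_setup_cpu() and the fallback example_cpu_to_nid() (from
the earlier sketch) are hypothetical names used only for illustration:

#include <asm/firmware.h>
#include <linux/numa.h>

static int example_numa_setup_cpu(int cpu)
{
	int nid = NUMA_NO_NODE;

	/* Shared-LPAR platforms can report a home node via VPHN. */
	if (firmware_has_feature(FW_FEATURE_VPHN))
		nid = vphn_get_nid(cpu);

	/* Otherwise, or on VPHN failure, fall back to the device tree. */
	if (nid == NUMA_NO_NODE)
		nid = example_cpu_to_nid(cpu);

	return nid;
}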