> On Nov 21, 2017, at 8:53 AM, r...@open-mpi.org wrote:
> 
>> On Nov 21, 2017, at 5:34 AM, Noam Bernstein <noam.bernst...@nrl.navy.mil> wrote:
>> 
>>> 
>>> On Nov 20, 2017, at 7:02 PM, r...@open-mpi.org wrote:
>>> 
>>> So there are two options here that will work and hopefully provide you with 
>>> the desired pattern:
>>> 
>>> * if you want the procs to go in different NUMA regions:
>>> $ mpirun --map-by numa:PE=2 --report-bindings -n 2 /bin/true
>>> [rhc001:131460] MCW rank 0 bound to socket 0[core 0[hwt 0-1]], socket 0[core 1[hwt 0-1]]: [BB/BB/../../../../../../../../../..][../../../../../../../../../../../..]
>>> [rhc001:131460] MCW rank 1 bound to socket 1[core 12[hwt 0-1]], socket 1[core 13[hwt 0-1]]: [../../../../../../../../../../../..][BB/BB/../../../../../../../../../..]
>>> 
>>> * if you want the procs to go in the same NUMA region:
>>> $ mpirun --map-by ppr:2:numa:PE=2 --report-bindings -n 2 /bin/true
>>> [rhc001:131559] MCW rank 0 bound to socket 0[core 0[hwt 0-1]], socket 0[core 1[hwt 0-1]]: [BB/BB/../../../../../../../../../..][../../../../../../../../../../../..]
>>> [rhc001:131559] MCW rank 1 bound to socket 0[core 2[hwt 0-1]], socket 0[core 3[hwt 0-1]]: [../../BB/BB/../../../../../../../..][../../../../../../../../../../../..]
>>> 
>>> Reason: the level you are mapping by (e.g., NUMA) must have enough cores in 
>>> it to meet your PE=N directive. If you map by core, then there is only one 
>>> core in that object.
>> 
>> Makes sense.  I’ll try that.  However, if I understand your explanation 
>> correctly, the docs should probably be changed, because they seem to be 
>> suggesting something that will never work.  In fact, would ":PE=N" with 
>> N > 1 ever work for "--map-by core"?  I guess maybe if you have 
>> hyperthreading on, but I’d still argue that’s an unhelpful example, given 
>> how rarely hyperthreading is used in HPC.


I can now confirm that "--map-by numa:PE=2" does indeed work, and seems to give 
good performance.
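
For anyone who wants to double-check from inside the application what the procs 
actually ended up bound to, here is a minimal sketch of mine (not from Open MPI; 
Linux-specific, using the GNU sched_getaffinity()/CPU_* extensions; the file name 
and build/run lines in the comment are just illustrative) that prints each rank's 
OS-level affinity mask:

/* affinity_check.c -- print each rank's CPU affinity mask as seen by the OS.
 * Build:  mpicc -o affinity_check affinity_check.c
 * Run:    mpirun --map-by numa:PE=2 --report-bindings -n 2 ./affinity_check
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char host[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(host, &len);

    /* Ask the kernel which logical CPUs this process may run on. */
    cpu_set_t mask;
    CPU_ZERO(&mask);
    if (sched_getaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_getaffinity");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Build a compact list of the allowed CPU ids, e.g. "0,1,24,25". */
    char cpus[1024] = "";
    for (int c = 0; c < CPU_SETSIZE; c++) {
        if (CPU_ISSET(c, &mask)) {
            char buf[16];
            snprintf(buf, sizeof(buf), "%s%d", cpus[0] ? "," : "", c);
            strncat(cpus, buf, sizeof(cpus) - strlen(cpus) - 1);
        }
    }

    printf("rank %d on %s: allowed CPUs = %s (count %d)\n",
           rank, host, cpus, CPU_COUNT(&mask));

    MPI_Finalize();
    return 0;
}

With PE=2 and hardware threads enabled, each rank should report the logical CPUs 
of the two cores shown in the --report-bindings output above.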

                                                                        thanks,
                                                                        Noam

_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
