A little more on this (since affinity is one of my favorite topics of late :-)).  See my blog entries about what we just did in the 1.7 branch (and SVN trunk):

http://blogs.cisco.com/performance/taking-mpi-process-affinity-to-the-next-level/
http://blogs.cisco.com/performance/process-affinity-in-ompi-v1-7-part-1/
http://blogs.cisco.com/performance/process-affinity-in-ompi-v1-7-part-2/

As Ralph said, the v1.6 series will allow you to bind processes to an entire 
core (i.e., all hyperthreads in a core) or to an entire socket (i.e., all 
hyperthreads in a socket).
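
For reference, here's a quick sketch of the v1.6-style syntax (my_app is just a 
placeholder application name; check mpirun --help on your installation for the 
exact spelling, since these standalone flags were replaced by the 
--map-by/--bind-to form in v1.7):

-----
# v1.6 style: bind each process to one core (i.e., both hyperthreads of that core)
% mpirun -np 4 --report-bindings --bind-to-core ./my_app

# v1.6 style: bind each process to one socket (i.e., all hyperthreads in that socket)
% mpirun -np 4 --report-bindings --bind-to-socket ./my_app
-----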

The v1.7 series will be quite a bit more flexible in its affinity options (note 
that the "Expert" mode described in my blog posts will be coming in v1.7.1; if 
you want to try that now, you'll need to use the SVN trunk).

For example:

mpirun --report-bindings --map-by core --bind-to hwthread ...

That should give you the pattern you want.  Note, however, that it looks like 
we have a bug in this pattern at the moment; until it is fixed, you'll need to 
use the SVN trunk and the "lama" mapper.  The following examples are from a 
Sandy Bridge server with 2 sockets, each with 8 cores, and each core with 2 
hyperthreads:

One hyperthread per core:

-----
% mpirun --mca rmaps lama -np 4 --host svbu-mpi058 --report-bindings --map-by core --bind-to hwthread uptime
[svbu-mpi058:23916] MCW rank 0 bound to socket 0[core 0[hwt 0]]: [B./../../../../../../..][../../../../../../../..]
[svbu-mpi058:23916] MCW rank 1 bound to socket 0[core 1[hwt 0]]: [../B./../../../../../..][../../../../../../../..]
[svbu-mpi058:23916] MCW rank 2 bound to socket 0[core 2[hwt 0]]: [../../B./../../../../..][../../../../../../../..]
[svbu-mpi058:23916] MCW rank 3 bound to socket 0[core 3[hwt 0]]: [../../../B./../../../..][../../../../../../../..]
 06:48:51 up 1 day, 12:08,  0 users,  load average: 0.00, 0.01, 0.05
 06:48:51 up 1 day, 12:08,  0 users,  load average: 0.00, 0.01, 0.05
 06:48:51 up 1 day, 12:08,  0 users,  load average: 0.00, 0.01, 0.05
 06:48:51 up 1 day, 12:08,  0 users,  load average: 0.00, 0.01, 0.05
-----
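
(Reading the --report-bindings output: each pair of square brackets is one 
socket, each slash-separated field inside it is one core, the two characters in 
a field are that core's two hyperthreads, and a "B" marks a hardware thread the 
process is bound to.  So rank 0 above is bound to only the first hyperthread of 
core 0 on socket 0; in the next example you'll see "BB", meaning both 
hyperthreads of the core.)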

Both hyperthreads per core:

-----
% mpirun --mca rmaps lama -np 4 --host svbu-mpi058 --report-bindings --map-by core --bind-to core uptime
[svbu-mpi058:23951] MCW rank 0 bound to socket 0[core 0[hwt 0-1]]: [BB/../../../../../../..][../../../../../../../..]
[svbu-mpi058:23951] MCW rank 1 bound to socket 0[core 1[hwt 0-1]]: [../BB/../../../../../..][../../../../../../../..]
[svbu-mpi058:23951] MCW rank 2 bound to socket 0[core 2[hwt 0-1]]: [../../BB/../../../../..][../../../../../../../..]
[svbu-mpi058:23951] MCW rank 3 bound to socket 0[core 3[hwt 0-1]]: [../../../BB/../../../..][../../../../../../../..]
 06:48:57 up 1 day, 12:09,  0 users,  load average: 0.00, 0.01, 0.05
 06:48:57 up 1 day, 12:09,  0 users,  load average: 0.00, 0.01, 0.05
 06:48:57 up 1 day, 12:09,  0 users,  load average: 0.00, 0.01, 0.05
 06:48:57 up 1 day, 12:09,  0 users,  load average: 0.00, 0.01, 0.05
-----
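
If you want to double-check the topology of your own machine before picking a 
mapping, hwloc's lstopo tool can print it (Open MPI uses hwloc internally for 
affinity).  A minimal sketch, assuming hwloc is installed; the exact options 
may vary across hwloc versions:

-----
# print the machine topology (sockets, cores, hwthreads) as text, skipping I/O devices
% lstopo --of console --no-io
-----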



On Sep 12, 2012, at 8:10 AM, John R. Cary wrote:

> Thanks!
> 
> John
> 
> On 9/12/12 8:05 AM, Ralph Castain wrote:
>> On Sep 12, 2012, at 4:57 AM, "John R. Cary" <c...@txcorp.com> wrote:
>> 
>>> I do in fact want to bind first to one HT of each core
>>> before binding to two HTs of one core.  So that will
>>> be possible in 1.7?
>> Yes - you can get a copy of the 1.7 nightly tarball and experiment with it 
>> in advance, if you like. You'll want
>> 
>> mpirun --map-by core --bind-to hwthread ....
>> 
>> Add --report-bindings to see what happens, but I believe that will do what 
>> you want. You'll map one process to each core, and bind it to only the first 
>> hwthread on that core.
>> 
>> Let me know either way - if it doesn't, we have time to adjust it.
>> 
>>> Thx....John
>>> 
>>> On 9/11/12 11:19 PM, Ralph Castain wrote:
>>>> Not entirely sure I know what you mean. If you are talking about running 
>>>> without specifying binding, then it makes no difference - we'll run 
>>>> wherever the OS puts us, so you would need to tell the OS not to use the 
>>>> virtual cores (i.e., disable HT).
>>>> 
>>>> If you are talking about binding, then pre-1.7 releases all bind to core 
>>>> at the lowest level. On a hyperthread-enabled machine, that binds you to 
>>>> both HT's of a core. Starting with the upcoming 1.7 release, you can bind 
>>>> to the separate HTs, but that doesn't sound like something you want to do.
>>>> 
>>>> HTH
>>>> Ralph
>>>> 
>>>> 
>>>> On Sep 11, 2012, at 6:34 PM, John R. Cary <c...@txcorp.com> wrote:
>>>> 
>>>>> Our code gets little benefit from using virtual cores (hyperthreading),
>>>>> so when we run with mpiexec on a machine with 8 real plus 8 virtual cores,
>>>>> we would like to be certain that it uses only the 8 real cores.
>>>>> 
>>>>> Is there a way to do this with openmpi?
>>>>> 
>>>>> Thx....John


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/

