Yes, sadly the terminology is badly overloaded at this stage :-(

> On Jul 18, 2016, at 9:20 AM, John Hearns <hear...@googlemail.com> wrote:
> 
> Thank you Ralph. I guess the information I did not have in my head was that
> core = physical core (not a hyperthreaded core)
> 
> On 18 July 2016 at 14:45, Ralph Castain <r...@open-mpi.org> wrote:
> It sounds like you just want to bind procs to cores, since each core is 
> composed of 2 HTs. So a simple "--map-by core --bind-to core" should do the 
> trick.
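> 
> For example, a minimal sketch of such an invocation (the executable name 
> and process count below are placeholders, not from the original post):
> 
>     mpirun --map-by core --bind-to core --report-bindings -np 20 ./app
> 
> Adding --report-bindings makes mpirun print each rank's binding at launch, 
> so you can confirm that every process landed on its own physical core.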
> 
> FWIW: the affinity settings are controlled by the --bind-to <foo> option. You 
> can use "mpirun -h" to get the list of supported options and a little 
> explanation:
> 
> --bind-to <foo>
> Bind processes to the specified object, defaults to core. Supported options 
> include slot, hwthread, core, l1cache, l2cache, l3cache, socket, numa, board, 
> and none.
> 
> https://www.open-mpi.org/doc/current/man1/mpirun.1.php#sect9
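> 
> The same pattern applies to any of the objects listed above. For instance, 
> a hypothetical run bound at the socket level instead of the core level 
> (again, ./app and the process count are placeholders) might look like:
> 
>     mpirun --map-by socket --bind-to socket -np 2 ./app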
> 
> 
> 
> 
>> On Jul 17, 2016, at 11:25 PM, John Hearns <hear...@googlemail.com> wrote:
>> 
>> Please can someone point me towards the affinity settings for:
>> Open MPI 1.10 used with Slurm version 15
>> 
>> I have some nodes with 2630-v4 processors,
>> so 10 cores per socket / 20 hyperthreads per socket.
>> Hyperthreading is enabled.
>> I would like to set affinity for 20 processes per node,
>> so that the processes are pinned to every second hardware thread, i.e. one 
>> process per physical core.
>> 
>> I'm sure this is quite easy...
>> 
>> Thank you