Sure - for example, if you intend to run 4 threads per process, then --map-by core:pe=4 
(assuming you are running OMPI 1.10 or higher) will bind each process to 4 
cores in a disjoint pattern (i.e., no sharing).
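
For instance, a minimal sketch assuming 2 ranks with 4 threads each (./your_app is 
just a placeholder for your executable):

export OMP_NUM_THREADS=4
mpirun -np 2 --map-by core:pe=4 --report-bindings ./your_app

The --report-bindings option makes mpirun print the binding of each rank, so you 
can confirm the core sets do not overlap.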


> On Jun 22, 2016, at 3:37 AM, Gilles Gouaillardet 
> <gilles.gouaillar...@gmail.com> wrote:
> 
> my point is the way I (almost) always use it is
> export KMP_AFFINITY=compact,granularity=fine
> 
> the trick is I rely on OpenMPI and/or the batch manager to pin MPI tasks on 
> disjoint core sets.
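> 
> for example, if the batch manager (or mpirun) gives rank 0 cores 0-3 and rank 1 
> cores 4-7 (core numbers just for illustration), compact,granularity=fine then 
> pins each rank's 4 OpenMP threads to its own 4 cores, one thread per core.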
> 
> that is obviously not the case with
> mpirun --bind-to none ...
> 
> but that can be achieved with the appropriate mpirun options
> (and I am sure Ralph will post it shortly, and it might already be in the FAQ)
> 
> Cheers,
> 
> Gilles
> 
> On Wednesday, June 22, 2016, Jeff Hammond <jeff.scie...@gmail.com> wrote:
>  KMP_AFFINITY is essential for performance. One just needs to set it to 
> something that distributes the threads properly. 
> 
> Not setting KMP_AFFINITY means no thread affinity, so the threads simply 
> inherit the process affinity mask.
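> 
> (If you want to see where the threads actually land, adding the verbose modifier, 
> e.g. KMP_AFFINITY=verbose,compact,granularity=fine, makes the Intel runtime print 
> the thread-to-core mapping at startup.)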
> 
> Jeff
> 
> On Wednesday, June 22, 2016, Gilles Gouaillardet <gil...@rist.or.jp> wrote:
> my bad, I was assuming KMP_AFFINITY was used
> 
> 
> 
> so let me put it this way:
> 
> do *not* use KMP_AFFINITY with mpirun --bind-to none: with no process binding, 
> every rank sees all the cores, so a compact placement stacks the threads of 
> different ranks onto the same cores and you will very likely end up time 
> sharing ...
> 
> 
> 
> Cheers,
> 
> 
> 
> Gilles
> 
> 
> On 6/22/2016 5:07 PM, Jeff Hammond wrote:
>> Linux should not put more than one thread on a core if there are free cores. 
>>  Depending on cache/bandwidth needs, it may or may not be better to colocate 
>> on the same socket.
>> 
>> KMP_AFFINITY will pin the OpenMP threads.  This is often important for MKL 
>> performance.  See https://software.intel.com/en-us/node/522691 for details.
>> 
>> Jeff
>> 
>> On Wed, Jun 22, 2016 at 9:47 AM, Gilles Gouaillardet <gil...@rist.or.jp> wrote:
>> Remi,
>> 
>> 
>> 
>> Keep in mind this is still suboptimal.
>> 
>> if you run 2 tasks per node, there is a risk that threads from different ranks 
>> end up bound to the same core, which means time sharing and a drop in 
>> performance.
>> 
>> 
>> 
>> Cheers,
>> 
>> 
>> 
>> Gilles
>> 
>> 
>> On 6/22/2016 4:45 PM, remi marchal wrote:
>>> Dear Gilles,
>>> 
>>> Thanks a lot.
>>> 
>>> The mpirun --bind-to none option solved the problem.
>>> 
>>> Thanks a lot,
>>> 
>>> Regards,
>>> 
>>> Rémi
>>> 
>>> 
>>> 
>>> 
>>> 
>>>> On Jun 22, 2016, at 09:34, Gilles Gouaillardet <gil...@rist.or.jp> wrote:
>>>> 
>>>> Remi,
>>>> 
>>>> 
>>>> 
>>>> in the same environment, can you run
>>>> 
>>>> mpirun -np 1 grep Cpus_allowed_list /proc/self/status
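>>>> 
>>>> (a rank bound to a single core will typically print something like 
>>>> "Cpus_allowed_list: 0", while an unbound rank shows the full range, 
>>>> e.g. "Cpus_allowed_list: 0-15" on a 16-core node)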
>>>> 
>>>> 
>>>> it is likely Open MPI allows only one core, and in this case, I suspect MKL 
>>>> refuses to do any time sharing and hence transparently reduces the number of 
>>>> threads to 1.
>>>> /* unless it *does* do time sharing, and you observed 4 threads with the 
>>>> performance of one */
>>>> 
>>>> 
>>>> mpirun --bind-to none ...
>>>> 
>>>> will tell Open MPI *not* to bind each process to a single core, and that 
>>>> should help a bit.
>>>> 
>>>> Note this is suboptimal: you should really ask mpirun to allocate 4 cores per 
>>>> task, but I cannot remember the correct command line for that.
>>>> 
>>>> Cheers,
>>>> 
>>>> Gilles
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On 6/22/2016 4:17 PM, remi marchal wrote:
>>>>> Dear openmpi users,
>>>>> 
>>>>> Today, I faced a strange problem.
>>>>> 
>>>>> I am compiling quantum chemistry software (CASTEP-16) using Intel 16, the 
>>>>> threaded MKL libraries, and Open MPI 1.8.1.
>>>>> 
>>>>> The compilation works fine.
>>>>> 
>>>>> When I set MKL_NUM_THREADS=4 and call the program in serial mode (without 
>>>>> mpirun), it works perfectly and uses 4 threads.
>>>>> 
>>>>> However, when I start the program with mpirun, even with 1 MPI process, the 
>>>>> program runs with only 1 thread.
>>>>> 
>>>>> I have never had this kind of trouble.
>>>>> 
>>>>> Does anyone have an explanation?
>>>>> 
>>>>> Regards,
>>>>> 
>>>>> Rémi
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>> 
>>> 
>> 
>> 
>> 
>> 
>> -- 
>> Jeff Hammond
>> jeff.scie...@gmail.com
>> http://jeffhammond.github.io/
>> 
> 
> 
> -- 
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
