The OMP_PROC_BIND=CLOSE approach works, except that it binds each thread to a
single hardware thread only (when hyper-threading is enabled). For example,
running 2 processes per node, each with 4 threads, with the command below, the
per-thread affinity reported by sched_getaffinity() comes out as follows.

mpirun -np 2 --map-by ppr:2:node:PE=4 ./a.out

Rank 0 Thread 0, tid 12173, affinity 0
Rank 0 Thread 1, tid 12184, affinity 1
Rank 0 Thread 2, tid 12185, affinity 2
Rank 0 Thread 3, tid 12186, affinity 3

Rank 1 Thread 0, tid 12174, affinity 4
Rank 1 Thread 1, tid 12181, affinity 5
Rank 1 Thread 2, tid 12182, affinity 6
Rank 1 Thread 3, tid 12183, affinity 7

If the threads had been bound to full cores (both hardware threads of each
core), these lines should have looked like "Rank 0 Thread 0, tid 12173,
affinity *0,24*".
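
For reference, here is a minimal sketch of the kind of per-thread query that
produces output like the above (illustrative only, not the exact code I used;
error handling omitted):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <omp.h>

/* Print the Linux affinity mask of every OpenMP thread in this rank. */
static void print_thread_affinity(int rank)
{
    #pragma omp parallel
    {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        /* pid 0 means "the calling thread" for sched_getaffinity() */
        sched_getaffinity(0, sizeof(mask), &mask);
        pid_t tid = (pid_t) syscall(SYS_gettid);

        #pragma omp critical
        {
            printf("Rank %d Thread %d, tid %d, affinity",
                   rank, omp_get_thread_num(), (int) tid);
            for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
                if (CPU_ISSET(cpu, &mask))
                    printf(" %d", cpu);
            printf("\n");
        }
    }
}

With the binding shown above, each thread's mask contains exactly one CPU,
which is why a single number is printed per thread.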

Reading through the OpenMP documentation, it seems there's no way to bind one
thread to multiple hardware threads using the provided environment settings
alone. Intel's runtime seems to support that, though.
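
(If I remember correctly, Intel's runtime accepts something like
KMP_AFFINITY=granularity=core,compact, which keeps each thread's affinity mask
at core granularity, i.e. covering both hardware threads of a core. I haven't
verified that on this machine, though.)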



On Wed, Jun 29, 2016 at 1:20 AM, Saliya Ekanayake <esal...@gmail.com> wrote:

> Thank you, Ralph and Gilles.
>
> I didn't know about the OMPI_COMM_WORLD_LOCAL_RANK variable. Essentially,
> this means I should be able to wrap my application call in a shell script
> and have mpirun invoke that. Then within the script I can query this
> variable and set correct OMP env variable, correct?
>
> Gilles, yes, the MPI command correctly binds each process to x cores. I
> think it should be OMP_PROC_BIND=CLOSE according to
> https://gcc.gnu.org/onlinedocs/libgomp/OMP_005fPROC_005fBIND.html.
>
> I'll check these two options.
>
> Thanks,
> Saliya
>
> On Tue, Jun 28, 2016 at 11:59 PM, Gilles Gouaillardet <gil...@rist.or.jp>
> wrote:
>
>> Can't you simply
>>
>> export OMP_PROC_BIND=1
>>
>>
>> Assuming mpirun has the correct command line (e.g. it correctly binds
>> tasks to x cores so the x OpenMP threads will be individually bound to
>> each core), each task is bound to a disjoint cpuset, so I guess GOMP will
>> bind OpenMP threads within the given cpuset.
>>
>> /* at least this is what the Intel runtime is doing */
>>
>>
>> Cheers,
>>
>>
>> Gilles
>>
>> On 6/29/2016 12:47 PM, Ralph Castain wrote:
>>
>> Why don’t you have your application look at
>> the OMPI_COMM_WORLD_LOCAL_RANK envar, and then use that to calculate the
>> offset location for your threads (i.e., local rank 0 is on socket 0, local
>> rank 1 is on socket 1, etc.). You can then putenv the correct value of the
>> GOMP envar
>>
>>
>> On Jun 28, 2016, at 8:40 PM, Saliya Ekanayake <esal...@gmail.com> wrote:
>>
>> Hi,
>>
>> I am trying to do something like below with OpenMPI and OpenMP (threads).
>>
>> <image.png>
>>
>> I was trying to use the explicit thread affinity with GOMP_CPU_AFFINITY
>> environment variable as described here (
>> https://gcc.gnu.org/onlinedocs/libgomp/GOMP_005fCPU_005fAFFINITY.html).
>>
>> However, both P0 and P1 processes will read the same GOMP_CPU_AFFINITY
>> and will place threads on the same set of cores.
>>
>> Is there a way to overcome this and pass a process-specific affinity scheme
>> to OpenMP when running with OpenMPI? For example, can I say T0 of P0 should
>> be on Core 0, but T0 of P1 should be on Core 4?
>>
>> P.S. I can manually achieve this within the program using
>> *sched_setaffinity()*, but that's not portable.
>>
>> Thank you,
>> Saliya
>>
>> --
>> Saliya Ekanayake
>> Ph.D. Candidate | Research Assistant
>> School of Informatics and Computing | Digital Science Center
>> Indiana University, Bloomington
>>
>>
>
>
>
> --
> Saliya Ekanayake
> Ph.D. Candidate | Research Assistant
> School of Informatics and Computing | Digital Science Center
> Indiana University, Bloomington
>
>


-- 
Saliya Ekanayake
Ph.D. Candidate | Research Assistant
School of Informatics and Computing | Digital Science Center
Indiana University, Bloomington
