I see, I think. The locality of every process is stored in the ompi_proc_t for 
that process, in the proc_flags field. You can find the definitions of the flag 
values in opal/mca/hwloc/hwloc.h.
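
For example, a minimal sketch against the OMPI internals (the exact header 
paths and the ompi_comm_peer_lookup() helper may differ across versions, so 
treat this as illustrative rather than definitive):

    #include <stdbool.h>
    #include "ompi/communicator/communicator.h"
    #include "ompi/proc/proc.h"
    #include "opal/mca/hwloc/hwloc.h"

    /* Return true if peer rank "r" in "comm" is on our socket.
     * ompi_comm_peer_lookup() yields the peer's ompi_proc_t; the
     * OPAL_PROC_ON_LOCAL_* macros test the locality bits that were
     * stored in proc_flags at launch. */
    static bool peer_on_my_socket(ompi_communicator_t *comm, int r)
    {
        ompi_proc_t *proc = ompi_comm_peer_lookup(comm, r);
        return OPAL_PROC_ON_LOCAL_SOCKET(proc->proc_flags);
    }

There are analogous macros for the other locality levels 
(OPAL_PROC_ON_LOCAL_NODE, etc.); see the header for the full list.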



On Feb 10, 2013, at 1:57 PM, Brice Goglin <brice.gog...@inria.fr> wrote:

> Inside the OMPI implementation. He wants to use locality information for
> some sort of collective algorithm tuning (or something like that). He
> needs the locality of all local ranks, as far as I understand. I don't
> know whether that's ORTE or not, but it's inside some OMPI component at least.
> 
> Brice
> 
> 
> 
> On 10/02/2013 22:47, Ralph Castain wrote:
>> I honestly have no idea what you mean. Are you talking about inside an MPI 
>> application? Do you mean from inside the MPI layer? Inside ORTE? Inside an 
>> ORTE daemon?
>> 
>> 
>> On Feb 10, 2013, at 1:41 PM, Brice Goglin <brice.gog...@inria.fr> wrote:
>> 
>>> What about *inside* OMPI?
>>> 
>>> Brice
>>> 
>>> 
>>> 
>>> On 10/02/2013 21:16, Ralph Castain wrote:
>>>> There is no MPI standard call to get the binding. He could try the 
>>>> MPI extensions, depending on which version of OMPI he's using; they 
>>>> are available in v1.6 and above.
>>>> 
>>>> See "man OMPI_Affinity_str" for details (assuming you included the OMPI 
>>>> man pages in your MANPATH), or look at it online at
>>>> 
>>>> http://www.open-mpi.org/doc/v1.6/man3/OMPI_Affinity_str.3.php
>>>> 
>>>> Remember, you have to configure with --enable-mpi-ext in order to enable 
>>>> the extensions.
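>>>> 
>>>> As a rough sketch modeled on the man page example (the format constant 
>>>> and the three output strings are from the v1.6 extension; double-check 
>>>> the man page for your version):
>>>> 
>>>>     #include <stdio.h>
>>>>     #include <mpi.h>
>>>>     #include <mpi-ext.h>
>>>> 
>>>>     int main(int argc, char *argv[])
>>>>     {
>>>>         char ompi_bound[OMPI_AFFINITY_STRING_MAX];
>>>>         char current_binding[OMPI_AFFINITY_STRING_MAX];
>>>>         char exists[OMPI_AFFINITY_STRING_MAX];
>>>>         int rank;
>>>> 
>>>>         MPI_Init(&argc, &argv);
>>>>         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>>> 
>>>>         /* Fill the three strings with human-readable affinity info */
>>>>         OMPI_Affinity_str(OMPI_AFFINITY_RSRC_STRING_FMT,
>>>>                           ompi_bound, current_binding, exists);
>>>>         printf("rank %d is bound to: %s\n", rank, current_binding);
>>>> 
>>>>         MPI_Finalize();
>>>>         return 0;
>>>>     }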
>>>> 
>>>> 
>>>> On Feb 10, 2013, at 12:08 AM, Brice Goglin <brice.gog...@inria.fr> wrote:
>>>> 
>>>>> I've been talking with Kranthi offline; he wants to use locality info
>>>>> inside OMPI. He needs the binding info from *inside* MPI. From 10,000
>>>>> feet, it looks like communicator->rank[X]->locality_info as a
>>>>> hwloc object or as a hwloc bitmap.
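>>>>> 
>>>>> For reference, a process can already get its *own* binding as a hwloc 
>>>>> bitmap with the standard hwloc API (assuming a topology initialized 
>>>>> via hwloc_topology_init()/hwloc_topology_load(); getting the other 
>>>>> ranks' bindings is the open question here):
>>>>> 
>>>>>     #include <hwloc.h>
>>>>> 
>>>>>     /* Fetch the calling process's CPU binding as a hwloc bitmap;
>>>>>      * the caller owns the returned bitmap and must free it with
>>>>>      * hwloc_bitmap_free(). */
>>>>>     static hwloc_bitmap_t my_binding(hwloc_topology_t topo)
>>>>>     {
>>>>>         hwloc_bitmap_t set = hwloc_bitmap_alloc();
>>>>>         hwloc_get_cpubind(topo, set, HWLOC_CPUBIND_PROCESS);
>>>>>         return set;
>>>>>     }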
>>>>> 
>>>>> Brice
>>>>> 
>>>>> 
>>>>> 
>>>>> On 10/02/2013 06:07, Ralph Castain wrote:
>>>>>> Add --report-bindings to the mpirun cmd line
>>>>>> 
>>>>>> Remember, we do not bind processes by default, so you will also need 
>>>>>> to specify a binding policy (by core, by socket, etc.) on the 
>>>>>> cmd line
>>>>>> 
>>>>>> See "mpirun -h" for the options
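>>>>>> 
>>>>>> For example, using the v1.6-style option syntax (newer releases spell 
>>>>>> it "--bind-to core" instead):
>>>>>> 
>>>>>>     mpirun -np 4 --bind-to-core --report-bindings ./a.out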
>>>>>> 
>>>>>> On Feb 9, 2013, at 8:46 PM, Kranthi Kumar <kranthi...@gmail.com> wrote:
>>>>>> 
>>>>>>> Hello Sir,
>>>>>>> 
>>>>>>> I need a way to find out, from inside the implementation, where each 
>>>>>>> rank runs. 
>>>>>>> How do I know the binding of each rank in an MPI application? 
>>>>>>> 
>>>>>>> Thank You
>>>>>>> -- 
>>>>>>> Kranthi

