On Jul 27, 2010, at 11:18 PM, Yves Caniou wrote:

> On Wednesday 28 July 2010 at 06:03:21, Nysal Jan wrote:
>> OMPI_COMM_WORLD_RANK can be used to get the MPI rank. For other environment
>> variables -
>> http://www.open-mpi.org/faq/?category=running#mpi-environmental-variables
> 
> Are processes assigned to nodes sequentially, so that I can get the NODE 
> number from $OMPI_COMM_WORLD_RANK modulo the number of procs per node?

By default, yes: ranks are mapped by slot, so with 16 procs per node ranks 0-15 
land on the first node, 16-31 on the second, and so on. The node index is 
therefore the rank divided by 16, and the rank modulo 16 gives the position 
within the node. However, you can select alternative mapping methods.
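
For example, a wrapper along these lines could compute the binding from 
$OMPI_COMM_WORLD_RANK and hand the process off to numactl. This is an untested 
sketch: the script and binary names are placeholders, PPN=16 matches your 
nodes, and the 4-cores-per-NUMA-node split is an assumption you would adjust 
to your real topology:

    #!/bin/sh
    # numa_wrap.sh (placeholder name), launched as: mpirun -np 64 ./numa_wrap.sh ./your_app
    RANK=$OMPI_COMM_WORLD_RANK      # global rank, exported by Open MPI
    PPN=16                          # procs per node on this machine
    LOCAL=$(( RANK % PPN ))         # position within the node, assuming the default sequential mapping
    NUMA=$(( LOCAL / 4 ))           # assumed 4 cores per NUMA node; adjust to the real layout
    exec numactl --cpunodebind=$NUMA --membind=$NUMA "$@"

(If your Open MPI also exports OMPI_COMM_WORLD_LOCAL_RANK, as described on the 
FAQ page Nysal pointed to, you can use that directly instead of the modulo.)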

Or...you could just use the mpirun cmd line option to report the binding of 
each process as it is started :-)

Do "mpirun -h" to see all the options. The one you want is --report-bindings

> 
>> For processor affinity see this FAQ entry -
>> http://www.open-mpi.org/faq/?category=all#using-paffinity
> 
> Thank you, but that is where I found the information I put in my previous 
> mail, so it doesn't answer my question.

Memory affinity is taken care of under-the-covers when paffinity is active. No 
other options are required.
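
If you want that setting on every run without typing it each time, it can also 
go in the per-user MCA parameter file (a standard Open MPI mechanism):

    echo "mpi_paffinity_alone = 1" >> $HOME/.openmpi/mca-params.conf

mpirun picks it up automatically, and the maffinity components you listed 
(first_use, libnuma) then take care of memory locality.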


> 
> .Yves.
> 
>> --Nysal
>> 
>> On Wed, Jul 28, 2010 at 9:04 AM, Yves Caniou <yves.can...@ens-lyon.fr> wrote:
>>> Hi,
>>> 
>>> I have some performance issues on a parallel machine composed of nodes of
>>> 16 procs each. The application is launched on multiples of 16 procs for
>>> given numbers of nodes.
>>> I was told by people using MX MPI on this machine to attach a script to
>>> mpiexec that runs 'numactl', in order to make the execution performance
>>> stable.
>>> 
>>> Looking at the FAQ (the oldest entry is for Open MPI v1.3?), I saw that
>>> the solution for me might be to use --mca mpi_paffinity_alone 1
>>> 
>>> Is that correct? -- BTW, I have both memory and processor affinity:
>>>> ompi_info | grep affinity
>>> 
>>>          MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4.2)
>>>          MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.2)
>>>          MCA maffinity: libnuma (MCA v2.0, API v2.0, Component v1.4.2)
>>> Does it handle memory too, or do I have to use another option like
>>> --mca mpi_maffinity 1?
>>> 
>>> Still, I would like to test the numactl solution. Does Open MPI provide an
>>> equivalent to $MXMPI_ID, which at least gives the NODE on which a process
>>> is launched by Open MPI, so that I can adapt the script that was given
>>> to me?
>>> 
>>> Tkx.
>>> 
>>> .Yves.
>>> _______________________________________________
>>> users mailing list
>>> us...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 
> 
> -- 
> Yves Caniou
> Associate Professor at Université Lyon 1,
> Member of the team project INRIA GRAAL in the LIP ENS-Lyon,
> Délégation CNRS in Japan French Laboratory of Informatics (JFLI),
>  * in Information Technology Center, The University of Tokyo,
>    2-11-16 Yayoi, Bunkyo-ku, Tokyo 113-8658, Japan
>    tel: +81-3-5841-0540
>  * in National Institute of Informatics
>    2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan
>    tel: +81-3-4212-2412 
> http://graal.ens-lyon.fr/~ycaniou/
> 