OMPI_COMM_WORLD_RANK can be used to get the MPI rank. For other environment
variables, see
http://www.open-mpi.org/faq/?category=running#mpi-environmental-variables
For processor affinity, see this FAQ entry:
http://www.open-mpi.org/faq/?category=all#using-paffinity
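
As a rough illustration (a sketch, not a tested recipe): it assumes a recent
Open MPI that also exports OMPI_COMM_WORLD_LOCAL_RANK, and it guesses 4 NUMA
nodes per 16-proc host -- check `numactl --hardware` for the real topology.
The script name and the per-host node count are made up for the example:

#!/bin/sh
# bind.sh (hypothetical name) - bind each local rank to one NUMA node,
# then exec the real application so numactl's bindings are inherited.
NUMA_NODES_PER_HOST=4   # assumption; adjust to `numactl --hardware` output
NODE=$(( OMPI_COMM_WORLD_LOCAL_RANK % NUMA_NODES_PER_HOST ))
exec numactl --cpunodebind=$NODE --membind=$NODE "$@"

You would then launch something like:

mpiexec -np 64 ./bind.sh ./your_app

or, letting Open MPI handle the binding itself as you mention below:

mpiexec --mca mpi_paffinity_alone 1 -np 64 ./your_app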

--Nysal

On Wed, Jul 28, 2010 at 9:04 AM, Yves Caniou <yves.can...@ens-lyon.fr> wrote:

> Hi,
>
> I have a performance issue on a parallel machine composed of nodes of 16
> procs each. The application is launched on multiples of 16 procs, for given
> numbers of nodes.
> I was told by people using MX MPI on this machine to attach a wrapper script
> to mpiexec, which runs 'numactl', in order to make the execution performance
> stable.
>
> Looking at the FAQ (the oldest one is for Open MPI v1.3?), I saw that maybe
> the solution would be for me to use --mca mpi_paffinity_alone 1.
> Is that correct? -- BTW, I have both memory and processor affinity:
> >ompi_info | grep affinity
>           MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4.2)
>           MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.2)
>           MCA maffinity: libnuma (MCA v2.0, API v2.0, Component v1.4.2)
> Does it handle memory too, or do I have to use another option like
> --mca mpi_maffinity 1?
>
> Still, I would like to test the numactl solution. Does Open MPI provide an
> equivalent to $MXMPI_ID, which at least gives the NODE on which a process is
> launched by Open MPI, so that I can adapt the script that was given to me?
>
> Tkx.
>
> .Yves.
