Very useful!
Thank you Ralph.

Albert

On Fri 23 May 2014 15:26:58 BST, Ralph Castain wrote:

On May 23, 2014, at 7:14 AM, Albert Solernou <albert.soler...@oerc.ox.ac.uk> 
wrote:

Well,
the problem is that I don't know how to do any of these things

Ah! You might want to read this:

http://www.open-mpi.org/faq/?category=tuning#mca-params

so, more explicitly:
- does OpenMPI accept any environment variable that prevents binding, like 
MV2_ENABLE_AFFINITY does on MVAPICH2?

OMPI_MCA_hwloc_base_binding_policy=none
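As a minimal sketch (assuming a bash-like shell and that "./wrk" stands in for your hybrid application, as elsewhere in this thread), the variable just needs to be exported before launching:

```shell
# Any MCA parameter can be set via an OMPI_MCA_<param> environment
# variable; mpirun and every OMPI process read these at startup.
export OMPI_MCA_hwloc_base_binding_policy=none

# A plain launch should now report no bindings:
mpirun -np 2 --report-bindings ./wrk
```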

- what is the default MCA param file? Is it a runtime file or a configuration 
file? How do I modify it to achieve that?

When you install OMPI, an "etc" directory gets created under the prefix location. In that 
directory is a file "openmpi-mca-params.conf". This is your default MCA param file that 
mpirun (and every OMPI process) reads on startup. You can put any params in there that you want. In 
this case, you'd add a line:

hwloc_base_binding_policy = none
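For instance (a sketch, assuming $OPENMPI_HOME is the same prefix used at configure time, as in the configure line quoted below):

```shell
# Append the setting to the default MCA param file so every mpirun
# picks it up with no flags or environment variables needed.
echo "hwloc_base_binding_policy = none" >> "$OPENMPI_HOME/etc/openmpi-mca-params.conf"
```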

HTH
Ralph


Thanks,
Albert

On 23/05/14 15:02, Ralph Castain wrote:

On May 23, 2014, at 6:58 AM, Albert Solernou <albert.soler...@oerc.ox.ac.uk> 
wrote:

Hi,
thanks a lot for your quick answers. I see my error: it is "--bind-to none" instead 
of "--bind-to-none".

However, I need to be able to run "mpirun -np 2" without any binding argument and get the 
"--bind-to none" behaviour. I don't know if I can export an environment variable to do 
that, and I don't mind re-compiling with some flag I missed, or altering the code.

Any suggestion?

Obviously, you could set a variable in your environment, so I'm assuming there 
is some other limitation in play here? If so, you could always put the MCA 
param in the default MCA param file - we'll pick it up from there.


Albert

On 23/05/14 14:32, Ralph Castain wrote:
Note that the lama mapper described in those slides may not work as it hasn't 
been maintained in a while. However, you can use the map-by and bind-to options 
to do the same things.

If you want to disable binding, you can do so by adding "--bind-to none" to the cmd line, 
or via the MCA param "hwloc_base_binding_policy=none"

If you want to bind your process to multiple cores (say one per thread), then you can use "--map-by 
core:pe=N". Many hybrid users prefer to bind to a socket, with one process for each socket - that can be 
done with "--map-by socket --bind-to socket". This keeps all the threads in the same NUMA domain. 
If you aren't sure that each socket is its own NUMA domain, you can alternatively use "--map-by numa 
--bind-to numa" - that'll keep you in your own NUMA domain regardless of whether that's at the socket 
level or elsewhere.
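The options above can be sketched as command lines (using the "./wrk" executable from the original post; the pe=N count is an assumption you'd match to your thread count):

```shell
# Disable binding entirely (let the OpenMP runtime place threads):
mpirun -np 2 --bind-to none ./wrk

# Bind each process to N cores, e.g. one core per OpenMP thread
# (replace 4 with your OMP_NUM_THREADS):
mpirun -np 2 --map-by core:pe=4 ./wrk

# One process per socket, each bound to its whole socket:
mpirun -np 2 --map-by socket --bind-to socket ./wrk

# One process per NUMA domain, independent of socket layout:
mpirun -np 2 --map-by numa --bind-to numa ./wrk
```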

I'm working on adding a full description of the mapping/binding system to our 
web site.

HTH
Ralph

On May 23, 2014, at 6:22 AM, Brock Palen <bro...@umich.edu> wrote:

Albert,

Actually, doing affinity correctly for hybrid jobs got easier in OpenMPI 1.7 and 
newer. In the past you had to make a lot of assumptions (stride by node, etc.).

Now you can define a layout:

http://blogs.cisco.com/performance/eurompi13-cisco-slides-open-mpi-process-affinity-user-interface/

Brock Palen
www.umich.edu/~brockp
CAEN Advanced Computing
XSEDE Campus Champion
bro...@umich.edu
(734)936-1985



On May 23, 2014, at 9:19 AM, Albert Solernou <albert.soler...@oerc.ox.ac.uk> 
wrote:

Hi,
after compiling and installing OpenMPI 1.8.1, I find that OpenMPI is pinning 
processes onto cores. Although this may be desirable in some cases, it is a 
complete disaster when running hybrid OpenMP-MPI applications. Therefore, I 
want to disable this behaviour, but don't know how.

I configured OpenMPI with:
./configure \
        --prefix=$OPENMPI_HOME \
        --with-psm \
        --with-tm=/system/software/arcus/lib/PBS/11.3.0.121723 \
        --enable-mpirun-prefix-by-default \
        --enable-mpi-thread-multiple

and:
ompi_info | grep paffinity
does not report anything. However,
mpirun -np 2 --report-bindings ./wrk
reports bindings:
[login3:04574] MCW rank 1 bound to socket 0[core 1[hwt 0-1]]: 
[../BB/../../../../../..][../../../../../../../..]
[login3:04574] MCW rank 0 bound to socket 0[core 0[hwt 0-1]]: 
[BB/../../../../../../..][../../../../../../../..]
but I cannot disable them, as:
mpirun -np 2 --bind-to-none ./wrk
returns:
mpirun: Error: unknown option "--bind-to-none"

Any idea on what went wrong?

Best,
Albert

--
---------------------------------
Dr. Albert Solernou
Research Associate
Oxford Supercomputing Centre,
University of Oxford
Tel: +44 (0)1865 610631
---------------------------------
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users






