Albert,

Actually doing affinity correctly for hybrid jobs got easier in Open MPI 1.7 and newer. In the past you had to make a lot of assumptions (stride by node, etc.).
Now you can define a layout:
http://blogs.cisco.com/performance/eurompi13-cisco-slides-open-mpi-process-affinity-user-interface/

Brock Palen
www.umich.edu/~brockp
CAEN Advanced Computing
XSEDE Campus Champion
bro...@umich.edu
(734) 936-1985

On May 23, 2014, at 9:19 AM, Albert Solernou <albert.soler...@oerc.ox.ac.uk> wrote:

> Hi,
> after compiling and installing OpenMPI 1.8.1, I find that OpenMPI is pinning
> processes onto cores. Although this may be desirable in some cases, it is a
> complete disaster when running hybrid OpenMP-MPI applications. Therefore, I
> want to disable this behaviour, but don't know how.
>
> I configured OpenMPI with:
>   ./configure \
>     --prefix=$OPENMPI_HOME \
>     --with-psm \
>     --with-tm=/system/software/arcus/lib/PBS/11.3.0.121723 \
>     --enable-mpirun-prefix-by-default \
>     --enable-mpi-thread-multiple
>
> and:
>   ompi_info | grep paffinity
> does not report anything. However,
>   mpirun -np 2 --report-bindings ./wrk
> reports bindings:
>   [login3:04574] MCW rank 1 bound to socket 0[core 1[hwt 0-1]]:
>   [../BB/../../../../../..][../../../../../../../..]
>   [login3:04574] MCW rank 0 bound to socket 0[core 0[hwt 0-1]]:
>   [BB/../../../../../../..][../../../../../../../..]
> but they cannot be disabled, as:
>   mpirun -np 2 --bind-to-none ./wrk
> returns:
>   mpirun: Error: unknown option "--bind-to-none"
>
> Any idea on what went wrong?
>
> Best,
> Albert
>
> --
> ---------------------------------
> Dr. Albert Solernou
> Research Associate
> Oxford Supercomputing Centre,
> University of Oxford
> Tel: +44 (0)1865 610631
> ---------------------------------
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
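Concretely, in the 1.7/1.8 series binding is controlled by `--bind-to` (which takes an argument, so the one-word `--bind-to-none` spelling from older releases is gone) together with `--map-by`. A sketch of what I'd try, assuming the flag spellings from the v1.8 mpirun man page (`./wrk` and the 8-threads-per-rank count are just placeholders for your own binary and geometry):

```shell
# Turn binding off entirely (the v1.8 replacement for the old --bind-to-none):
mpirun -np 2 --bind-to none ./wrk

# Usually better for hybrid OpenMP-MPI: give each rank its own block of
# cores (PE = processing elements per rank) and let the OpenMP threads
# spread within it, e.g. 2 ranks with 8 cores each:
export OMP_NUM_THREADS=8
mpirun -np 2 --map-by socket:PE=8 --bind-to core --report-bindings ./wrk
```

The second form keeps each rank's threads on one socket, which tends to behave much better for memory locality than unbinding everything; `--report-bindings` will show you the resulting mask so you can verify the layout before a real run.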