On 30.10.2008, at 14:46, Brock Palen wrote:

Any thoughts on this?

We are looking at writing a script that parses $PBS_NODEFILE to create a machinefile and then launching with -machinefile.

When we do that, though, we have to disable tm to avoid an error (-mca pls ^tm), which is far from preferable.
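
Roughly, the wrapper we have in mind looks something like this (untested sketch; "app" and the machinefile name are placeholders, and it assumes the usual $PBS_NODEFILE layout of one line per allocated core with each node's lines grouped together, ppn=2 in our case):

#!/bin/sh
# Keep every other line of $PBS_NODEFILE, i.e. one slot per pair of cores.
MACHINEFILE=$PBS_O_WORKDIR/machinefile.$PBS_JOBID
awk 'NR % 2 == 1' $PBS_NODEFILE > $MACHINEFILE

export OMP_NUM_THREADS=2
# tm disabled as described above, otherwise we hit the error.
mpirun -mca pls ^tm -machinefile $MACHINEFILE -np `wc -l < $MACHINEFILE` app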

What about redefining the variable $PBS_NODEFILE to point to an adjusted copy of the original file? That way you could even keep the TM startup of the nodes, as mpirun would use the adjusted file AFAICS.

Since you know that you always request 2 cores per node, starting the additional threads is then entirely up to you. As you have two cores per process, that is safe.
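
Something along these lines in the job script might do it (untested sketch; the awk line assumes ppn=2 with each node's entries grouped together, and "app" stands for your binary):

#PBS -l nodes=22:ppn=2

# Write an adjusted copy with one entry per pair of cores and let
# PBS_NODEFILE point to it; mpirun should then read the adjusted
# host list while still launching via TM (AFAICS).
ADJUSTED=$PBS_O_WORKDIR/nodefile.$PBS_JOBID
awk 'NR % 2 == 1' $PBS_NODEFILE > $ADJUSTED
export PBS_NODEFILE=$ADJUSTED

export OMP_NUM_THREADS=2
mpirun -np 22 app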

-- Reuti


Any ideas on how to tell mpirun to launch on only half the CPUs given to it by PBS, while making sure each used CPU has another CPU free next to it on the same node?

Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985



On Oct 25, 2008, at 5:36 PM, Brock Palen wrote:

We have a user with a code that uses threaded solvers inside each MPI rank. They would like to run two threads per process.

The question is how to launch this. The default -byslot placement packs all the processes onto the first sets of CPUs, leaving no CPU free for each process's second thread, and half of the allocated CPUs go unused.

The -bynode option would work in theory if all our nodes had the same number of cores (they do not).

So right now the user did:

#PBS -l nodes=22:ppn=2
export OMP_NUM_THREADS=2
mpirun -np 22 app

Which made me aware of the problem.

How can I basically tell OMPI that a 'slot' is two cores on the same machine? This needs to work inside our Torque-based queueing system.

Sorry if I was not clear about my goal.


Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


