This feature is now available on the trunk; the syntax is "-pernode".

In the absence of a specified number of procs (i.e., no -np given):

"bynode" will launch on all *slots*, with the processes mapped on a bynode
basis.

"byslot" will launch on all *slots*, with the processes mapped on a byslot
basis.

"pernode" will launch one process per node across all nodes.



On 10/7/06 6:57 AM, "Jeff Squyres" <jsquy...@cisco.com> wrote:

> Open MPI does not currently have an option to effect this kind of behavior.
> It basically assumes that you are going to ask for the right number of slots
> for your job.
> 
> I'll file a ticket for a future enhancement to add this behavior.
> 
> 
> On 10/6/06 11:25 AM, "Maestas, Christopher Daniel" <cdma...@sandia.gov>
> wrote:
> 
>> Hello,
>> 
>> I was wondering if Open MPI had a -pernode-like behavior similar to OSC
>> mpiexec ...
>> mpiexec -pernode mpi_hello
>> would launch N MPI processes on N nodes ... no more, no less.
>> 
>> Open MPI will already try to run N*2 processes if you don't specify -np:
>> mpirun mpi_hello
>> launches N*2 MPI processes on N nodes (when using Torque with 2ppn
>> specified in your nodes file).  This is good.
>> 
>> I tried:
>> $ mpirun -nooversubscribe mpi_hello
>> [dn172:09406] [0,0,0] ORTE_ERROR_LOG: All the slots on a given node have
>> been used in file rmaps_rr.c at line 116
>> [dn172:09406] [0,0,0] ORTE_ERROR_LOG: All the slots on a given node have
>> been used in file rmaps_rr.c at line 392
>> [dn172:09406] [0,0,0] ORTE_ERROR_LOG: All the slots on a given node have
>> been used in file rmgr_urm.c at line 428
>> [dn172:09406] mpirun: spawn failed with errno=-126
>> 
>> Here's our env:
>> $ env | grep ^OM
>> OMPI_MCA_btl_mvapi_ib_timeout=18
>> OMPI_MCA_btl_mvapi_use_eager_rdma=0
>> OMPI_MCA_rmaps_base_schedule_policy=node
>> OMPI_MCA_btl_mvapi_ib_retry_count=15
>> OMPI_MCA_oob_tcp_include=eth0
>> OMPI_MCA_mpi_keep_hostnames=1
>> 
>> This helps keep our launch scripts generic enough to run both 1ppn and
>> 2ppn studies easily.
>> 
>> ----
>> echo "Running hello world"
>> mpiexec -pernode mpi_hello
>> echo "Running hello world 2ppn"
>> mpiexec mpi_hello
>> ----
>> 
>> Thanks,
>> -cdm
>> 
>> 
> 

