You have to tell `mpiexec` which nodes you want to use for your application. This is typically done either on the command line or in a file. For now, you could simply do this:
  mpiexec -host node1,node2,node3 -np N ./my_app

where node1,node2,node3,... are the names or IP addresses of the nodes you want to run on, and N is the total number of processes you want executed.

On Apr 19, 2011, at 8:47 AM, mohd naseem wrote:

> sorry sir,
> i am unable to understand what you are saying, because i am a new user of mpi.
> please tell me the details about it, and the command as well.
> thanks
>
> On Tue, Apr 19, 2011 at 2:32 PM, Reuti <re...@staff.uni-marburg.de> wrote:
> Good, then please supply a hostfile with the names of the machines you want
> to run on for a particular run, and give it as an option to `mpiexec`. See the
> options -np and -machinefile.
>
> -- Reuti
>
> On 19.04.2011 at 06:38, mohd naseem wrote:
>
> > sir,
> > when i give the `mpiexec hostname` command,
> > it only gives one hostname. the rest are not shown.
> >
> > On Mon, Apr 18, 2011 at 7:46 PM, Reuti <re...@staff.uni-marburg.de> wrote:
> > On 18.04.2011 at 15:40, chenjie gu wrote:
> >
> > > I am new to Open MPI. I have the following Open MPI setup; however,
> > > it has a problem when running across multiple nodes.
> > > I am trying to build a Beowulf cluster from 6 nodes of our server
> > > (HP ProLiant G460 G7). I have installed Open MPI on one node (at
> > > /mirror):
> > >
> > >   ./configure --prefix=/mirror/openmpi CC=icc CXX=icpc F77=ifort FC=ifort
> > >   make all install
> > >
> > > Using NFS, the directory /mirror was successfully exported to the other
> > > 5 nodes. Now, as I test Open MPI, it runs very well on a single node;
> > > however, it hangs across multiple nodes.
> > >
> > > One possible reason, as far as I know, is that Open MPI uses TCP to
> > > exchange data between nodes, so I am worried about whether there are
> > > firewalls between the nodes, which can be factory-integrated somewhere
> > > (switch/NIC). Could anyone give me some information on this point?
> >
> > It's not only about MPI communication.
> > Beforehand, you need some means to allow the startup of the local orte
> > daemons on each machine, by passphraseless ssh keys or, better, hostbased
> > authentication (http://arc.liv.ac.uk/SGE/howto/hostbased-ssh.html), or
> > enable `rsh` on the machines and tell Open MPI to use it. Is:
> >
> >   mpiexec hostname
> >
> > giving you a list of the involved machines?
> >
> > -- Reuti
> >
> > > Thanks a lot,
> > > Regards,
> > > ArchyGU
> > > Nanyang Technological University
> > > _______________________________________________
> > > users mailing list
> > > us...@open-mpi.org
> > > http://www.open-mpi.org/mailman/listinfo.cgi/users
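To make the -machinefile suggestion above concrete, here is a minimal sketch. The node names and slot counts are placeholders for your own machines; "slots" is the number of processes Open MPI may place on that node.

```shell
# Write a hostfile with one line per node (names are placeholders):
cat > hostfile <<'EOF'
node1 slots=4
node2 slots=4
node3 slots=4
EOF

# Either launch form should then work (12 = 3 nodes x 4 slots):
#   mpiexec -machinefile hostfile -np 12 ./my_app
#   mpiexec -host node1,node2,node3 -np 12 ./my_app
```

With a hostfile in place, -np values up to the total slot count are scheduled round-robin across the listed nodes.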
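The passphraseless-ssh point above is the most common cause of multi-node hangs. A typical setup sketch, assuming OpenSSH and the same placeholder node names (this must be run against your real cluster; it is not a drop-in script):

```shell
# Generate a key pair with an empty passphrase (skip if one already exists):
ssh-keygen -q -t rsa -N "" -f "$HOME/.ssh/id_rsa"

# Install the public key on every node so the local orte daemons can be
# started without a password prompt (node names are placeholders):
for node in node1 node2 node3; do
    ssh-copy-id "$node"
done

# Sanity check: this must return WITHOUT asking for a password,
# otherwise mpiexec will appear to hang at startup:
ssh node1 true
```

If your home directory is on NFS (as with the /mirror export above), generating the key once on the head node is enough, since ~/.ssh is shared by all nodes.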
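On the firewall question: a quick way to check for a host firewall on Linux nodes (assuming iptables; run on each node, as root). This only inspects local rules; a filtering switch would have to be checked separately.

```shell
# List the active firewall rules; chains with policy ACCEPT and no
# REJECT/DROP rules mean the host firewall is not in the way:
iptables -L -n

# Basic reachability test from the head node (name is a placeholder):
ping -c 1 node2
```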