I am using openmpi-1.6.3. What do you mean by "We stopped supporting bproc
after the 1.2 series, though you could always launch via ssh."?
Best Regards,
Shi Wei.
From: r...@open-mpi.org
Date: Thu, 13 Dec 2012 06:37:56 -0800
To: us...@open-mpi.org
Subject: Re: [OMPI users] Cannot run MPI job across nodes using OpenMPI in F17

What version of OMPI are you running? We stopped supporting bproc after the 1.2 
series, though you could always launch via ssh.
On Dec 12, 2012, at 10:25 PM, Ng Shi Wei <nsw_1...@hotmail.com> wrote:

Dear all,
I am new in Linux and clustering. I am setting up a Beowulf Cluster using 
several PCs according to this guide 
http://www.tldp.org/HOWTO/html_single/Beowulf-HOWTO/.

I have set up and configured everything accordingly except for the NFS part,
since I do not need it for my application. I have set up ssh so the nodes can
log in to each other without a password. I started with 2 nodes first. I can
compile and run programs on my head node using Open MPI, but when I try to run
my MPI application across nodes, nothing is displayed. It just hangs.

Head node: master
Client: slave4

The command I used to mpirun across nodes is as below:
mpirun -np 4 --host slave4 output

Since I am not using NFS, I installed Open MPI on every node in the same
location.
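A hang like this is often caused by the remote node not finding Open MPI in a
non-interactive ssh session (PATH and LD_LIBRARY_PATH are not set by
~/.bashrc for non-login shells). A sanity check, assuming the host name
slave4 and a typical /usr install prefix (adjust both to your setup), is to
confirm the remote environment and try a non-MPI command across the nodes
before running the real application:

```shell
# Check that the remote node can find mpirun in a non-interactive shell;
# if this prints nothing, the launch environment is the likely culprit.
ssh slave4 which mpirun
ssh slave4 'echo $LD_LIBRARY_PATH'

# Run a trivial non-MPI program across both nodes first; if this hangs
# too, the problem is launch/network setup, not the application.
mpirun -np 4 --host master,slave4 hostname
```

If `which mpirun` comes back empty, exporting PATH and LD_LIBRARY_PATH in
~/.bashrc on every node (or passing `--prefix /usr` to mpirun) usually
resolves it. Also make sure any firewall between the nodes allows the TCP
connections Open MPI opens back to the head node.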

I am wondering whether I missed any configuration.

Hope someone can help me out of this problem.

Thanks in advance.
Best Regards,
Shi Wei
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

