[OMPI users] looking for serial implementation

2009-04-04 Thread John Wohlbier
I'm sure that I'm not the first person who wants their MPI program to
compile when MPI is not available. It seems like the simplest solution to
this is to have a header file (with implementation, or header file and .c
file) that implements all of the functions for the case when MPI isn't
available on a system. I found something called MPI_STUBS via Google, but
some of the links are broken, and I've also searched the archives. This seems
like the kind of thing that tons of people have done for themselves, and I'm
hoping to avoid doing it for myself.

Anybody have one they'd like to share?

jgw


-- 
John G. Wohlbier


Re: [OMPI users] looking for serial implementation

2009-04-04 Thread doriankrause

John Wohlbier wrote:


I know of this one (haven't tried it myself):

http://wissrech.ins.uni-bonn.de/research/projects/nullmpi/

Regards,
Dorian





___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




[OMPI users] Problem with running openMPI program

2009-04-04 Thread Ankush Kaul
I followed the steps given here to set up an openMPI cluster:
http://www.ps3cluster.umassd.edu/step3mpi.html
My cluster consists of two nodes, master (192.168.67.18) and
slave (192.168.45.65), connected directly through a crossover cable.

After setting up the cluster and configuring the master node, I mounted the /tmp
folder of the master node on the slave node (I had some problems with NFS at
first but worked my way out of it).

Then I copied the 'pi.c' program into the /tmp folder
and successfully compiled it, giving me a binary file 'pi'.

Now when I try to run the binary using the following command

#mpirun -np 2 ./pi

root@192.168.45.65's password:


after entering the password it gives the following error:

bash: orted: command not found
[ccomp.cluster:18963] [0,0,0] ORTE_ERROR_LOG: Timeout in file
base/pls_base_orted_cmds.c at line 275
[ccomp.cluster:18963] [0,0,0] ORTE_ERROR_LOG: Timeout in file
pls_rsh_module.c at line 1166
[ccomp.cluster:18963] [0,0,0] ORTE_ERROR_LOG: Timeout in file errmgr_hnp.c
at line 90
[ccomp.cluster:18963] ERROR: A daemon on node 192.168.45.65 failed to start
as expected.
[ccomp.cluster:18963] ERROR: There may be more information available from
[ccomp.cluster:18963] ERROR: the remote shell (see above).
[ccomp.cluster:18963] ERROR: The daemon exited unexpectedly with status 127.
[ccomp.cluster:18963] [0,0,0] ORTE_ERROR_LOG: Timeout in file
base/pls_base_orted_cmds.c at line 188
[ccomp.cluster:18963] [0,0,0] ORTE_ERROR_LOG: Timeout in file
pls_rsh_module.c at line 1198
--------------------------------------------------------------------------
mpirun was unable to cleanly terminate the daemons for this job. Returned
value Timeout instead of ORTE_SUCCESS.
--------------------------------------------------------------------------
I am totally lost now, as this is the first time I am working on a cluster
project, and I need some help.

Thank you
Ankush


[OMPI users] Problem with installing OpenMPI on compute node

2009-04-04 Thread Ankush Kaul
I followed the steps given here to set up an openMPI cluster:
http://www.ps3cluster.umassd.edu/step3mpi.html
My cluster consists of two nodes, a master (running Fedora 10) and a
compute (CentOS 5.2) node, connected directly through a crossover cable.

I installed openmpi on both. There were many folders and files on the master
node but none on the compute node. I again ran yum install openmpi
successfully on the compute node, but still no openmpi folders or files
were installed.

Why is this happening? Is it because of the OS?


Re: [OMPI users] Problem with running openMPI program

2009-04-04 Thread Jeff Squyres

It might be best to:

1. Set up a non-root user to run MPI applications
2. Set up SSH keys between the hosts for this non-root user so that you
can "ssh  uptime" and not be prompted for a password/passphrase


This should help.
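The two steps could be sketched roughly like this. The user name here ("mpiuser") is an example, not something from this thread; substitute your own non-root account, and note that the key is generated with an empty passphrase so the login is fully non-interactive:

```shell
# 1. Create the non-root user on both nodes (run as root on each):
useradd -m mpiuser

# 2. As mpiuser on the master, generate a key pair and push the
#    public key to the slave:
ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa"   # empty passphrase
ssh-copy-id mpiuser@192.168.45.65                # install key on the slave

# Verify: this should print the slave's uptime without asking
# for a password.
ssh mpiuser@192.168.45.65 uptime
```

Once "ssh ... uptime" runs silently, mpirun can launch its daemons on the remote node without the password prompt shown above.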


On Apr 4, 2009, at 5:51 AM, Ankush Kaul wrote:






--
Jeff Squyres
Cisco Systems




Re: [OMPI users] Problem with installing OpenMPI on compute node

2009-04-04 Thread Jeff Squyres
I'm not too familiar with that tutorial or your particular method of
installation. In general, Open MPI needs a bunch of files to be
available on all nodes (e.g., see if you can find "mpirun" on all
nodes). See these FAQ entries:


http://www.open-mpi.org/faq/?category=running#run-prereqs
http://www.open-mpi.org/faq/?category=running#adding-ompi-to-path
http://www.open-mpi.org/faq/?category=running#mpirun-prefix
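A quick sanity check you can run on each node (just a sketch, not taken from those FAQ pages): confirm the launcher is actually installed and on the PATH, since a missing "orted"/"mpirun" on the remote side is exactly what produces "command not found" failures.

```shell
# Report whether Open MPI's launcher is visible on this node.
if command -v mpirun >/dev/null 2>&1; then
    echo "mpirun found at $(command -v mpirun)"
else
    echo "mpirun not on PATH on this node"
fi
```

Run it both locally and via a non-interactive shell ("ssh <node> 'command -v mpirun'"), since remote logins may use a different PATH than an interactive one.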


On Apr 4, 2009, at 7:23 AM, Ankush Kaul wrote:





--
Jeff Squyres
Cisco Systems