Re: [OMPI users] Test Program works on 1, 2 or 3 nodes. Hangs on 4 or more nodes.

2010-09-20 Thread ETHAN DENEAULT
David, I did try that after I sent the original mail, but the -np 4 flag doesn't fix the problem; the program still hangs. I've also double-checked the iptables rules for both the image and the master node, and all ports are set to accept. Cheers, Ethan -- Dr. Ethan Deneault Assistant Professor of

Re: [OMPI users] Test Program works on 1, 2 or 3 nodes. Hangs on 4 or more nodes.

2010-09-20 Thread David Zhang
I don't know if this will help, but try mpirun --machinefile testfile -np 4 ./test.out for running 4 processes. On Mon, Sep 20, 2010 at 3:00 PM, Ethan Deneault wrote: > All, > > I am running Scientific Linux 5.5, with OpenMPI 1.4 installed into the > /usr/lib/openmpi/1.4-gcc/ directory. I know th
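The contents of testfile are not shown in the thread. A typical Open MPI machinefile is simply one hostname per line, optionally with a slot count; the node names below are hypothetical placeholders, not the actual hosts in this cluster:

    node01 slots=1
    node02 slots=1
    node03 slots=1
    node04 slots=1

With a file like that, mpirun --machinefile testfile -np 4 ./test.out places one process on each of the four listed nodes.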

[OMPI users] Thread as MPI process

2010-09-20 Thread Mikael Lavoie
Hi, I want to know whether there is an implementation that permits running a single host process on the master of the cluster, which then spawns one process per thread defined by -np X on the hosts specified in the host list. The master would then act as a synchronized sender/collector of the work done. It would
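What Mikael describes, a master process that launches workers on the listed hosts and collects their results, maps most closely onto MPI's dynamic process management. Below is a minimal sketch of the master side only, assuming the standard MPI_Comm_spawn API; the "./worker" executable name and the count of 4 are illustrative assumptions, not details from Mikael's mail.

    /* master.c -- hedged sketch only: spawn n workers and gather one int
     * from each over the intercommunicator.  The "./worker" executable and
     * n = 4 are illustrative assumptions, not details from the thread. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_Comm workers;
        int n = 4;            /* number of workers to spawn (assumed) */
        int results[4];
        int i;

        MPI_Init(&argc, &argv);

        /* Spawn the workers; placement can be steered with an MPI_Info
         * object (e.g. a hostfile key) instead of MPI_INFO_NULL. */
        MPI_Comm_spawn("./worker", MPI_ARGV_NULL, n, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &workers, MPI_ERRCODES_IGNORE);

        /* Collect one integer result from each spawned worker. */
        for (i = 0; i < n; i++)
            MPI_Recv(&results[i], 1, MPI_INT, i, 0, workers,
                     MPI_STATUS_IGNORE);

        for (i = 0; i < n; i++)
            printf("worker %d returned %d\n", i, results[i]);

        MPI_Comm_disconnect(&workers);
        MPI_Finalize();
        return 0;
    }

The worker side would obtain the intercommunicator with MPI_Comm_get_parent and send its result back to rank 0 of the parent; whether this matches the thread-per-process layout Mikael has in mind is not clear from the mail, which is why this is only a sketch.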

[OMPI users] Test Program works on 1, 2 or 3 nodes. Hangs on 4 or more nodes.

2010-09-20 Thread Ethan Deneault
All, I am running Scientific Linux 5.5, with OpenMPI 1.4 installed into the /usr/lib/openmpi/1.4-gcc/ directory. I know this is typically /opt/openmpi, but Red Hat does things differently. I have my PATH and LD_LIBRARY_PATH set correctly, because the test program does compile and run. The clu
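The preview cuts off before the test program itself, so the code below is only a stand-in: a minimal ring-pass test in C, assuming the usual blocking send/receive pattern that hangs when some pair of nodes cannot open a connection to each other. The actual program in the thread may differ (it may well be Fortran).

    /* test.c -- a stand-in for the program discussed in the thread, not the
     * original.  Rank 0 starts a token around a ring; every other rank
     * forwards it.  A blocked connection between any pair of nodes makes
     * the run hang at the corresponding send/receive. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, token;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
            printf("run with at least 2 processes\n");
        } else if (rank == 0) {
            token = 42;
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("token made it around %d ranks\n", size);
        } else {
            MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

Built with mpicc test.c -o test.out, this runs with the mpirun lines discussed above; with 4 or more processes spread across nodes, a blocked route between any two of them shows up as a hang in the ring rather than an error message.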

[OMPI users] OpenMPI on the ARM processor architecture?

2010-09-20 Thread Ken Mighell
Dear OpenMPI, Has there been any consideration of porting OpenMPI to the ARM processor? Plans are afoot to launch 7 ARM processors on a "Stage Coach" card in a 3U CubeSat. NASA's NMP (New Millennium Program) ST-8 (Space Technology 8) DM (Dependable Multiprocessor) uses OpenMPI as the foun

[OMPI users] Continued functionality across a SLES10 to SLES11 upgrade ...

2010-09-20 Thread Richard Walsh
All, I was not expecting things to work, and find that codes compiled using OpenMPI 1.4.1 commands under SLES 10.2 produce the following message when run under SLES11: mca: base: component_find: unable to open /share/apps/openmpi-intel/1.4.1/lib/openmpi/mca_btl_openib: perhaps a missing symbol