David,
I did try that after I sent the original mail, but the -np 4 flag doesn't fix
the problem; the program still hangs. I've also double-checked the iptables
rules for the image and for the master node, and all ports are set to accept.
Cheers,
Ethan
--
Dr. Ethan Deneault
Assistant Professor of
I don't know if this will help, but try
mpirun --machinefile testfile -np 4 ./test.out
for running 4 processes
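For reference, a machinefile is just a plain-text list of hosts, one per line, optionally with a slot count. The node names below are made up for illustration:

```
node01 slots=2
node02 slots=2
```

With a file like this, mpirun distributes the 4 processes across the listed slots.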
On Mon, Sep 20, 2010 at 3:00 PM, Ethan Deneault wrote:
> All,
>
> I am running Scientific Linux 5.5, with OpenMPI 1.4 installed into the
> /usr/lib/openmpi/1.4-gcc/ directory. I know th
Hi,
I want to know whether an implementation exists that permits running a single host
process on the master node of the cluster, which would then spawn one process per
thread defined by -np X on the hosts specified in the host list. The host would then
act as a synchronized sender/collector of the work done.
It would
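Something close to this exists in the MPI-2 dynamic process management API: a master started as a singleton can call MPI_Comm_spawn to launch workers and then collect results over the resulting intercommunicator. A minimal, untested sketch; the worker executable name and the one-int-per-worker protocol are assumptions:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm workers;      /* intercommunicator to the spawned workers */
    int errcodes[4];

    MPI_Init(&argc, &argv);

    /* Spawn 4 worker processes; "./worker" is a hypothetical executable
       that mpirun places on hosts from the host list. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &workers, errcodes);

    /* Act as the collector: receive one int from each worker over the
       intercommunicator (ranks here index the remote group). */
    for (int i = 0; i < 4; i++) {
        int result;
        MPI_Recv(&result, 1, MPI_INT, i, 0, workers, MPI_STATUS_IGNORE);
        printf("worker %d returned %d\n", i, result);
    }

    MPI_Comm_disconnect(&workers);
    MPI_Finalize();
    return 0;
}
```

The master never computes itself; it only synchronizes and gathers, which matches the sender/collector pattern described above.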
All,
I am running Scientific Linux 5.5, with OpenMPI 1.4 installed into the /usr/lib/openmpi/1.4-gcc/
directory. I know this is typically /opt/openmpi, but Red Hat does things differently. I have my
PATH and LD_LIBRARY_PATH set correctly; the test program does compile and run.
The clu
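For reference, the non-standard Red Hat layout usually just needs these two lines in the shell profile. The paths are taken from the message above; adjust the suffixes if your install differs:

```shell
# prepend the OpenMPI binaries (mpicc, mpirun) to the search path
export PATH=/usr/lib/openmpi/1.4-gcc/bin:$PATH
# make the OpenMPI shared libraries visible to the dynamic linker
export LD_LIBRARY_PATH=/usr/lib/openmpi/1.4-gcc/lib:$LD_LIBRARY_PATH
```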
Dear OpenMPI,
Has there been any consideration of porting OpenMPI to the ARM
processor?
Plans are afoot to launch 7 ARM processors on a "Stage Coach" card in
a 3U
CubeSat. NASA's NMP (New Millennium Program) ST-8 (Space Technology 8)
DM (Dependable Multiprocessor) uses OpenMPI as the foun
All,
I was not expecting things to work, and indeed find that codes compiled using
OpenMPI 1.4.1 commands under SLES 10.2 produce the following message
when run under SLES 11:
mca: base: component_find: unable to open
/share/apps/openmpi-intel/1.4.1/lib/openmpi/mca_btl_openib: perhaps a missing
symbol
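A message like this usually means the openib BTL component was built against libraries (OFED, Intel runtime) that are missing or different under SLES 11. Two quick checks; the .so extension is an assumption (the message is truncated), and the mpirun line is only a sketch:

```shell
# see which shared libraries the component fails to resolve
ldd /share/apps/openmpi-intel/1.4.1/lib/openmpi/mca_btl_openib.so | grep "not found"

# confirm by running with the openib BTL excluded
mpirun --mca btl ^openib -np 2 ./a.out
```

If the job runs cleanly with openib excluded, the component itself (not the application code) is what needs rebuilding on the new system.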