Re: [OMPI users] getting opal_init:startup:internal-failure

2013-04-28 Thread Ralph Castain
If you configure/build OMPI on the remote node using the same configure options you used on host1, does the problem go away?
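Ralph's suggestion can be sketched as a command sequence (the tarball name and the /myname prefix come from this thread; how the tarball gets to the remote node is up to you):

```shell
# On the remote node (host2), build Open MPI from the same tarball,
# using exactly the configure options that were used on host1.
tar xzf openmpi-1.6.4.tar.gz
cd openmpi-1.6.4
./configure --prefix=/myname   # same prefix as on host1
make all install
```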

Re: [OMPI users] getting opal_init:startup:internal-failure

2013-04-28 Thread E.O.
Thank you Ralph! I ran it with the "-prefix" option but I got this:

[root@host1 tmp]# mpirun -prefix /myname -np 4 -host host2 ./hello.out
--
mpirun was unable to launch the specified application as it could not access or execute …

Re: [OMPI users] getting opal_init:startup:internal-failure

2013-04-28 Thread Ralph Castain
The problem is likely that your path variables aren't being set properly on the remote machine when mpirun launches the remote daemon. You might check that your default shell rc file is also setting those values correctly. Alternatively, modify your mpirun command line a bit by adding mpirun …
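Both remedies Ralph mentions can be sketched as follows (the /myname prefix and hello.out come from this thread; the rc file name depends on the remote account's shell, so ~/.bashrc is an assumption):

```shell
# Option 1: in the remote account's shell rc file (e.g. ~/.bashrc),
# set the paths so they are present for non-interactive logins too,
# which is how mpirun starts the remote daemon.
export PATH=/myname/bin:$PATH
export LD_LIBRARY_PATH=/myname/lib:$LD_LIBRARY_PATH

# Option 2: tell mpirun where Open MPI is installed on the remote
# nodes, so it sets the daemon's environment itself.
mpirun --prefix /myname -np 4 -host host2 ./hello.out
```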

[OMPI users] getting opal_init:startup:internal-failure

2013-04-28 Thread E.O.
Hello. I have five Linux machines (one is Red Hat and the others are BusyBox). I downloaded openmpi-1.6.4.tar.gz onto my main Red Hat machine and configured/compiled it successfully: ./configure --prefix=/myname. I installed it into the /myname directory successfully. I am able to run a simple hello.c on my …
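The build-and-run steps described above can be sketched like this (hello.c is the poster's test program; the use of the mpicc wrapper and the process count are illustrative assumptions):

```shell
# Build and install Open MPI 1.6.4 under /myname.
./configure --prefix=/myname
make all install

# Compile the test program with the wrapper compiler,
# then run it locally with 4 processes.
/myname/bin/mpicc -o hello.out hello.c
/myname/bin/mpirun -np 4 ./hello.out
```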

Re: [OMPI users] Strange "All-to-All" behavior

2013-04-28 Thread Sebastian Rettenberger
Hi, do you have the problem only with Open MPI, or also with other MPI libraries (e.g. MPICH2)? Otherwise, you could also try whether you can build the all-to-all out of collectives, e.g. Scatter or Gather. Best regards, Sebastian > Hi, > > I have encountered really bad performance when all …