[OMPI users] Syntax error in remote rsh execution

2007-10-18 Thread Jorge Parra
Hi, when trying to execute an application that spawns to another node, I obtain the following message:

# ./mpirun --hostfile /root/hostfile -np 2 greetings
Syntax error: "(" unexpected (expecting ")")
-- Could not execute
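[Editor's note: an error of this shape usually means the /bin/sh on the remote node is rejecting the shell command Open MPI assembles to start its daemon there; minimal shells (e.g., BusyBox sh) refuse some constructs that bash accepts. That diagnosis is an assumption here, but it is cheap to check which shell the remote account actually gets ("node2" is a placeholder hostname):

$ rsh node2 'echo $0; echo $SHELL']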

[OMPI users] Merging Intracommunicators

2007-10-18 Thread Murat Knecht
Hi, I have a question regarding merging intracommunicators. Using MPI_Comm_spawn, I create child processes on designated machines, retrieving an intercommunicator each time. With MPI_Intercomm_merge it is possible to get an intracommunicator containing the master process(es) and the newly spawned child
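[Editor's note: for readers following along, here is a minimal sketch of the spawn-and-merge pattern being described. The executable name "child" and the process count are assumptions, not taken from the original post:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm, merged;
    MPI_Init(&argc, &argv);

    /* Spawn 2 child processes; this returns an intercommunicator
       whose remote group contains the children. */
    MPI_Comm_spawn("child", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &intercomm, MPI_ERRCODES_IGNORE);

    /* Merge both sides into a single intracommunicator; high=0
       orders the parent's ranks before the children's. */
    MPI_Intercomm_merge(intercomm, 0, &merged);

    MPI_Comm_free(&merged);
    MPI_Comm_free(&intercomm);
    MPI_Finalize();
    return 0;
}

The children would call MPI_Comm_get_parent to obtain their side of the intercommunicator and then MPI_Intercomm_merge with high=1 to arrive at the same intracommunicator.]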

Re: [OMPI users] Compiling OpenMPI for i386 on a x86_64

2007-10-18 Thread Gurhan
Hello,

configure:33918: gcc -DNDEBUG -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -fno-strict-aliasing -I. -c conftest.c
configure:33925: $? = 0
configure:33935: gfortran conftestf.f90 conftest.o -o conftest
/usr/bin/ld: warning: i386 architecture of input file `conftest.o' is incompatible with

Re: [OMPI users] MPI_Probe succeeds, but subsequent MPI_Recv gets stuck

2007-10-18 Thread Daniel Rozenbaum
Yes, a memory bug has been my primary focus, given the not-entirely-consistent nature of this problem; I've valgrind'ed the app a number of times, to no avail though. Will post again if anything new comes up... Thanks!

Jeff Squyres wrote:
> Yes, that's the normal progression. For some reason, OMPI

Re: [OMPI users] which alternative to OpenMPI should I choose?

2007-10-18 Thread Jeff Squyres
On Oct 18, 2007, at 9:24 AM, Marcin Skoczylas wrote:

> PML add procs failed
> --> Returned "Unreachable" (-12) instead of "Success" (0)
> --
> *** An error occurred in MPI_Init
> *** before MPI was initialized
> *** MPI_ERRORS_AR

Re: [OMPI users] MPI_Probe succeeds, but subsequent MPI_Recv gets stuck

2007-10-18 Thread Jeff Squyres
Yes, that's the normal progression. For some reason, OMPI appears to have decided that it had not yet received the message. Perhaps a memory bug in your application...? Have you run it through valgrind, or some other memory-checking debugger, perchance? On Oct 18, 2007, at 12:35 PM, Dani
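[Editor's note: for reference, one way to run each MPI process under a memory checker is to interpose valgrind between mpirun and the application (a sketch; the process count and application name are placeholders):

$ mpirun -np 2 valgrind --leak-check=full ./my_app

Expect a substantial slowdown, and note that valgrind may also flag reads inside the MPI library itself that are harmless.]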

Re: [OMPI users] Compiling OpenMPI for i386 on a x86_64

2007-10-18 Thread Jeff Squyres
Ah, I see the real problem: your C and Fortran compilers are not generating compatible code. Here's the relevant snippet from config.log:

configure:33849: checking size of Fortran 90 LOGICAL
configure:33918: gcc -DNDEBUG -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -fno-strict-aliasing -I. -
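[Editor's note: a plausible fix, assuming the log above means gcc is emitting 32-bit objects while gfortran defaults to 64-bit on this x86_64 host, is to pass -m32 to every compiler at configure time, e.g.:

$ ./configure CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32

Here FFLAGS reaches the Fortran 77 compiler and FCFLAGS the Fortran 90 compiler, per standard autoconf conventions.]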

Re: [OMPI users] Compiling OpenMPI for i386 on a x86_64

2007-10-18 Thread Jim Kusznir
Attached is the requested info. There's not much here, though... it dies pretty early in. --Jim

On 10/17/07, Jeff Squyres wrote:
> On Oct 17, 2007, at 12:35 PM, Jim Kusznir wrote:
> >
> > checking if Fortran 90 compiler supports LOGICAL... yes
> > checking size of Fortran 90 LOGICAL... ./configure

Re: [OMPI users] MPI_Probe succeeds, but subsequent MPI_Recv gets stuck

2007-10-18 Thread Daniel Rozenbaum
Unfortunately, so far I haven't even been able to reproduce it on a different cluster. Since I had no success getting to the bottom of this problem, I've been concentrating my efforts on changing the app so that there's no need to send very large messages; I might be able to find time later to

[OMPI users] which alternative to OpenMPI should I choose?

2007-10-18 Thread Marcin Skoczylas
Hello, I'm having trouble running my software after our administrators changed the cluster configuration. It was working perfectly before, but now I get these errors:

$ mpirun --hostfile ./../hostfile -np 10 ./src/smallTest
-
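[Editor's note: when Open MPI reports peers as "Unreachable" after a network reconfiguration, a common first diagnostic (a suggestion, not a confirmed fix for this cluster) is to restrict it to the TCP transport and see whether startup then succeeds:

$ mpirun --mca btl tcp,self --hostfile ./../hostfile -np 10 ./src/smallTest

If that works, the transport Open MPI was selecting by default (e.g., an InfiniBand or Myrinet BTL) is likely what can no longer reach the other nodes.]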

Re: [OMPI users] IB latency on Mellanox ConnectX hardware

2007-10-18 Thread Jeff Squyres
On Oct 18, 2007, at 7:56 AM, Gleb Natapov wrote:

Open MPI v1.2.4 (and newer) will get around 1.5us latency with 0 byte ping-pong benchmarks on Mellanox ConnectX HCAs. Prior versions of Open MPI can also achieve this low latency by setting the btl_openib_use_eager_rdma MCA parameter to 1. Actu
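[Editor's note: for reference, the MCA parameter named above can be set on the mpirun command line (the benchmark name is a placeholder; osu_latency is one of the ping-pong tests cited in this thread):

$ mpirun --mca btl_openib_use_eager_rdma 1 -np 2 ./osu_latency]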

Re: [OMPI users] IB latency on Mellanox ConnectX hardware

2007-10-18 Thread Gleb Natapov
On Wed, Oct 17, 2007 at 05:43:14PM -0400, Jeff Squyres wrote:
> Several users have noticed poor latency with Open MPI when using the
> new Mellanox ConnectX HCA hardware. Open MPI was getting about 1.9us
> latency with 0 byte ping-pong benchmarks (e.g., NetPIPE or
> osu_latency). This has b

Re: [OMPI users] Compile test programs

2007-10-18 Thread Jeff Squyres
These programs are mainly for internal testing of Open MPI, and are actually being phased out. We don't actively test them anymore, so I can't vouch for how well they'll work. A top-level "make test" used to build them.

On Oct 18, 2007, at 4:44 AM, Neeraj Chourasia wrote:
> Hi all,
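[Editor's note: for anyone trying this on an older tree, the sequence being described is roughly the following (a sketch; whether the "test" target still exists depends on the Open MPI version):

$ ./configure && make
$ make test]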

Re: [OMPI users] Compile test programs

2007-10-18 Thread Amit Kumar Saha
On 18 Oct 2007 08:44:36 -, Neeraj Chourasia wrote:
>
> Hi all,
>
> Could someone suggest how to compile the programs given in the test
> directory of the source code? There are a couple of directories within test
> which contain sample programs about the usage of the data structures used
> by

[OMPI users] Compile test programs

2007-10-18 Thread Neeraj Chourasia
Hi all, Could someone suggest how to compile the programs given in the test directory of the source code? There are a couple of directories within test which contain sample programs about the usage of the data structures used by Open MPI. I am able to compile some of the directories, as it was ha