Hi,
When trying to execute an application that spawns processes on another
node, I obtain the following message:
# ./mpirun --hostfile /root/hostfile -np 2 greetings
Syntax error: "(" unexpected (expecting ")")
--
Could not execute
Hi,
I have a question regarding merging intracommunicators.
Using MPI_Comm_spawn, I create child processes on designated machines,
retrieving an intercommunicator each time.
With MPI_Intercomm_merge it is possible to get an intracommunicator
containing the master process(es) and the newly spawned child
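As a minimal sketch of that pattern on the parent side (the child binary
name and process count below are only placeholders; the children would call
MPI_Comm_get_parent and perform the matching merge on their side):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Comm intercomm, intracomm;
    int rank;

    MPI_Init(&argc, &argv);

    /* Spawn 2 children running "./child" (name/count are placeholders);
       the parent gets back an intercommunicator to them. */
    MPI_Comm_spawn("./child", MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0,
                   MPI_COMM_WORLD, &intercomm, MPI_ERRCODES_IGNORE);

    /* Merge parent and children into one intracommunicator.
       high = 0 places the parent's ranks first in the new ordering. */
    MPI_Intercomm_merge(intercomm, 0, &intracomm);

    MPI_Comm_rank(intracomm, &rank);
    printf("rank %d in merged intracommunicator\n", rank);

    MPI_Comm_free(&intracomm);
    MPI_Comm_free(&intercomm);
    MPI_Finalize();
    return 0;
}

Note that MPI_Intercomm_merge is collective over both sides of the
intercommunicator, so parent and children must all call it.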
Hello,
configure:33918: gcc -DNDEBUG -O2 -g -pipe -m32 -march=i386
-mtune=pentium4 -fno-strict-aliasing -I. -c conftest.c
configure:33925: $? = 0
configure:33935: gfortran conftestf.f90 conftest.o -o conftest
/usr/bin/ld: warning: i386 architecture of input file `conftest.o' is
incompatible with
Yes, a memory bug has been my primary focus, given the not entirely
consistent nature of this problem; I have run the app through valgrind a
number of times, though to no avail. Will post again if anything new comes up...
Thanks!
Jeff Squyres wrote:
Yes, that's the normal progression. For some reason, OMPI
On Oct 18, 2007, at 9:24 AM, Marcin Skoczylas wrote:
PML add procs failed
--> Returned "Unreachable" (-12) instead of "Success" (0)
--
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_AR
Yes, that's the normal progression. For some reason, OMPI appears to
have decided that it had not yet received the message. Perhaps a
memory bug in your application...? Have you run it through valgrind,
or some other memory-checking debugger, perchance?
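For example (just a sketch; "./your_app" stands in for the actual binary
and launch arguments), every rank can be run under valgrind like this:

$ mpirun -np 2 valgrind --leak-check=full ./your_app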
On Oct 18, 2007, at 12:35 PM, Dani
Ah, I see the real problem: your C and Fortran compilers are not
generating compatible code. Here's the relevant snippet from config.log:
configure:33849: checking size of Fortran 90 LOGICAL
configure:33918: gcc -DNDEBUG -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -fno-strict-aliasing -I. -
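As a rough sketch of the kind of fix this usually needs (the exact
configure variables and the -m32/-m64 choice depend on your toolchain and
are only illustrative here), force both compilers to target the same
architecture when re-running configure:

./configure CC=gcc F77=gfortran FC=gfortran CFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32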
Attached is the requested info. There's not much here, though...it
dies pretty early on.
--Jim
On 10/17/07, Jeff Squyres wrote:
> On Oct 17, 2007, at 12:35 PM, Jim Kusznir wrote:
>
> > checking if Fortran 90 compiler supports LOGICAL... yes
> > checking size of Fortran 90 LOGICAL... ./configure
Unfortunately, so far I haven't even been able to reproduce it on a
different cluster. Since I had no success getting to the bottom of this
problem, I've been concentrating my efforts on changing the app so that
there's no need to send very large messages; I might be able to find
time later to
Hello,
I'm having trouble running my software after our administrators changed
the cluster configuration. It was working perfectly before; however, now
I get these errors:
$ mpirun --hostfile ./../hostfile -np 10 ./src/smallTest
-
On Oct 18, 2007, at 7:56 AM, Gleb Natapov wrote:
Open MPI v1.2.4 (and newer) will get around 1.5us latency with 0 byte
ping-pong benchmarks on Mellanox ConnectX HCAs. Prior versions of
Open MPI can also achieve this low latency by setting the
btl_openib_use_eager_rdma MCA parameter to 1.
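For example, one way to set that parameter for a single run (the benchmark
path is just a placeholder) is:

$ mpirun --mca btl_openib_use_eager_rdma 1 -np 2 ./osu_latency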
Actu
On Wed, Oct 17, 2007 at 05:43:14PM -0400, Jeff Squyres wrote:
> Several users have noticed poor latency with Open MPI when using the
> new Mellanox ConnectX HCA hardware. Open MPI was getting about 1.9us
> latency with 0 byte ping-pong benchmarks (e.g., NetPIPE or
> osu_latency). This has b
These programs are mainly for internal testing of Open MPI, and are
actually being phased out. We don't actively test them anymore, so I
can't vouch for how well they'll work.
A top-level "make test" used to make them.
On Oct 18, 2007, at 4:44 AM, Neeraj Chourasia wrote:
Hi all,
On 18 Oct 2007 08:44:36 -, Neeraj Chourasia
wrote:
>
> Hi all,
>
> Could someone suggest how to compile the programs given in the test
> directory of the source code? There are a couple of directories within test
> which contain sample programs about the usage of the data structures being used
> by
Hi all, Could someone suggest how to compile the programs
given in the test directory of the source code? There are a couple of directories
within test which contain sample programs about the usage of the data structures
being used by Open MPI. I am able to compile some of the directories at it was
ha