Finally got a second development system procured and up and running. Got things working pretty well on one node, installed on the second, and things work there. This may be a similar problem to Charles Shuller's from a month ago, maybe not. Tried to launch a program from node work on node work2: C:\prog\mon\
For some reason the Open MPI content filter tossed this message, so I'm sending it again:
___
Finally got a second development system procured and up and running. Got things working pretty well on one node, installed on the second, and things work
I have a script that launches a bunch of runs on some compute nodes of
a cluster. Once I get through the queue, I query PBS for my machine
file, then I copy that to a local file 'nodes' which I use for mpiexec:
mpiexec -machinefile /home/research/cary/projects/vpall/vptests/nodes -np 6 /home/r
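For reference, a minimal sketch of that kind of launch script under PBS (assuming the standard $PBS_NODEFILE variable; the application path and rank count are placeholders):

    #!/bin/sh
    # Inside the PBS job, PBS writes the allocated hosts to $PBS_NODEFILE
    cp "$PBS_NODEFILE" nodes
    # Launch the MPI job using that machine file
    mpiexec -machinefile nodes -np 6 ./my_app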
Hi John,
mpiexec isn't needed with OMPI; in fact, if you are using the one from
OSC, it only works with MPICH.
Instead, just build OMPI with --with-tm and it will link against
TORQUE and start up and track jobs properly.
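For example, a rough sketch (the TORQUE install prefix, install prefix for OMPI, and application name are placeholders):

    ./configure --with-tm=/usr/local/torque --prefix=/opt/openmpi
    make && make install
    # Inside a TORQUE/PBS job, no machinefile is needed; mpirun
    # discovers the allocated nodes through the TM interface.
    mpirun -np 6 ./my_app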
-Joshua Bernstein
Penguin Computing
On Mar 14, 2010, at 21:35, "John
Hi,
My problem is: I installed Open MPI 1.2.9 with mvapi support, but
execution goes over Ethernet unless I use IPoIB.
I built openmpi with the command:
./configure CC=/home/pgi/linux86-64/6.2/bin/pgcc \
    CXX=/home/pgi/linux86-64/6.2/bin/pgCC \
    FC=/home/pgi/linux86-64/6.2/bin/pgf90 \
    --without
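In case it is useful, one way to check whether the InfiniBand path is really being used (a sketch; in the 1.2 series the VAPI transport is the mvapi BTL, and the application name is a placeholder) is to restrict the BTL selection explicitly:

    # Verify the mvapi BTL component was actually built
    ompi_info | grep mvapi
    # Force mvapi (plus shared memory and self); this errors out
    # instead of silently falling back to TCP over Ethernet
    mpirun --mca btl mvapi,sm,self -np 4 ./my_app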
Just to clarify: OMPI is launched with either the mpirun or mpiexec command, as
long as your path points to the correct OMPI installation. That looks to be the
case here, since the error message comes from us.
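A quick way to confirm which installation and version is being picked up (ompi_info ships with OMPI):

    which mpirun
    ompi_info | grep "Open MPI:"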
It really, really helps if you tell us what version of OMPI you are using. Some
older
I'm afraid I'm confused by your question, so please clarify: the hostnames that
have ethernet interfaces do NOT have infiniband interfaces on them? If so, why
would you expect OMPI to use infiniband when you execute on those hosts? Or do
they contain an infiniband interface in addition to the et
I'm trying to build Open MPI version 1.4.1 on a Sun Blade 1000 running Solaris
10 03/2005 using gcc 4.4.3. But the build is not working properly. Any help
appreciated. (BTW, gcc is configured to use the Sun linker and assembler - not
the GNU ones.)
$ configure
$ make
both run ok, but
$ make
Oops, I forgot to attach config.log.gz
Dr. David Kirkby wrote:
I'm trying to build Open MPI version 1.4.1 on a Sun Blade 1000 running
Solaris 10 03/2005 using gcc 4.4.3. But the build is not working
properly. Any help appreciated. (BTW, gcc is configured to use the Sun
linker and assembler - n