It depends on what you are trying to do. Is your network physically
wired such that there is no direct link between nodes 1 and 2? (i.e.,
node 1 cannot directly send to node 2, such as by opening a socket
from node 1 to node 2's IP address)
MPI topology communicators do not prohibit on p
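
For reference, a graph topology communicator just records which ranks
you consider logical neighbors. Below is a minimal, untested C sketch
for a 3-process job in which ranks 1 and 2 are deliberately not
neighbors of each other; the node count and edge lists are purely
illustrative:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    /* 3-node graph: 0-1 and 0-2 are edges, but 1 and 2 are not
       connected to each other.  index[] holds cumulative neighbor
       counts, edges[] the flattened neighbor lists. */
    int index[3] = { 2, 3, 4 };
    int edges[4] = { 1, 2, 0, 0 };
    MPI_Comm graph_comm;
    int rank, nneighbors;

    MPI_Init(&argc, &argv);
    MPI_Graph_create(MPI_COMM_WORLD, 3, index, edges, 0, &graph_comm);

    /* Ranks beyond the 3 graph nodes get MPI_COMM_NULL back. */
    if (graph_comm != MPI_COMM_NULL) {
        MPI_Comm_rank(graph_comm, &rank);
        MPI_Graph_neighbors_count(graph_comm, rank, &nneighbors);
        printf("rank %d has %d neighbor(s)\n", rank, nneighbors);
        MPI_Comm_free(&graph_comm);
    }

    MPI_Finalize();
    return 0;
}

Note that this only describes a logical neighbor structure; it is not
what determines whether the hardware can actually carry traffic
between two nodes.
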
Sorry for not replying earlier.
I'm not a ScaLAPACK expert, but a common mistake I've seen users make
is to use the mpif.h from a different MPI implementation when
compiling their Fortran programs. Can you verify that you're getting
Open MPI's mpif.h?
Also, there is a known problem tha
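
One quick way to sanity-check which installation the wrapper
compilers are really pointing at (the sketch below is C, but the same
concern applies to mpif.h on the Fortran side) is to compile and run
something trivial that reports what it was built against. The
OMPI_*_VERSION macros are defined by Open MPI's mpi.h, so if the
#ifdef branch is not taken you are compiling against some other
implementation's headers. You can also run the wrapper compiler with
--showme to see the include and library paths it passes to the
underlying compiler.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int version, subversion;

    MPI_Init(&argc, &argv);

    /* Version of the MPI standard the library implements. */
    MPI_Get_version(&version, &subversion);
    printf("MPI standard %d.%d\n", version, subversion);

#ifdef OMPI_MAJOR_VERSION
    /* These macros come from Open MPI's mpi.h. */
    printf("compiled against Open MPI %d.%d.%d\n",
           OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION);
#else
    printf("mpi.h does not look like Open MPI's header\n");
#endif

    MPI_Finalize();
    return 0;
}
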
On Jan 23, 2008, at 3:12 PM, David Gunter wrote:
Do I need to do anything special to enable multi-path routing on
InfiniBand networks? For example, are there command-line arguments to
mpiexec or the like?
It depends on what you mean by multi-path routing. For each MPI peer
pair (e.g., pro
Hi,
I compiled the molecular dynamics program DLPOLY3.09 on an AMD64
cluster running Open MPI 1.2.4 with the Portland Group compilers. The
program seems to run all right; however, each processor outputs:
ADIOI_GEN_DELETE (line 22): **io No such file or directory
It was suggested that it was an issue with
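
That ADIOI_GEN_DELETE message comes from ROMIO, the MPI-IO layer used
by Open MPI, and typically shows up when MPI_File_delete is called on
a file that does not exist yet, which some codes do on purpose before
writing fresh output. A minimal C sketch (the file name is just a
placeholder) showing how that condition can be detected from the
return code:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* "REVCON" is only a placeholder file name for this sketch. */
    int err = MPI_File_delete("REVCON", MPI_INFO_NULL);
    if (err != MPI_SUCCESS) {
        int eclass;
        MPI_Error_class(err, &eclass);
        if (eclass == MPI_ERR_NO_SUCH_FILE) {
            /* Deleting a file that is not there: usually harmless. */
            printf("file was not there; nothing to delete\n");
        } else {
            char msg[MPI_MAX_ERROR_STRING];
            int len;
            MPI_Error_string(err, msg, &len);
            printf("MPI_File_delete failed: %s\n", msg);
        }
    }

    MPI_Finalize();
    return 0;
}

If the message is coming from inside a library or application you
cannot change, it is usually harmless noise as long as the run
completes and the expected output files appear.
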
I'm unable to reproduce this problem. :( I tried both the svn head
(r17288) and the tarball that you were using (openmpi-1.3a1r17175) on
a similar system without any problems.
The error you are seeing may be caused by old connectivity
information in the session directory. You may want to make su