Hi,
First, consider updating to a newer OpenMPI.
Second, look at your environment on the box where you start OpenMPI (where
you run mpirun ...).
Type
ulimit -n
to see how many file descriptors your environment allows (ulimit -a shows
all limits). Note that every process on older versions of OpenMPI (prior
to 1
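The limit checks mentioned above can be run directly in a shell; the 4096 below is an illustrative value of mine, not a recommendation from this thread:

```shell
ulimit -n        # current soft limit on open file descriptors
ulimit -Hn       # hard limit; the soft limit can be raised up to this
ulimit -n 4096   # raise the soft limit for this shell (illustrative value)
```

The soft limit can be raised without privileges only up to the hard limit; raising the hard limit itself usually requires root (e.g. via /etc/security/limits.conf).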
I just installed Open MPI on our cluster and whenever I try to execute
a process on more than one node, I get this error:
$ mpirun -hostfile $HOSTFILE -n 1 hello_c
orted: error while loading shared libraries: libimf.so: cannot open
shared object file: No such file or directory
... followed by
On Tue, Sep 9, 2008 at 9:52 AM, Christopher Tanner wrote:
> I just installed Open MPI on our cluster and whenever I try to execute a
> process on more than one node, I get this error:
>
> $ mpirun -hostfile $HOSTFILE -n 1 hello_c
> orted: error while loading shared libraries: libimf.so: cannot ope
Hi Jeff/Paul,
Thanks a lot for your replies.
I am looking into upgrading MPI to a newer version. Since a few
custom-built libraries in my main parallel application recommend
1.1.2, I first need to check compatibility with the newer version
before I can upgrade.
Jeremy -
Thanks for the help - this bit of advice came up quite a bit through
internet searches. However, I made sure that the LD_LIBRARY_PATH was
set and correct on all nodes -- and the error persists.
Any other possible solutions? Thanks.
---
Chris
You might want to double check this; it's an easy thing to test
incorrectly.
What you want to check is that the LD_LIBRARY_PATH is set properly for
*non-interactive logins* (I assume you are using the rsh/ssh launcher
for Open MPI, vs. using a resource manager such as SLURM, Torque,
etc.)
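One way to test what orted will actually see is to compare the login-shell and non-interactive environments on a remote node. The hostname below is a placeholder; substitute one from your hostfile:

```shell
# What a login shell sees (sources profile/login files):
ssh node01 'bash -lc "echo \$LD_LIBRARY_PATH"'
# What a non-interactive command sees -- this is how mpirun launches orted;
# ~/.bashrc is often skipped or exits early for non-interactive shells:
ssh node01 'echo $LD_LIBRARY_PATH'
# libimf.so is part of the Intel compiler runtime; check whether the
# loader can resolve all of orted's dependencies on that node:
ssh node01 'ldd $(which orted) | grep "not found"'
```

If the two echo commands print different values, the launcher environment is the one that matters for the libimf.so error.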
Jeremy -
I think I've found the problem / solution. With Ubuntu, there's a
program called 'ldconfig' that updates the dynamic linker run-time
bindings. Since Open MPI was compiled to use dynamic linking, these
have to be updated. Thus, these commands have to be run on all of the
nodes:
$
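The commands themselves were cut off above; a likely reconstruction, assuming Open MPI's libraries landed under /usr/local/lib (both the path and the conf-file name here are guesses, not from the original message):

```shell
# Register the library directory with the dynamic linker, then rebuild
# its cache -- this must be repeated on every node.
echo '/usr/local/lib' | sudo tee /etc/ld.so.conf.d/openmpi.conf
sudo ldconfig
```

Note that directories listed in ld.so.conf are searched even when LD_LIBRARY_PATH is not propagated to non-interactive shells, which is why this fixes the orted error.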
On clusters where I'm using the Intel compilers and OpenMPI, I set up
the compiler directory (usually /opt/intel) as an NFS export. The
computation nodes then mount that export. Next, I add the following
lines to the ld.so.conf file and distribute it to the computation
nodes:
/opt/intel/cce/version_n
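As a sketch, the setup described above might look like this on a compute node; the hostname and the version directory (truncated above) are illustrative, and the lib subdirectory is my assumption:

```shell
# On the head node, a line like this in /etc/exports shares the tree:
#   /opt/intel  *(ro,sync,no_subtree_check)
# On each compute node, mount it and register the runtime libraries:
sudo mount head-node:/opt/intel /opt/intel
echo '/opt/intel/cce/version_n/lib' | sudo tee -a /etc/ld.so.conf
sudo ldconfig
```

This way libimf.so resolves identically on every node without touching per-user shell startup files.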
On Sep 9, 2008, at 3:05 PM, Christopher Tanner wrote:
> I think I've found the problem / solution. With Ubuntu, there's a
> program called 'ldconfig' that updates the dynamic linker run-time
> bindings. Since Open MPI was compiled to use dynamic linking, these
> have to be updated. Thus, these comm
mpirun under OpenMPI is not picking up the limit settings from the user
environment. Is there a way to do this, short of wrapping my executable
in a script where my limits are set and then invoking mpirun on that script?
Thanks.
-Hamid
There are several factors that can come into play here. See this FAQ
entry about registered memory limits (the same concepts apply to the
other limits):
http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages-more
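The wrapper approach Hamid mentions can be sketched like this; the script name and the limit values are illustrative, not from the thread:

```shell
#!/bin/sh
# with_limits.sh -- raise limits in this process, then exec the real
# program so every MPI rank inherits them.
ulimit -l unlimited 2>/dev/null   # locked memory (registered memory for InfiniBand)
ulimit -n 4096 2>/dev/null        # open file descriptors
exec "$@"
```

Usage would be along the lines of: mpirun -hostfile $HOSTFILE -n 4 ./with_limits.sh ./my_app. The soft limits can only be raised up to the hard limits in effect for the daemon, so the FAQ's advice about configuring limits system-wide still applies.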
On Sep 9, 2008, at 7:04 PM, Amidu Oloso wrote:
> mpirun under O