David -
You're correct - adding --enable-static (or its platform-file equivalent,
enable_static=yes) causes components to be linked into libmpi instead of
being left as individual components. This is probably a bug, but it's what
Open MPI has done for its entire life, so it's unlikely to change. Removing the ena
Hi Jeff,
Thanks for the response. Reviewing my builds, I realized that for
1.4.2, I had configured using
contrib/platform/lanl/tlcc/optimized-nopanasas
per Ralph Castain's suggestion. That file includes both:
enable_dlopen=no
enable_shared=yes
enable_static=yes
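For reference, a platform file like that is normally handed straight to configure; a sketch of the equivalent invocations (the file path is the one quoted above, and --with-platform is Open MPI's mechanism for reading such files):

```shell
# Build using the LANL platform file mentioned above:
./configure --with-platform=contrib/platform/lanl/tlcc/optimized-nopanasas

# The three settings quoted from that file have the same effect as
# passing these flags on the configure command line directly:
./configure --disable-dlopen --enable-shared --enable-static
```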
Here is my *real* issue. I a
On 10/05/2010 10:23 AM, Storm Zhang wrote:
Sorry, I should say one more thing about the 500-proc test. I tried to run
two 500-proc jobs at the same time using SGE, and they ran fast and finished
at the same time as the single run. So I think Open MPI can handle them
separately very well.
For the bind-to-core, I tried to run mpirun --help but not
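For what it's worth, the binding option being looked for is spelled --bind-to-core in the 1.4-era mpirun command line (a sketch; the process count and executable name are placeholders):

```shell
# Bind each MPI rank to its own core (Open MPI 1.4-series syntax;
# later releases spell this "--bind-to core"):
mpirun --bind-to-core -np 500 ./my_app

# Add --report-bindings to print where each rank actually landed:
mpirun --bind-to-core --report-bindings -np 500 ./my_app
```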
It is more than likely that you compiled Open MPI with --enable-static and/or
--disable-dlopen. In this case, all of Open MPI's plugins are slurped up into
the libraries themselves (e.g., libmpi.so or libmpi.a). That's why everything
continues to work properly.
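A quick way to check which kind of build you have (a sketch; the install prefix is a placeholder you'd adjust to your installation):

```shell
PREFIX=/opt/openmpi   # hypothetical install prefix

# A dlopen-enabled build installs its plugins as individual files
# under lib/openmpi; this listing will be non-empty:
ls "$PREFIX/lib/openmpi"/mca_*.so 2>/dev/null

# With --enable-static / --disable-dlopen the plugins are slurped
# into libmpi itself; their mca_ symbols show up in the library:
nm -D "$PREFIX/lib/libmpi.so" | grep mca_ | head
```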
On Oct 4, 2010, at 6:58 PM, Da
>
> It looks to me like your remote nodes aren't finding the orted executable. I
> suspect the problem is that you need to forward the PATH and LD_LIBRARY_PATH
> to the remote nodes. Use the mpirun -x option to do so.
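A sketch of that -x usage (host names and the executable are placeholders; -x exports the named environment variable from mpirun's environment to the launched processes):

```shell
# Forward PATH and LD_LIBRARY_PATH so the remote nodes can find
# orted and Open MPI's shared libraries (hostnames are hypothetical):
mpirun -np 4 --host node1,node2 \
       -x PATH -x LD_LIBRARY_PATH \
       ./my_mpi_app
```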
Hi, problem sorted. It was actually caused by the system I currently use t