On Mar 24, 2011, at 12:45 PM, ya...@adina.com wrote:

> Thanks for your information. For my Open MPI installation, the 
> executables such as mpirun and orted do depend on those dynamic 
> Intel libraries; when I run ldd on mpirun, several of them show 
> up. I am trying to get these Open MPI executables statically 
> linked against the Intel libraries, but I have made no progress, 
> even when I use "--with-gnu-ld" and list the specific static 
> Intel libraries in LIBS when configuring the Open MPI 1.4.3 
> installation. It seems there is something in the Open MPI 1.4.3 
> build process that I do not have control over, or I have simply 
> missed something. I will try different things and will report 
> back here once I have a definitive conclusion. In the meantime, 
> any hints on how to statically link the Open MPI executables 
> against the Intel libraries when building with the Intel 
> compilers are very welcome. Thanks!

I can't speak to this one, but perhaps Jeff can/will.

> 
> As for the issue that environment variables set in a script do 
> not propagate to the remote slave nodes: I use rsh connections 
> for simplicity. If I set PATH and LD_LIBRARY_PATH in ~/.bashrc 
> (which is shared by all nodes, master and slaves), my MPI 
> application does work as expected, and this confirms Ralph's 
> suggestions. The thing is that I want to avoid setting the 
> environment variables in .bashrc or .profile and instead set 
> them in the script, and have them propagate to the slave nodes 
> when I run mpirun, as I can do with MPICH. I also tried 
> prefixing mpirun with its full path when launching, as suggested 
> by Jeff, but that does not work either. Any hints on how to 
> solve this issue?

You can use the -x option to mpirun:

mpirun -x PATH -x LD_LIBRARY_PATH ...

will pick up those environment variables from the local environment and forward them to the remote nodes. See "mpirun -h" for more info.
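
For example (a minimal sketch; the process count and application path
below are just placeholders for your own):

    # forward the launching shell's current values to every node
    mpirun -x PATH -x LD_LIBRARY_PATH -np 8 /path/to/package/myapp

    # -x also accepts an explicit value, e.g. to point the remote
    # nodes at the lib/ directory on your NFS share
    mpirun -x LD_LIBRARY_PATH=/path/to/lib -np 8 /path/to/package/myapp

This way nothing has to live in ~/.bashrc; the variables travel with
the mpirun command your tools/ script issues.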

> 
> Thanks,
> Yiguang
> 
> 
> On 23 Mar 2011, at 12:00, users-requ...@open-mpi.org wrote:
> 
>> On Mar 21, 2011, at 8:21 AM, ya...@adina.com wrote:
>> 
>>> The issue is that I am trying to build Open MPI 1.4.3 with the
>>> Intel compiler libraries statically linked into it, so that when
>>> we run mpirun/orterun it does not need to dynamically load any
>>> Intel libraries. But what I get is that mpirun always asks for
>>> some Intel library (e.g. libsvml.so) if I do not put the Intel
>>> library path on the library search path ($LD_LIBRARY_PATH). I
>>> checked the Open MPI user archive; it seems one kind user
>>> mentioned using "-i-static" (in my case) or "-static-intel" in
>>> LDFLAGS. That is what I did, but it does not seem to work, and I
>>> could not find confirmation in the archive that it works for
>>> anyone else. Could anyone help me with this? Thanks!
>> 
>> Is it Open MPI's executables that require the Intel shared libraries
>> at run time, or your application?  Keep in mind the difference:
>> 
>> 1. Compile/link flags that you specify to OMPI's configure script are
>> used to compile/link Open MPI itself (including executables such as
>> mpirun).
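
Just to sketch what #1 means in practice (I haven't verified the
Intel-specific flags myself; the install prefix below is only an
example, and "-static-intel" / "-i-static" are the Intel link flags
you mentioned, depending on compiler version):

    # build Open MPI itself with the Intel compilers, asking the Intel
    # driver to link its runtime libraries statically into mpirun/orted
    ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
        LDFLAGS="-static-intel" --prefix=/opt/openmpi-1.4.3
    make all install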
>> 
>> 2. mpicc (and friends) use a similar-but-different set of flags to
>> compile and link MPI applications.  Specifically, we try to use the
>> minimal set of flags necessary to compile/link, and let the user
>> choose to add more flags if they want to.  See this FAQ entry for more
>> details:
>> 
>>    http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0
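
And on the application side (per #2), extra flags just go on the
wrapper command line and are passed through to the underlying
compiler, e.g. (application name is a placeholder, assuming the
wrappers were built around icc):

    # the wrapper adds only the minimal MPI flags; user flags such as
    # -static-intel are passed straight through to icc at link time
    mpicc -o myapp myapp.c -static-intel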
>> 
>>> (2) After compiling and linking our in-house codes with Open MPI
>>> 1.4.3, we want to put together a minimal set of executables for
>>> our codes, along with some from the Open MPI 1.4.3 installation,
>>> without depending on any external settings such as environment
>>> variables.
>>> 
>>> I organize my directories as follows:
>>> 
>>> parent/
>>>   |-- package/
>>>   |-- bin/
>>>   |-- lib/
>>>   `-- tools/
>>> 
>>> The package/ directory holds the executables from our codes. bin/
>>> has mpirun and orted, copied from the Open MPI installation. lib/
>>> contains the Open MPI libraries and the Intel libraries. tools/
>>> contains some C shell scripts that launch MPI jobs using the
>>> mpirun in bin/.
>> 
>> FWIW, you can use the following options to OMPI's configure script to
>> eliminate all the OMPI plugins (i.e., all of that code is located in
>> libmpi and friends, vs. being standalone DSOs):
>> 
>>    --disable-shared --enable-static
>> 
>> This will make libmpi.a (vs. libmpi.so and a bunch of plugins) which
>> your application can statically link against.  But it does make a
>> larger executable.  Alternatively, you can:
>> 
>>    --disable-dlopen
>> 
>> (instead of disable-shared/enable-static) which will make a giant
>> libmpi.so (vs. libmpi.so and all the plugin DSOs).  So your MPI app
>> will still dynamically link against libmpi, but all the plugins will
>> be physically located in libmpi.so vs. being dlopen'ed at run time.
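
As a concrete sketch of the two variants (the install prefix below is
just a placeholder):

    # variant 1: static libraries only -- apps link against libmpi.a
    ./configure --prefix=/opt/openmpi-1.4.3 --disable-shared --enable-static

    # variant 2: one big libmpi.so with all the plugins folded in,
    # nothing dlopen'ed at run time
    ./configure --prefix=/opt/openmpi-1.4.3 --disable-dlopen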
>> 
>>> The parent/ directory is on an NFS share mounted by all nodes of
>>> the cluster. In ~/.bashrc (also shared by all nodes), I clear PATH
>>> and LD_LIBRARY_PATH so that they do not point to any directory of
>>> the Open MPI 1.4.3 installation.
>>> 
>>> First, if I add the bin/ directory above to PATH and lib/ to
>>> LD_LIBRARY_PATH in ~/.bashrc, our parallel codes (started by the C
>>> shell script in tools/) run AS EXPECTED without any problem, so I
>>> know the rest of the setup is correct.
>>> 
>>> Then, to avoid modifying ~/.bashrc or ~/.profile, I instead set
>>> bin/ on PATH and lib/ on LD_LIBRARY_PATH in the C shell script
>>> under the tools/ directory, as:
>>> 
>>> setenv PATH /path/to/bin:$PATH
>>> setenv LD_LIBRARY_PATH /path/to/lib:$LD_LIBRARY_PATH
>> 
>> Instead, you might want to try:
>> 
>>   /path/to/mpirun ...
>> 
>> which will do the same thing as mpirun's --prefix option (see
>> mpirun(1) for details here), and/or use the
>> --enable-mpi-prefix-by-default configure option.  This option, as is
>> probably pretty obvious :-), makes mpirun behave as if the --prefix
>> option was specified on the command line, with an argument equal to
>> the $prefix from configure.
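
Concretely (the paths and process count below are placeholders for
wherever you installed or copied Open MPI to):

    # invoking mpirun by its absolute path implies --prefix
    /path/to/openmpi-install/bin/mpirun -np 8 ./myapp

    # or spell the prefix out explicitly
    mpirun --prefix /path/to/openmpi-install -np 8 ./myapp

Note that --prefix only takes care of Open MPI's own bin/ and lib/ on
the remote nodes; for your application's Intel libraries you would
still combine it with -x LD_LIBRARY_PATH as described above.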
>> 
> 
> 
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
