I've tried a bunch of variations on this, but I'm actually getting stymied by 
my underlying OS not supporting static linking properly.  :-\

I do see that Libtool is stripping out the bare "-static" flag that you passed
in LDFLAGS.  Yuck.  Why the -Wl,-E?  (That's ld's --export-dynamic, which only
matters for dynamically linked executables.)  Can you try "-Wl,-static" instead?
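
If Libtool really is the one eating the flag, another workaround that is
sometimes suggested is to hand Libtool its own "-all-static" flag at build time
rather than at configure time (the compiler driver itself doesn't understand
that flag, so putting it in configure's LDFLAGS would likely break configure's
link tests).  A rough sketch, assuming the executables are linked through
Libtool:

  ./configure --prefix=... --enable-static --disable-shared ...
  make LDFLAGS=-all-static
  make install

If that takes, "ldd mpirun" should then report "not a dynamic executable".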


On Jan 25, 2012, at 1:24 AM, Ilias Miroslav wrote:

> Hello again,
> 
> I need my own static "mpirun" for porting (together with the static
> executable) onto various (unknown) grid servers. In grid computing one cannot
> expect an OpenMPI-ILP64 installation on each computing element.
> 
> Jeff: I tried LDFLAGS in configure
> 
> ilias@194.160.135.47:~/bin/ompi-ilp64_full_static/openmpi-1.4.4/../configure 
> --prefix=/home/ilias/bin/ompi-ilp64_full_static -without-memory-manager 
> --without-libnuma --enable-static --disable-shared CXX=g++ CC=gcc 
> F77=gfortran FC=gfortran FFLAGS="-m64 -fdefault-integer-8 -static" 
> FCFLAGS="-m64 -fdefault-integer-8 -static" CFLAGS="-m64 -static" 
> CXXFLAGS="-m64 -static"  LDFLAGS="-static  -Wl,-E" 
> 
> but I still got a dynamic, not a static, "mpirun":
> ilias@194.160.135.47:~/bin/ompi-ilp64_full_static/bin/.ldd ./mpirun
>       linux-vdso.so.1 =>  (0x00007fff6090c000)
>       libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fd7277cf000)
>       libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x00007fd7275b7000)
>       libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007fd7273b3000)
>       libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fd727131000)
>       libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
> (0x00007fd726f15000)
>       libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fd726b90000)
>       /lib64/ld-linux-x86-64.so.2 (0x00007fd7279ef000)
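> 
> One way to see what the final link line actually receives is to relink just
> orterun and read the Libtool/gcc command it echoes (a sketch; the directory
> name assumes the 1.4.x source layout, run from inside your build tree):
> 
>   cd orte/tools/orterun
>   rm -f orterun .libs/orterun   # force a relink
>   make                          # check the echoed link line for -static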
> 
> Any help, please? The config.log is here:
> 
> https://docs.google.com/open?id=0B8qBHKNhZAipNTNkMzUxZDEtNjJmZi00YzY3LWI4MmYtY2RkZDVkMjhiOTM1
> 
> Best, Miro
> 
> On Jan 24, 2012, at 11:55 AM, Jeff Squyres <jsquy...@cisco.com> wrote:
> 
> Ilias: Have you simply tried building Open MPI with flags that force static 
> linking?  E.g., something like this:
> 
>  ./configure --enable-static --disable-shared LDFLAGS=-Wl,-static
> 
> I.e., put in LDFLAGS whatever flags your compiler/linker needs to force 
> static linking.  These LDFLAGS will be applied to all of Open MPI's 
> executables, including mpirun.
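> 
> As a quick sanity check after "make install", something like this should
> confirm whether the flags took effect ($prefix below stands for whatever
> --prefix you configured with):
> 
>   file $prefix/bin/mpirun   # should report "statically linked"
>   ldd  $prefix/bin/mpirun   # should print "not a dynamic executable"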
> 
> 
> On Jan 24, 2012, at 10:28 AM, Ralph Castain wrote:
> 
>> Good point! I'm traveling this week with limited resources, but I will try
>> to address it when I'm able.
>> 
>> Sent from my iPad
>> 
>> On Jan 24, 2012, at 7:07 AM, Reuti <re...@staff.uni-marburg.de> wrote:
>> 
>>> Am 24.01.2012 um 15:49 schrieb Ralph Castain:
>>> 
>>>> I'm a little confused. Building the MPI processes statically makes sense,
>>>> as libraries may not be available on the compute nodes. However, mpirun is
>>>> only executed in one place, usually the head node where it was built, so
>>>> there is less reason to build it purely statically.
>>>> 
>>>> Are you trying to move mpirun somewhere? Or is it the daemons that mpirun 
>>>> launches that are the real problem?
>>> 
>>> This depends: with a queuing system, the master node of a parallel job
>>> (i.e. the node where the job script, and hence mpirun, runs) may already be
>>> one of the slave nodes. My own nodes are uniform, but I have seen sites
>>> where that wasn't the case.
>>> 
>>> An option would be a special queue which always executes the job script on
>>> the head node (i.e. without generating any real load there) and uses only
>>> the non-locally granted slots for mpirun. For this it might be necessary to
>>> give the head node a high slot count in that queue, and to always request
>>> one slot on this machine in addition to the ones needed on the compute
>>> nodes.
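>>> 
>>> A rough Grid Engine sketch of that idea (the queue name, parallel-environment
>>> name and slot count are only placeholders; details depend on the site's
>>> configuration):
>>> 
>>>   # run the job script (and thus mpirun) in the head-node queue, while the
>>>   # parallel environment grants 16 further slots on the compute nodes
>>>   qsub -pe orte 17 -masterq head.q job.sh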
>>> 
>>> -- Reuti
>>> 
>>> 
>>>> Sent from my iPad
>>>> 
>>>> On Jan 24, 2012, at 5:54 AM, Ilias Miroslav <miroslav.il...@umb.sk> wrote:
>>>> 
>>>>> Dear experts,
>>>>> 
>>>>> Following http://www.open-mpi.org/faq/?category=building#static-build I
>>>>> successfully built a static Open MPI library.
>>>>> Using this library I succeeded in building a static parallel executable,
>>>>> dirac.x ("ldd dirac.x" reports: not a dynamic executable).
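>>>>> 
>>>>> For reference, a fully static application link through the Open MPI
>>>>> wrapper compilers looks roughly like this (only a sketch; flags and
>>>>> object names are illustrative):
>>>>> 
>>>>>   mpif90 -static -o dirac.x *.o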
>>>>> 
>>>>> The problem remains, however, with the mpirun (orterun) launcher.
>>>>> On the local machine, where I compiled both the static Open MPI and the
>>>>> static dirac.x, I am able to launch a parallel job with
>>>>> <OpenMPI_static>/mpirun -np 2 dirac.x
>>>>> but I cannot launch it elsewhere, because "mpirun" is dynamically linked
>>>>> and thus machine dependent:
>>>>> 
>>>>> ldd mpirun:
>>>>>    linux-vdso.so.1 =>  (0x00007fff13792000)
>>>>>    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f40f8cab000)
>>>>>    libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x00007f40f8a93000)
>>>>>    libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007f40f888f000)
>>>>>    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f40f860d000)
>>>>>    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
>>>>> (0x00007f40f83f1000)
>>>>>    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f40f806c000)
>>>>>    /lib64/ld-linux-x86-64.so.2 (0x00007f40f8ecb000)
>>>>> 
>>>>> How do I build a "pure" static mpirun launcher that is usable (in my case
>>>>> together with the static dirac.x) on other computers as well?
>>>>> 
>>>>> Thanks, Miro
>>>>> 
>>>>> --
>>>>> RNDr. Miroslav Iliaš, PhD.
>>>>> 
>>>>> Katedra chémie
>>>>> Fakulta prírodných vied
>>>>> Univerzita Mateja Bela
>>>>> Tajovského 40
>>>>> 97400 Banská Bystrica
>>>>> tel: +421 48 446 7351
>>>>> email : miroslav.il...@umb.sk
>>>>> 
>>>>> Department of Chemistry
>>>>> Faculty of Natural Sciences
>>>>> Matej Bel University
>>>>> Tajovského 40
>>>>> 97400 Banska Bystrica
>>>>> Slovakia
>>>>> tel: +421 48 446 7351
>>>>> email :  miroslav.il...@umb.sk
>>>>> 


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/

