[OMPI users] Problem with linking on OS X
Hi,

I spent some time today trying to install OpenMPI 1.3.3 on OS X 10.5.8. I need to use threading and asynchronous progress, so the pre-installed OpenMPI is not sufficient. Anyhow - whatever I did (and I tried many things) - my application gets linked against the default /usr/lib/ OpenMPI rather than against the /opt/openmpi/lib version. I installed the software following these instructions: http://www.open-mpi.org/faq/?category=osx#osx-bundled-ompi and, when it didn't work properly, I tried:

1. Using DYLD_LIBRARY_PATH
2. Passing ./configure --with-wrapper-ldflags="-L/opt/openmpi/lib"
3. Passing ./configure --with-wrapper-ldflags="-rpath /opt/openmpi/lib"
4. Hand compilation with cc -L/opt/openmpi/lib -lmpi

2 and 3 did not work (ld error=22). With 1 and 2 my code still gets linked with /usr/lib/libmpi...

Note that the /opt/openmpi/bin path is set correctly and ompi_info outputs the right info.

Any hints will be appreciated.

Tomek
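[In case it helps anyone comparing notes, this is the kind of build-and-environment sequence the FAQ describes, as a rough sketch only: the prefix, flag names, and shell syntax here are mine, not quoted from the FAQ, and ./configure --help is authoritative for the flag names in the 1.3 series.]

  $ ./configure --prefix=/opt/openmpi --enable-mpi-threads --enable-progress-threads
  $ make all install

  # make sure the new wrappers and libraries are picked up ahead of /usr
  $ export PATH=/opt/openmpi/bin:$PATH
  $ export DYLD_LIBRARY_PATH=/opt/openmpi/lib:$DYLD_LIBRARY_PATH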
Re: [OMPI users] Problem with linking on OS X
Hi Jody,

> What is the result of "which mpicc" (or whatever you are using for your
> compiling/linking)? I'm pretty sure that's where the library paths get set,
> and if you are calling /usr/bin/mpicc you will get the wrong library paths
> in the executable.

Here you are:

$ which mpicc
/opt/openmpi/bin/mpicc

While:

$ otool -L solfec-mpi
solfec-mpi:
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 111.1.4)
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib (compatibility version 1.0.0, current version 218.0.0)
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib (compatibility version 1.0.0, current version 218.0.0)
	/System/Library/Frameworks/Python.framework/Versions/2.5/Python (compatibility version 2.5.0, current version 2.5.1)
	/usr//lib/libmpi.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr//lib/libopen-rte.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr//lib/libopen-pal.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libutil.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)

>> 4. Hand compilation with cc -L/opt/openmpi/lib -lmpi
> Did 4 work?

No, it didn't.

Thanks
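[Another thing worth checking: the Open MPI wrapper compilers can print the link line they would use, so you can see which -L path the /opt wrapper actually emits. The output below is illustrative only, not taken from this machine.]

  $ /opt/openmpi/bin/mpicc --showme:link
  -L/opt/openmpi/lib -lmpi -lopen-rte -lopen-pal ...

Comparing that with /usr/bin/mpicc --showme:link should show whether the wrong -L is coming from the wrapper itself or from somewhere else in the build.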
Re: [OMPI users] Problem with linking on OS X
OK - I have fixed it by including -L/opt/openmpi/lib at the very beginning of the link line:

  mpicc ... -L/opt/openmpi/lib -o app.exe the rest ...

But something is wrong with dyld anyhow.

On 19 Aug 2009, at 21:04, Jody Klymak wrote:

> Hi Tomek,
>
> I'm using 10.5.7, and just went through a painful process that we thought
> was library related (but it wasn't), so I'll give my less-than-learned
> response, and if you still have difficulties hopefully others will chime in:
>
> What is the result of "which mpicc" (or whatever you are using for your
> compiling/linking)? I'm pretty sure that's where the library paths get set,
> and if you are calling /usr/bin/mpicc you will get the wrong library paths
> in the executable.
>
> On Aug 19, 2009, at 10:57 AM, tomek wrote:
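[For the dyld side, two checks that may help pin down which copy is used: otool shows what the binary was linked against, and DYLD_PRINT_LIBRARIES (a standard dyld environment variable) shows what actually gets loaded at startup. The executable name here is the app.exe from the link line above.]

  # what the binary was linked against
  $ otool -L app.exe | grep libmpi

  # what dyld really loads when the program starts
  $ DYLD_PRINT_LIBRARIES=1 DYLD_LIBRARY_PATH=/opt/openmpi/lib ./app.exe

Since the /usr/lib and /opt/openmpi/lib copies share the same library version numbers, the one recorded at link time and the one loaded at run time can easily differ, which would explain the dyld confusion.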
[OMPI users] OpenMPI extremely slow with progress threads
Hi,

Now that I have compiled my code with OpenMPI 1.3.3, here is a new problem: when asynchronous progress is enabled, even the simplest test problems run extremely slowly. Is this a common issue?

Tomek
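[For anyone wanting to reproduce this kind of slowdown, here is a minimal ping-pong timing sketch - not code from this thread - that can be built once against the default install and once against the --enable-progress-threads build and compared. Run with at least two ranks, e.g. mpirun -np 2 ./pingpong.]

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, i, iters = 1000;
    char buf[1024] = {0};
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {                /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, (int)sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, (int)sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {         /* rank 1 echoes the message back */
            MPI_Recv(buf, (int)sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, (int)sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("average round trip: %g us over %d iterations\n",
               1.0e6 * (t1 - t0) / iters, iters);

    MPI_Finalize();
    return 0;
}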
[OMPI users] Blocking communication in a thread better than asynchronous progress?
Hi,

Since I discovered that asynchronous progress does not work too well, here is my question: is doing blocking communication in a separate thread better than asynchronous progress? (At least as a workaround until the proper implementation gets improved.) Of course I will test it, but perhaps somebody has some experience already.

Tomek
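[This is the kind of workaround being asked about, as a rough sketch rather than code from the thread. It assumes the MPI library actually provides MPI_THREAD_MULTIPLE - which has to be checked at MPI_Init_thread - and that some other rank posts a matching send with tag 42. Compile with mpicc plus -pthread where the platform needs it.]

#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

/* Helper thread: blocks in MPI_Recv while the main thread keeps computing. */
static void *recv_thread(void *arg)
{
    double *data = (double *)arg;
    MPI_Recv(data, 1024, MPI_DOUBLE, MPI_ANY_SOURCE, 42,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided;
    double data[1024];
    pthread_t tid;

    /* Blocking MPI calls from more than one thread need MPI_THREAD_MULTIPLE. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not provided (got %d)\n", provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    pthread_create(&tid, NULL, recv_thread, data);

    /* ... computation overlapping the (blocking) receive would go here ... */

    pthread_join(tid, NULL);   /* the message has arrived once this returns */

    MPI_Finalize();
    return 0;
}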
Re: [OMPI users] OpenMPI extremely slow with progress threads
Hi Jeff,

I am using OMPI on a MacBook Pro (for the moment).

> What's "extremely slowly", and what does your test program do?

For example, the test programs of Zoltan (a load-balancing library from Sandia) never finish, while normally they take a fraction of a second.

> By "asynchronous progress", do you mean that you used the
> --enable-progress-threads option to OMPI's configure, or that you are
> using non-blocking MPI function calls?

I meant using --enable-progress-threads. When it is disabled, doesn't that mean that blocking and non-blocking communication are basically the same (and blocking)? (At least on a gigabit ethernet TCP/IP based network.)

> I'd say that the progress threads stuff in OMPI is immature at best. At
> worst, it may crash. It's likely very untested.

Yes, I could see that myself.

> The non-blocking function calls should work just as well as the blocking
> function calls -- depending on your application, hardware, communication
> patterns, etc., you can get significant speedup by using the non-blocking
> communication calls.

I am not very knowledgeable in terms of networking, but is gigabit ethernet/TCP/IP capable of asynchronous comm?

> FWIW, some types of networks effectively have asynchronous progress anyway
> (which is one of the reasons we haven't done too much on the OMPI software
> side of enabling async. progress). If your network has hardware (or
> software) offload of message passing, then you might be getting it "for
> free" by OMPI's normal operating modes anyway. Note that asynchronous
> progress is typically most useful when sending large messages.

I will need to learn more and see which of the available clusters offer good support for asynchronous progress, which is needed in my application.
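[For reference, this is the kind of non-blocking overlap being described above, as a minimal sketch; the buffer size, tag, and the compute placeholder are made up for illustration.]

#include <mpi.h>

#define N 100000

static double sendbuf[N], recvbuf[N];

int main(int argc, char **argv)
{
    int rank, size, peer;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    peer = (rank + 1) % size;               /* send to the next rank in a ring */

    /* Post the communication first ... */
    MPI_Irecv(recvbuf, N, MPI_DOUBLE, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... then do useful work here.  Whether the transfer really proceeds in
       the background depends on the interconnect, as discussed above. */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    MPI_Finalize();
    return 0;
}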
Re: [OMPI users] Blocking communication in a thread better than asynchronous progress?
>> Is doing blocking communication in a separate thread better than
>> asynchronous progress? (At least as a workaround until the proper
>> implementation gets improved.)
>
> At the moment, yes. OMPI's asynchronous progress is "loosely tested" (at
> best). OMPI's threading support is somewhat stable for some devices (e.g.,
> not OpenFabrics-based networks), but it's still somewhat new, so feedback
> would be welcome here.

I quickly modified my asynchronous code (Isend - Recv) to have a pthread doing Irecv - Rsend for point-to-point collective communication (which normally works fine for me). But still - on my wee laptop - the thread / blocking comm. combination is quite slow. I will play around with it to see whether it can be improved. Of course, a laptop is not a good device for very conclusive testing, but if things are too slow here, I wouldn't expect them to work great elsewhere.

I wonder whether the OMPI 1.3 on OS X Leopard will have MPI threads enabled (the 1.2.8 version on Darwin doesn't)...
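[One quick way to answer that last question for any given install is to ask ompi_info; the exact wording of the output differs between versions, so the line below is illustrative only.]

  $ ompi_info | grep -i thread
    Thread support: posix (mpi: no, progress: no)

From inside a program, the "provided" argument of MPI_Init_thread reports the same information at run time, as in the sketch a few messages up.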
[OMPI users] undefined reference to `MPI::Comm::Comm()
Hi,

I am trying to locally compile software which uses openmpi (1.6.3), but I got this error:

restraint_camshift2.o:(.toc+0x98): undefined reference to `ompi_mpi_cxx_op_intercept'
restraint_camshift2.o: In function `Intracomm':
/home/users/didymos/openmpi-1.6.3/include/openmpi/ompi/mpi/cxx/intracomm.h:25: undefined reference to `MPI::Comm::Comm()'
/home/users/didymos/openmpi-1.6.3/include/openmpi/ompi/mpi/cxx/intracomm.h:25: undefined reference to `MPI::Comm::Comm()'
restraint_camshift2.o: In function `Intracomm':
/home/users/didymos/openmpi-1.6.3/include/openmpi/ompi/mpi/cxx/intracomm_inln.h:23: undefined reference to `MPI::Comm::Comm()'
restraint_camshift2.o: In function `Intracomm':
/home/users/didymos/openmpi-1.6.3/include/openmpi/ompi/mpi/cxx/intracomm.h:25: undefined reference to `MPI::Comm::Comm()'
/home/users/didymos/openmpi-1.6.3/include/openmpi/ompi/mpi/cxx/intracomm.h:25: undefined reference to `MPI::Comm::Comm()'
restraint_camshift2.o:/home/users/didymos/openmpi-1.6.3/include/openmpi/ompi/mpi/cxx/intracomm.h:25: more undefined references to `MPI::Comm::Comm()' follow
restraint_camshift2.o:(.data.rel.ro._ZTVN3MPI3WinE[_ZTVN3MPI3WinE]+0x48): undefined reference to `MPI::Win::Free()'
restraint_camshift2.o:(.data.rel.ro._ZTVN3MPI8DatatypeE[_ZTVN3MPI8DatatypeE]+0x78): undefined reference to `MPI::Datatype::Free()'
collect2: error: ld returned 1 exit status
make[3]: *** [mdrun] Error 1
make[3]: Leaving directory `/home/users/didymos/src/gromacs-4.5.5/src/kernel'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/home/users/didymos/src/gromacs-4.5.5/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/users/didymos/src/gromacs-4.5.5/src'
make: *** [all-recursive] Error 1

I am using gcc 4.7.3.

Any ideas or suggestions? Thanks!

Best,

tomek
Re: [OMPI users] undefined reference to `MPI::Comm::Comm()
So I am running OpenMPI 1.6.3 (config.log attached).
And I would like to install gromacs patched with plumed (scientific computing). Both use openmpi.
Gromacs alone compiles without errors (openMPI works). But when patched I got the error mentioned before.
I am sending the config file for the patched gromacs.
If you need any other file I would be happy to provide it.
Thanks a lot!
Best,

tomek

config_gromacs.log.bz2  Description: BZip2 compressed data
config_openmpi.log.bz2  Description: BZip2 compressed data
Re: [OMPI users] undefined reference to `MPI::Comm::Comm()
I used mpicc but when I switched in the Makefile to mpic++ it compiled without errors.
Thanks a lot!
Best,

tomek

On Tue, Jul 9, 2013 at 2:31 PM, Jeff Squyres (jsquyres) wrote:
> I don't see all the info requested from that web page, but it looks like OMPI
> built the C++ bindings ok.
>
> Did you use mpic++ to build Gromacs?
>
>
> On Jul 9, 2013, at 9:20 AM, Tomek Wlodarski wrote:
>
>> So I am running OpenMPI 1.6.3 (config.log attached).
>> And I would like to install gromacs patched with plumed (scientific
>> computing). Both use openmpi.
>> Gromacs alone compiles without errors (openMPI works). But when
>> patched I got the error mentioned before.
>> I am sending the config file for the patched gromacs.
>> If you need any other file I would be happy to provide it.
>> Thanks a lot!
>> Best,
>>
>> tomek
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
Re: [OMPI users] undefined reference to `MPI::Comm::Comm()
Oh you are right. Thanks.
Best,

tomek

On Tue, Jul 9, 2013 at 2:44 PM, Jeff Squyres (jsquyres) wrote:
> If you care, the issue is that it looks like Gromacs is using the MPI C++
> bindings. You therefore need to use the MPI C++ wrapper compiler, mpic++
> (vs. mpicc, which is the MPI C wrapper compiler).
>
>
> On Jul 9, 2013, at 9:41 AM, Tomek Wlodarski wrote:
>
>> I used mpicc but when I switched in the Makefile to mpic++ it compiled
>> without errors.
>> Thanks a lot!
>> Best,
>>
>> tomek
>>
>> On Tue, Jul 9, 2013 at 2:31 PM, Jeff Squyres (jsquyres) wrote:
>>> I don't see all the info requested from that web page, but it looks like
>>> OMPI built the C++ bindings ok.
>>>
>>> Did you use mpic++ to build Gromacs?
>>>
>>>
>>> On Jul 9, 2013, at 9:20 AM, Tomek Wlodarski wrote:
>>>
>>>> So I am running OpenMPI 1.6.3 (config.log attached).
>>>> And I would like to install gromacs patched with plumed (scientific
>>>> computing). Both use openmpi.
>>>> Gromacs alone compiles without errors (openMPI works). But when
>>>> patched I got the error mentioned before.
>>>> I am sending the config file for the patched gromacs.
>>>> If you need any other file I would be happy to provide it.
>>>> Thanks a lot!
>>>> Best,
>>>>
>>>> tomek
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
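[For anyone landing here with the same linker error: the missing MPI::... symbols (and ompi_mpi_cxx_op_intercept) come from Open MPI's C++ bindings library, which mpic++ adds to the link line and mpicc does not. A hypothetical Makefile fragment showing the switch - the variable names are made up for illustration and are not taken from the Gromacs/plumed build system; only the mdrun target name comes from the error output above.]

  # use the C++ wrapper for the final link so the C++ bindings library is pulled in
  MPICXX = mpic++     # adds the MPI C++ bindings to the link line
  MPICC  = mpicc      # C wrapper; fine for pure C sources only

  mdrun: $(OBJECTS)
  	$(MPICXX) -o $@ $(OBJECTS) $(LDFLAGS) $(LIBS)

Comparing mpic++ --showme:link with mpicc --showme:link makes the difference in linked libraries visible directly.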