Re: [OMPI users] OpenMPI Documentation?
On 13/09/2007, Jeff Squyres wrote:
> So there are at least a few people who are interested in this effort
> (keep chiming in if you are interested so that we can get a tally of
> who would like to be involved).

A docs project for OpenMPI is a great idea. I can chime in some tuits (or at least patches to docs) or whatever.

> - A few recent discussions about documentation came to the conclusion
> that Docbook (www.docbook.org) looked promising, but we didn't get
> deep into details / investigating the feasibility. One obvious Big
> Project using Docbook is Subversion (see http://svnbook.red-bean.com/).
> Docbook-produced HTML and PDF seem to look both pretty and functional.

Docbook gets my vote, especially when one considers the HTML and PDF output. I'd prefer LaTeX, but then I come from an academic background :-)

Regards,
Paul

--
Dr. Paul Cochrane
Regionales Rechenzentrum für Niedersachsen
Gottfried Wilhelm Leibniz Universität Hannover
Schlosswender Str. 5                Tel: (0511) 791-9085
30159 Hannover                      Fax: (0511) 762-3003
Re: [OMPI users] OpenMPI Documentation?
> Docbook gets my vote, especially when one considers the html and pdf
> output. I'd prefer LaTeX, but then I come from an academic background
> :-)

I prefer LaTeX too, for the same reason: academic background :) and a new-found love of LaTeX.

Regards,
Amit

--
Amit Kumar Saha
URL: http://amitsaha.in.googlepages.com
Re: [OMPI users] Two different compilation of openmpi
Hi,

On 13.09.2007, at 23:29, Francesco Pietra wrote:

> Is it possible to have two different compilations of openmpi on the same
> machine (dual Opterons, Debian Linux etch)?
>
> On that parallel computer, sander.MPI (Amber9) and openmpi 1.2.3 have both
> been compiled with Intel Fortran 9.1.036.
>
> Now I wish to install DOCK6 on this machine, and I am advised that it would
> be better compiled with the GNU compilers. As for openmpi, I could install
> the Debian package, which is GNU-compiled. Are conflicts between the two
> installations foreseeable? Although I have no experience with DOCK, I
> suspect that certain DOCK procedures call sander.MPI into play.
>
> I rule out the alternative of compiling Amber9 with the GNU compilers,
> which would run slower.

This is no problem. Instead of using any prebuilt package, compile and install the two different versions of OMPI on your own, and use two different locations for them, which you can achieve by e.g.:

  ./configure --prefix=/opt/my_location_a

and of course a different location for the other compilation. If you now compile your application, be sure to pick up the correct mpicc etc. in /opt/my_location_a/bin, and later on also use the matching mpiexec from there, by adjusting $PATH accordingly.

As we have only two different versions, we don't use the often-mentioned "modules" package for now, but hardcode the appropriate PATH in the job script for our queuing system.

-- Reuti
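A minimal sketch of the setup Reuti describes, for concreteness. All prefixes, compiler names, and the application are hypothetical examples; CC/CXX/F77/FC are the usual way to pick compilers at configure time:

    # Build one Open MPI against the Intel compilers, into its own prefix
    cd openmpi-1.2.3
    ./configure --prefix=/opt/openmpi-intel CC=icc CXX=icpc F77=ifort FC=ifort
    make all install

    # Build a second copy against the GNU compilers, into a different prefix
    make distclean
    ./configure --prefix=/opt/openmpi-gnu CC=gcc CXX=g++ F77=gfortran FC=gfortran
    make all install

    # In a job script, select one install by putting its bin/ first in PATH
    export PATH=/opt/openmpi-gnu/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi-gnu/lib:$LD_LIBRARY_PATH
    which mpicc mpiexec   # both should now resolve into /opt/openmpi-gnu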
Re: [OMPI users] Two different compilation of openmpi
Also, you might want to use this configure option to simplify switching:

  --enable-mpirun-prefix-by-default

For more details, see:

  ./configure --help

On 9/14/07, Reuti wrote:
> [...]

--
Tim Mattox, Ph.D. - http://homepage.mac.com/tmattox/
tmat...@gmail.com || timat...@open-mpi.org
I'm a bright... http://www.the-brights.net/
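The flag Tim mentions makes each install's mpirun behave as if --prefix <its own install dir> had been given, which helps when remote nodes would otherwise pick up the wrong install from their default PATH. A sketch, with hypothetical paths:

    # Configure each build so its mpirun implies its own --prefix
    ./configure --prefix=/opt/openmpi-gnu --enable-mpirun-prefix-by-default
    make all install

    # Then invoking a particular mpirun by absolute path is enough; the
    # remote daemons inherit the matching bin/ and lib/ automatically
    /opt/openmpi-gnu/bin/mpirun -np 4 ./my_app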
[OMPI users] Segmentation fault when spawning
Hi all,

I get a segmentation fault when trying to spawn a single process on the localhost (127.0.0.1). I tried both the current stable 1.2.3 and the beta 1.2.4; both ended up the same way. From the stack trace, I know it's the spawn call. Is it possible that there is an error with authentication? (I accepted the localhost certificates manually by opening up ssh sessions.)

[loud2:15472] *** Process received signal ***
[loud2:15472] Signal: Segmentation fault (11)
[loud2:15472] Signal code: Address not mapped (1)
[loud2:15472] Failing at address: 0x2b7182ea7fe0
[loud2:15472] [ 0] /lib64/libpthread.so.0 [0x2b6983637c10]
[loud2:15472] [ 1] /usr/local/lib/libopen-pal.so.0(_int_free+0x26d) [0x2b6982d75fdd]
[loud2:15472] [ 2] /usr/local/lib/libopen-pal.so.0(free+0xbd) [0x2b6982d762fd]
[loud2:15472] [ 3] /usr/local/lib/libopen-rte.so.0 [0x2b6982c33146]
[loud2:15472] [ 4] /usr/local/lib/libmpi.so.0(ompi_comm_start_processes+0xe61) [0x2b6982a8a3a1]
[loud2:15472] [ 5] /usr/local/lib/libmpi.so.0(PMPI_Comm_spawn+0x13a) [0x2b6982aaedfa]
[loud2:15472] [ 6] queen(_ZNK3MPI9Intracomm5SpawnEPKcPS2_iRKNS_4InfoEi+0x5e) [0x41f64a]
[loud2:15472] [ 7] queen(_ZN5blink5queen5Queen16startupLandscapeERKSsRSt4listINS0_4HostESaIS5_EE+0x9e2) [0x4222ae]
[loud2:15472] [ 8] queen(main+0x936) [0x428c4c]
[loud2:15472] [ 9] /lib64/libc.so.6(__libc_start_main+0xf4) [0x2b698375e154]
[loud2:15472] [10] queen(__gxx_personality_v0+0xa9) [0x4183f9]
[loud2:15472] *** End of error message ***

All parameters are being checked for correctness; MPI::ARGV_NULL is used for argv. Is there a way to enable detailed logging, or are the mpirun arguments all there is? (I did not find logs in the FAQ or in /var/log/.) Is there maybe a suggested solution to this problem, or do I have to debug OpenMPI with gdb now? Are there secret assumptions about the system this is running on? I had a version of the program running on another machine already (no changes to MPI-related parts)...

By the way, I very much welcome the recent thoughts about establishing a documentation project. :)

Thanks for any hint!
Best regards,
Murat

mkne@loud2:~/rep/DWA/queen> ompi_info -a
Open MPI: 1.2.4b0
Open MPI SVN revision: r15441
Open RTE: 1.2.4b0
Open RTE SVN revision: r15441
OPAL: 1.2.4b0
OPAL SVN revision: r15441
MCA backtrace: execinfo (MCA v1.0, API v1.0, Component v1.2.4)
MCA memory: ptmalloc2 (MCA v1.0, API v1.0, Component v1.2.4)
MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.2.4)
MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.2.4)
MCA maffinity: libnuma (MCA v1.0, API v1.0, Component v1.2.4)
MCA timer: linux (MCA v1.0, API v1.0, Component v1.2.4)
MCA installdirs: env (MCA v1.0, API v1.0, Component v1.2.4)
MCA installdirs: config (MCA v1.0, API v1.0, Component v1.2.4)
MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
MCA coll: basic (MCA v1.0, API v1.0, Component v1.2.4)
MCA coll: self (MCA v1.0, API v1.0, Component v1.2.4)
MCA coll: sm (MCA v1.0, API v1.0, Component v1.2.4)
MCA coll: tuned (MCA v1.0, API v1.0, Component v1.2.4)
MCA io: romio (MCA v1.0, API v1.0, Component v1.2.4)
MCA mpool: rdma (MCA v1.0, API v1.0, Component v1.2.4)
MCA mpool: sm (MCA v1.0, API v1.0, Component v1.2.4)
MCA pml: cm (MCA v1.0, API v1.0, Component v1.2.4)
MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.2.4)
MCA bml: r2 (MCA v1.0, API v1.0, Component v1.2.4)
MCA rcache: vma (MCA v1.0, API v1.0, Component v1.2.4)
MCA btl: self (MCA v1.0, API v1.0.1, Component v1.2.4)
MCA btl: sm (MCA v1.0, API v1.0.1, Component v1.2.4)
MCA btl: tcp (MCA v1.0, API v1.0.1, Component v1.0)
MCA topo: unity (MCA v1.0, API v1.0, Component v1.2.4)
MCA osc: pt2pt (MCA v1.0, API v1.0, Component v1.2.4)
MCA errmgr: hnp (MCA v1.0, API v1.3, Component v1.2.4)
MCA errmgr: orted (MCA v1.0, API v1.3, Component v1.2.4)
MCA errmgr: proxy (MCA v1.0, API v1.3, Component v1.2.4)
MCA gpr: null (MCA v1.0, API v1.0, Component v1.2.4)
MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.2.4)
MCA gpr: replica (MCA v1.0, API v1.0, Component v1.2.4)
MCA iof: proxy (MCA v1.0, API v1.0, Component v1.2.4)
MCA iof: svc (MCA v1.0, API v1.0, Component v1.2.4)
MCA ns: proxy (MCA v1.0, API v2.0, Component v1.2.4)
MCA ns: replica (MCA v1.0, API v2.0, Component v1.2.4)
MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
MCA ras: dash_hos
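Regarding the logging question: Open MPI writes its diagnostics to stderr rather than to files under /var/log, and extra verbosity comes from mpirun options and MCA parameters. A sketch of what is available (option behavior varies across versions; check `mpirun --help` and `ompi_info` on your own install):

    # Developer-level debugging output from mpirun and the launch daemons
    mpirun -d --debug-daemons -np 1 ./queen

    # List the tunable verbosity parameters on this installation
    ompi_info --param all all | grep -i verbose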
Re: [OMPI users] OpenMPI Documentation?
Sorry, but LaTeX is not a viable solution for open-source community development of documentation. DocBook (XML) is one solution; another is DITA. I'm looking into some other open-source communities that I'm familiar with to see what they're using.

The general requirements for authoring/publishing tools would be something like:

1. the tools themselves are open source
2. free
3. available on nearly all platforms (W/XP, Linux, MacOSX, Solaris, ...)
4. generates at least PDF and HTML
5. includes source management/version control
6. provides some opportunity for localization

We'll discuss this more once our docs email alias is established, which could be later today. Watch for an announcement from Jeff Squyres.

Amit Kumar Saha wrote:
> > Docbook gets my vote, especially when one considers the html and pdf
> > output. I'd prefer LaTeX, but then I come from an academic background :-)
> I prefer LaTeX too, for the same reason: academic background :) and a
> new-found love of LaTeX.
> Regards, Amit
Re: [OMPI users] mpiio romio etc
On Fri, Sep 07, 2007 at 10:18:55AM -0400, Brock Palen wrote:
> Is there a way to find out which ADIO options romio was built with?

Not easily. You can use 'nm' and look at the symbols :>

> Also does OpenMPI's romio come with pvfs2 support included? What
> about Lustre or GPFS.

OpenMPI has shipped with PVFS v2 support for a long time. Not sure how you enable it, though. --with-filesystems=ufs+nfs+pvfs2 might work for OpenMPI as it does for MPICH2.

All versions of ROMIO support Lustre and GPFS the same way: with the "generic unix filesystem" (UFS) driver. Weikuan Yu at ORNL has been working on a native "AD_LUSTRE" driver and some improvements to ROMIO collective I/O; these are likely to be in the next ROMIO release.

For GPFS, the only optimized MPI-IO implementation is IBM's MPI for AIX. You're likely to see decent performance with the UFS driver, though.

==rob

--
Rob Latham
Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
Argonne National Lab, IL USA                 B29D F333 664A 4280 315B
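Rob's `nm` trick, spelled out. ROMIO's per-filesystem ADIO drivers use symbols with prefixes such as ADIOI_UFS_, ADIOI_NFS_, or ADIOI_PVFS2_ (an assumption about the naming convention; the library path is also a hypothetical example):

    # Dump the dynamic symbols and list the ADIO driver prefixes present
    nm -D /usr/local/lib/libmpi.so.0 | grep -o 'ADIOI_[A-Z0-9]*_' | sort -u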
Re: [OMPI users] mpiio romio etc
Hi,

To pass FLAGS to the ROMIO configure script, the Open MPI configure option is:

  --with-io-romio-flags=FLAGS

So something like:

  --with-io-romio-flags="--with-filesystems=ufs+nfs+pvfs2"

should work, though I have not tested it. You can see all the ROMIO configure flags by running:

  ./ompi/mca/io/romio/romio/configure --help

from the top directory of the Open MPI source.

If you want to see which file systems support has been built for, you should just be able to look in ROMIO's config.log:

  grep FILE_SYSTEM ./ompi/mca/io/romio/romio/config.log

I am not an expert in this area, but I hope this helps.

Tim

Robert Latham wrote:
> [...]
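Tim's steps, collected into one pass for reference (untested, as he says; whether the pvfs2 driver actually builds also depends on the PVFS2 headers and libraries being installed):

    # See what ROMIO's own configure script accepts
    ./ompi/mca/io/romio/romio/configure --help

    # Build Open MPI, forwarding the filesystem selection through to ROMIO
    ./configure --with-io-romio-flags="--with-filesystems=ufs+nfs+pvfs2"
    make all install

    # Afterwards, check which filesystem drivers ROMIO's configure enabled
    grep FILE_SYSTEM ./ompi/mca/io/romio/romio/config.log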
Re: [OMPI users] OpenMPI Documentation?
Ok, we're up and running. There is a new mailing list available:

  d...@open-mpi.org

You must be subscribed in order to post. Subscribe here:

  http://www.open-mpi.org/mailman/listinfo.cgi/docs

The web archives of this list on the www.open-mpi.org web site currently give a "permission denied" error; that'll be fixed in the next [business] day or two.

Anyone who is interested -- please go subscribe on the docs list, and let's continue the conversation over there.

On Sep 14, 2007, at 11:47 AM, Richard Friedman wrote:
> [...]

--
Jeff Squyres
Cisco Systems
Re: [OMPI users] mpiio romio etc
Sorry for not replying earlier -- going on vacation for a few days really puts you behind in e-mail. :-)

I think that we should have a simple way to look up at least *something* about ROMIO in ompi_info. Perhaps we can easily snarf the flags provided to --with-io-romio-flags and put them in an MCA parameter that would then be query-able through ompi_info.

Rob -- is there a public constant/symbol somewhere where we can access some form of ROMIO's version number? If so, we can also make that query-able via ompi_info.

On Sep 14, 2007, at 1:53 PM, Tim Prins wrote:
> [...]

--
Jeff Squyres
Cisco Systems
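For context, ompi_info can already report MCA parameters per framework and component, so a wired-up version of Jeff's proposal might be queried like this (the flags parameter itself is purely hypothetical at this point -- he is only proposing the mechanism):

    # List whatever parameters the ROMIO io component registers
    ompi_info --param io romio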
Re: [OMPI users] mpiio romio etc
On Fri, Sep 14, 2007 at 02:16:46PM -0400, Jeff Squyres wrote:
> Rob -- is there a public constant/symbol somewhere where we can
> access some form of ROMIO's version number? If so, we can also make
> that query-able via ompi_info.

There really isn't. We used to have a VERSION variable in configure.in, but more often than not it would be out of date. When you sync with ROMIO, you could update a datestamp, maybe? Just throwing out ideas.

==rob

--
Rob Latham
Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
Argonne National Lab, IL USA                 B29D F333 664A 4280 315B
Re: [OMPI users] mpiio romio etc
On Sep 14, 2007, at 2:28 PM, Robert Latham wrote:
>> Rob -- is there a public constant/symbol somewhere where we can
>> access some form of ROMIO's version number? If so, we can also make
>> that query-able via ompi_info.
> There really isn't. We used to have a VERSION variable in
> configure.in, but more often than not it would be out of date. When
> you sync with ROMIO, you could update a datestamp maybe? Just
> throwing out ideas.

Ok. Maybe we'll just make a hard-coded string somewhere -- "ROMIO from MPICH2 vABC, on AA/BB/" or somesuch. That'll at least give some indication of what version you've got.

--
Jeff Squyres
Cisco Systems
Re: [OMPI users] mpiio romio etc
On Fri, Sep 14, 2007 at 02:31:51PM -0400, Jeff Squyres wrote:
> Ok. Maybe we'll just make a hard-coded string somewhere "ROMIO from
> MPICH2 vABC, on AA/BB/" or somesuch. That'll at least give some
> indication of what version you've got.

That sort-of reminds me: ROMIO (well, all of MPICH2) is going to move to SVN "one of these days". Once we've done that, you'll be able to sync up with both MPICH2 releases and our development branch. I think it wouldn't be a problem for us to tag ROMIO whenever you sync up with it.

==rob

--
Rob Latham
Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
Argonne National Lab, IL USA                 B29D F333 664A 4280 315B
Re: [OMPI users] mpiio romio etc
On Sep 14, 2007, at 2:51 PM, Robert Latham wrote:
>> Ok. Maybe we'll just make a hard-coded string somewhere "ROMIO from
>> MPICH2 vABC, on AA/BB/" or somesuch. That'll at least give some
>> indication of what version you've got.
> That sort-of reminds me: ROMIO (well, all of MPICH2) is going to move
> to SVN "one of these days". Once we've done that, you'll be able to
> sync up with both MPICH2 releases and our development branch. I think
> it wouldn't be a problem for us to tag ROMIO whenever you sync up
> with it.

Coolio. Would you be amenable to a few patches? I think we can remove all the file-renaming stuff, but there are some other things that we did to make ROMIO integrate nicely into Open MPI (the biggest issue was configure and the build system, but I know that Brian has "some ideas" about that -- I don't know what they are, though).

Moving ROMIO to public SVN won't solve many of the integration and logistics issues, but it would allow us to snarf patches directly from your SVN, which might make our continual-over-time integration a little easier. In a perfect world, it would be great to svn:external our romio directory to a particular tag/release in your SVN, but I think that's probably too much to hope for.

I can't remember all the particulars offhand; we'd probably want to have a sit-down discussion with you, Brian, and me to figure this stuff out, if you're interested.

--
Jeff Squyres
Cisco Systems