[OMPI users] new tutorial books on MPI

2014-10-13 Thread Rajeev Thakur
Since many of you are interested in MPI, I wanted to bring to your attention 
two new tutorial books on MPI:

1. Using MPI: Portable Parallel Programming with the Message-Passing Interface, 
3rd edition
   by William Gropp, Ewing Lusk, and Anthony Skjellum
   This is an updated version of their earlier book, and covers basic MPI. 
   
http://www.amazon.com/Using-MPI-Programming-Message-Passing-Engineering/dp/0262527391/

2. Using Advanced MPI: Modern Features of the Message-Passing Interface
   by William Gropp, Torsten Hoefler, Rajeev Thakur, and Ewing Lusk
   This is a new book on advanced features of MPI, including the new features 
in MPI-3.
   
http://www.amazon.com/Using-Advanced-MPI-Message-Passing-Engineering/dp/0262527634/

Rajeev



Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-15 Thread Rajeev Thakur
For MPICH2 1.0.7, configure with --with-device=ch3:nemesis. That will use
shared memory within a node, unlike ch3:sock, which uses TCP. Nemesis is the
default in 1.1a1.
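
A build along these lines should pick up Nemesis (the prefix and compilers are
just examples; adjust for your setup):

    # Build MPICH2 1.0.7 with the Nemesis channel (shared memory within a node)
    export CC=gcc CXX=g++ F77=gfortran F90=gfortran
    ./configure --prefix=$HOME/mpich2-nemesis --with-device=ch3:nemesis
    make && make install

Afterwards, bin/mpich2version should report "MPICH2 Device: ch3:nemesis"
rather than ch3:sock.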

Rajeev


> Date: Wed, 15 Oct 2008 18:21:17 +0530
> From: "Sangamesh B" 
> Subject: Re: [OMPI users] Performance: MPICH2 vs OpenMPI
> To: "Open MPI Users" 
> Message-ID:
>   
> Content-Type: text/plain; charset="iso-8859-1"
> 
> On Fri, Oct 10, 2008 at 10:40 PM, Brian Dobbins 
>  wrote:
> 
> >
> > Hi guys,
> >
> > On Fri, Oct 10, 2008 at 12:57 PM, Brock Palen 
>  wrote:
> >
> >> Actually I had much different results,
> >>
> >> gromacs-3.3.1  one node dual core dual socket opt2218  
> openmpi-1.2.7
> >>  pgi/7.2
> >> mpich2 gcc
> >>
> >
> >For some reason, the difference in minutes didn't come 
> through, it
> > seems, but I would guess that if it's a medium-large 
> difference, then it has
> > its roots in PGI7.2 vs. GCC rather than MPICH2 vs. OpenMPI. 
>  Though, to be
> > fair, I find GCC vs. PGI (for C code) is often a toss-up - 
> one may beat the
> > other handily on one code, and then lose just as badly on another.
> >
> > I think my install of mpich2 may be bad, I have never 
> installed it before,
> >>  only mpich1, OpenMPI and LAM. So take my mpich2 numbers 
> with salt, Lots of
> >> salt.
> >
> >
> >   I think the biggest difference in performance with 
> various MPICH2 installs
> comes from differences in the 'channel' used.  I tend to 
> make sure that I
> > use the 'nemesis' channel, which may or may not be the 
> default these days.
> > If not, though, most people would probably want it.  I 
> think it has issues
> > with threading (or did ages ago?), but I seem to recall it being
> > considerably faster than even the 'ssm' channel.
> >
> >   Sangamesh:  My advice to you would be to recompile 
> Gromacs and specify,
> > in the *Gromacs* compile / configure, to use the same 
> CFLAGS you used with
> > MPICH2.  Eg, "-O2 -m64", whatever.  If you do that, I bet 
> the times between
> > MPICH2 and OpenMPI will be pretty comparable for your 
> benchmark case -
> > especially when run on a single processor.
> >
> 
> I reinstalled all the software with -O3 optimization. Following are the
> performance numbers for a 4-process job on a single node:
> 
> MPICH2: 26 m 54 s
> OpenMPI:   24 m 39 s
> 
> More details:
> 
> $ /home/san/PERF_TEST/mpich2/bin/mpich2version
> MPICH2 Version: 1.0.7
> MPICH2 Release date:Unknown, built on Mon Oct 13 18:02:13 IST 2008
> MPICH2 Device:  ch3:sock
> MPICH2 configure:   --prefix=/home/san/PERF_TEST/mpich2
> MPICH2 CC:  /usr/bin/gcc -O3 -O2
> MPICH2 CXX: /usr/bin/g++  -O2
> MPICH2 F77: /usr/bin/gfortran -O3 -O2
> MPICH2 F90: /usr/bin/gfortran  -O2
> 
> 
> $ /home/san/PERF_TEST/openmpi/bin/ompi_info
> Open MPI: 1.2.7
>Open MPI SVN revision: r19401
> Open RTE: 1.2.7
>Open RTE SVN revision: r19401
> OPAL: 1.2.7
>OPAL SVN revision: r19401
>   Prefix: /home/san/PERF_TEST/openmpi
>  Configured architecture: x86_64-unknown-linux-gnu
>Configured by: san
>Configured on: Mon Oct 13 19:10:13 IST 2008
>   Configure host: locuzcluster.org
> Built by: san
> Built on: Mon Oct 13 19:18:25 IST 2008
>   Built host: locuzcluster.org
>   C bindings: yes
> C++ bindings: yes
>   Fortran77 bindings: yes (all)
>   Fortran90 bindings: yes
>  Fortran90 bindings size: small
>   C compiler: /usr/bin/gcc
>  C compiler absolute: /usr/bin/gcc
> C++ compiler: /usr/bin/g++
>C++ compiler absolute: /usr/bin/g++
>   Fortran77 compiler: /usr/bin/gfortran
>   Fortran77 compiler abs: /usr/bin/gfortran
>   Fortran90 compiler: /usr/bin/gfortran
>   Fortran90 compiler abs: /usr/bin/gfortran
>  C profiling: yes
>C++ profiling: yes
>  Fortran77 profiling: yes
>  Fortran90 profiling: yes
>   C++ exceptions: no
>   Thread support: posix (mpi: no, progress: no)
>   Internal debug support: no
>  MPI parameter check: runtime
> Memory profiling support: no
> Memory debugging support: no
>  libltdl support: yes
>Heterogeneous support: yes
>  mpirun default --prefix: no
> 
> Thanks,
> Sangamesh



Re: [OMPI users] on SEEK_*

2008-10-16 Thread Rajeev Thakur
In the upcoming 1.0.8 release of MPICH2 (due in the next week or so) we are fixing it
in a way similar to Open MPI, so you shouldn't need to undef anything even in MPICH2.
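
Until then, a minimal sketch of the workaround our FAQ describes (untested here)
is to define MPICH_IGNORE_CXX_SEEK before including mpi.h:

    // Tell MPICH2's C++ bindings to skip the SEEK_* conflict check.
    #define MPICH_IGNORE_CXX_SEEK
    #include <iostream>   // <iostream>/<cstdio> may define the stdio SEEK_* macros
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI::Init(argc, argv);
        // ... file I/O or other MPI work here ...
        MPI::Finalize();
        return 0;
    }

Code that actually uses the MPI::SEEK_* constants may still need the #undef
approach shown in the code quoted below.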

Rajeev


> Date: Thu, 16 Oct 2008 12:29:01 +0200
> From: Jed Brown 
> Subject: [OMPI users] on SEEK_*
> To: us...@open-mpi.org
> Message-ID: <20081016102901.gg10...@brakk.ethz.ch>
> Content-Type: text/plain; charset="utf-8"
> 
> I've just run into this chunk of code.
> 
> /* MPICH2 will fail if SEEK_* macros are defined
>  * because they are also C++ enums. Undefine them
>  * when including mpi.h and then redefine them
>  * for sanity.
>  */
> #  ifdef SEEK_SET
> #define MB_SEEK_SET SEEK_SET
> #define MB_SEEK_CUR SEEK_CUR
> #define MB_SEEK_END SEEK_END
> #undef SEEK_SET
> #undef SEEK_CUR
> #undef SEEK_END
> #  endif
> #include "mpi.h"
> #  ifdef MB_SEEK_SET
> #define SEEK_SET MB_SEEK_SET
> #define SEEK_CUR MB_SEEK_CUR
> #define SEEK_END MB_SEEK_END
> #undef MB_SEEK_SET
> #undef MB_SEEK_CUR
> #undef MB_SEEK_END
> #  endif
> 
> 
> MPICH2 (1.1.0a1) gives these errors if SEEK_* are present:
> 
> /opt/mpich2/include/mpicxx.h:26:2: error: #error "SEEK_SET is #defined but must not be for the C++ binding of MPI"
> /opt/mpich2/include/mpicxx.h:30:2: error: #error "SEEK_CUR is #defined but must not be for the C++ binding of MPI"
> /opt/mpich2/include/mpicxx.h:35:2: error: #error "SEEK_END is #defined but must not be for the C++ binding of MPI"
> 
> but when SEEK_* is not present and iostream has been 
> included, OMPI-dev
> gives these errors.
> 
> /home/ompi/include/openmpi/ompi/mpi/cxx/mpicxx.h:53: error: ‘SEEK_SET’ was not declared in this scope
> /home/ompi/include/openmpi/ompi/mpi/cxx/mpicxx.h:54: error: ‘SEEK_CUR’ was not declared in this scope
> /home/ompi/include/openmpi/ompi/mpi/cxx/mpicxx.h:55: error: ‘SEEK_END’ was not declared in this scope
> 
> There is a subtle difference between OMPI 1.2.7 and -dev at least with
> GCC 4.3.2.  If iostream was included before mpi.h and then SEEK_* are
> #undef'd then 1.2.7 succeeds while -dev fails with the message above.
> If stdio.h is included and SEEK_* are #undef'd then both OMPI versions
> fail.  MPICH2 requires in both cases that SEEK_* be #undef'd.
> 
> What do you recommend to remain portable?  Is this really an MPICH2
> issue?  The standard doesn't seem to address this issue.  The 
> MPICH2 FAQ
> has this
> 
> http://www.mcs.anl.gov/research/projects/mpich2/support/index.
> php?s=faqs#cxxseek
> 
> 
> Jed
> 
> Date: Thu, 16 Oct 2008 07:43:54 -0400
> From: Jeff Squyres 
> Subject: Re: [OMPI users] on SEEK_*
> To: Open MPI Users 
> Message-ID: 
> Content-Type: text/plain; charset=WINDOWS-1252; format=flowed;
>   delsp=yes
> 
> On Oct 16, 2008, at 6:29 AM, Jed Brown wrote:
> 
> > but when SEEK_* is not present and iostream has been 
> included, OMPI- 
> > dev
> > gives these errors.
> >
> > /home/ompi/include/openmpi/ompi/mpi/cxx/mpicxx.h:53: error: ‘SEEK_SET’ was not declared in this scope
> > /home/ompi/include/openmpi/ompi/mpi/cxx/mpicxx.h:54: error: ‘SEEK_CUR’ was not declared in this scope
> > /home/ompi/include/openmpi/ompi/mpi/cxx/mpicxx.h:55: error: ‘SEEK_END’ was not declared in this scope
> >
> > There is a subtle difference between OMPI 1.2.7 and -dev at 
> least with
> > GCC 4.3.2.  If iostream was included before mpi.h and then 
> SEEK_* are
> > #undef'd then 1.2.7 succeeds while -dev fails with the 
> message above.
> > If stdio.h is included and SEEK_* are #undef'd then both 
> OMPI versions
> > fail.  MPICH2 requires in both cases that SEEK_* be #undef'd.
> 
> Open MPI doesn't require undef'ing of anything.  It should also not  
> require any special ordering of include files.  Specifically, the  
> following codes both compile fine for me with 1.2.8 and the OMPI SVN  
> trunk (which is what I assume you mean by "-dev"?):
> 
> #include <mpi.h>
> #include <iostream>
> int a = MPI::SEEK_SET;
> 
> and
> 
> #include <iostream>
> #include <mpi.h>
> int a = MPI::SEEK_SET;
> 
> So in short: don't #undef anything and OMPI should do the 
> Right things.
> 
> > What do you recommend to remain portable?  Is this really an MPICH2
> > issue?  The standard doesn't seem to address this issue.  
> The MPICH2  
> > FAQ
> > has this
> >
> > 
> http://www.mcs.anl.gov/research/projects/mpich2/support/index.
> php?s=faqs#cxxseek
> 
> 
> This is actually a problem in the MPI-2 spec; the names  
> "MPI::SEEK_SET" (and friends) were unfortunately chosen poorly.   
> Hopefully that'll be fixed relatively soon, in MPI-2.2.
> 
> MPICH chose to handle this situation a different way than we 
> did, and  
> apparently requires that you either #undef s