On Thursday 06 September 2007 02:29, Jeff Squyres wrote:
> Unfortunately, <iostream> is there for a specific reason.  The  
> MPI::SEEK_* names are problematic because they clash with the  
> equivalent C constants.  With the tricks that we have to play to make  
> those constants [at least mostly] work in the MPI C++ namespace, we  
> *must* include them.  The comment in mpicxx.h explains:
> 
> // We need to include the header files that define SEEK_* or use them
> // in ways that require them to be #defines so that if the user
> // includes them later, the double inclusion logic in the headers will
> // prevent trouble from occurring.
> // include so that we can smash SEEK_* properly
> #include <stdio.h>
> // include because on Linux, there is one place that assumes SEEK_* is
> // a #define (it's used in an enum).
> #include <iostream>
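
(Aside: here is roughly what that clash looks like -- a minimal sketch made
up for illustration, not actual Open MPI code.  SEEK_SET is a preprocessor
macro in <stdio.h>, so a namespaced C++ constant with the same name gets
rewritten by the preprocessor before the compiler ever sees it:

#include <stdio.h>           // defines SEEK_SET as a macro, usually 0

namespace MPI {
  // The preprocessor has already replaced SEEK_SET with 0, so the compiler
  // is handed "const int 0 = 600;" -- a syntax error.  600 is just a
  // placeholder value here.
  const int SEEK_SET = 600;
}

This refuses to compile, which is why mpicxx.h has to pull in <stdio.h>
first and "smash" the macros, as the quoted comment says.)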
> 
> Additionally, many of the C++ MPI bindings are implemented as inline  
> functions, meaning that, yes, it does add lots of extra code to be  
> compiled.  Sadly, that's the price we pay for optimization (the fact  
> that they're inlined allows the runtime cost to be zero -- we used to  
> have a paper on the LAM/MPI web site showing specific performance  
> numbers to back up this claim, but I can't find it anymore :-\ [the  
> OMPI C++ bindings were derived from the LAM/MPI C++ bindings]).
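
(For a rough picture of what "inlined" means here -- my own sketch of the
wrapper pattern, not the real Open MPI headers; the class layout and the
member name mpi_comm are made up:

#include <mpi.h>

// Simplified stand-in for an MPI C++ communicator class.  Each C++ binding
// is a thin inline wrapper over the corresponding C call, so an optimizing
// compiler can collapse Get_rank() into a direct call of MPI_Comm_rank --
// hence the zero-cost claim above.
class Comm {
public:
    explicit Comm(MPI_Comm c) : mpi_comm(c) {}

    int Get_rank() const {
        int rank;
        MPI_Comm_rank(mpi_comm, &rank);
        return rank;
    }

private:
    MPI_Comm mpi_comm;   // the wrapped C handle
};

The flip side is that all of those inline bodies live in the headers, which
is exactly the extra parsing work being complained about below.)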
> 
> You have two options for speeding up C++ builds:
> 
> 1. Disable OMPI's MPI C++ bindings altogether with the
> --disable-mpi-cxx configure flag.  This means that <mpi.h> won't
> include any of those extra C++ header files at all.
> 
> 2. If you're not using the MPI-2 C++ bindings for the IO  
> functionality, you can disable the SEEK_* macros (and therefore  
> <stdio.h> and <iostream>) with the --disable-mpi-cxx-seek configure  
> flag.

Maybe this could be a third option:

3. Just add -DOMPI_SKIP_MPICXX to your compilation flags to skip the
inclusion of mpicxx.h entirely (example command lines for all three
options below).
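
For reference, the three options would look something like this (treat
these as sketches; "..." stands for whatever other configure arguments you
normally pass):

$ ./configure --disable-mpi-cxx ...         # 1: no MPI C++ bindings at all
$ ./configure --disable-mpi-cxx-seek ...    # 2: keep the bindings, drop the SEEK_* handling
$ mpic++ -DOMPI_SKIP_MPICXX -c foo.cpp      # 3: per-file, skip mpicxx.h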

-- Sven 

> See "./configure --help" for a full list of configure flags that are  
> available.
> 
> On Sep 4, 2007, at 4:22 PM, Thompson, Aidan P. wrote:
> 
> > This is more a comment than a question. I think the compile time
> > required for large applications that use Open MPI is unnecessarily
> > long. The situation could be greatly improved by streamlining the
> > number of C++ header files that are included. Currently, LAMMPS
> > (lammps.sandia.gov) takes 61 seconds to compile with a dummy MPI
> > library and 262 seconds with Open MPI, a 4x slowdown.
> >
> > I noticed that <iostream> is included by mpicxx.h, for no good
> > reason. To measure the cost of this, I compiled the following source
> > file 1) without any include files, 2) with mpi.h, 3) with <iostream>,
> > and 4) with both:
> >
> > $ more foo.cpp
> > #ifdef FOO_MPI
> > #include "mpi.h"
> > #endif
> >
> > #ifdef FOO_IO
> > #include <iostream>
> > #endif
> >
> > void foo() {};
> >
> > $ time mpic++ -c foo.cpp
> >         0.04 real         0.02 user         0.02 sys
> > $ time mpic++ -DFOO_MPI -c foo.cpp
> >         0.58 real         0.47 user         0.07 sys
> > $ time mpic++ -DFOO_IO -c foo.cpp
> >         0.30 real         0.23 user         0.05 sys
> > $ time mpic++ -DFOO_IO -DFOO_MPI -c foo.cpp
> >         0.56 real         0.47 user         0.07 sys
> >
> > Including mpi.h adds about 0.5 seconds to the compile time and  
> > iostream
> > accounts for about half of that. With optimization, the effect is even
> > greater. When you have hundreds of source files, that really adds up.
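
(Quick back-of-the-envelope from the numbers above: at roughly 0.5 s of
extra header work per translation unit -- more with optimization -- a
project with a few hundred source files loses a few minutes per full
rebuild, which lines up with the 61 s vs. 262 s LAMMPS figures quoted at
the top.)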
> >
> > How about cleaning up your include system?
> >
> > Aidan
> >
> > -- 
> >       Aidan P. Thompson
> >       01435 Multiscale Dynamic Materials Modeling
> >       Sandia National Laboratories
> >       PO Box 5800, MS 1322     Phone: 505-844-9702
> >       Albuquerque, NM 87185    FAX  : 505-845-7442
> >       mailto:atho...@sandia.gov
> 
> 
> -- 
> Jeff Squyres
> Cisco Systems
> 
