On Fri, Oct 10, 2008 at 10:40 PM, Brian Dobbins <bdobb...@gmail.com> wrote:

>
> Hi guys,
>
> On Fri, Oct 10, 2008 at 12:57 PM, Brock Palen <bro...@umich.edu> wrote:
>
>> Actually I had much different results,
>>
>> gromacs-3.3.1  one node dual core dual socket opt2218  openmpi-1.2.7
>>  pgi/7.2
>> mpich2 gcc
>>
>
>    For some reason, the difference in minutes didn't come through, it
> seems, but I would guess that if it's a medium-to-large difference, it has
> its roots in PGI 7.2 vs. GCC rather than MPICH2 vs. Open MPI.  Though, to be
> fair, I find GCC vs. PGI (for C code) is often a toss-up - one may beat the
> other handily on one code and lose just as badly on another.
>
>> I think my install of mpich2 may be bad; I have never installed it
>> before, only mpich1, Open MPI, and LAM.  So take my mpich2 numbers with a
>> grain of salt - lots of salt.
>
>
>   I think the biggest difference in performance between various MPICH2
> installs comes from differences in the 'channel' used.  I tend to make sure
> I use the 'nemesis' channel, which may or may not be the default these days.
> If not, though, most people would probably want it.  I think it has issues
> with threading (or did ages ago?), but I seem to recall it being
> considerably faster than even the 'ssm' channel.
>
>   Sangamesh:  My advice would be to recompile Gromacs and specify, in the
> *Gromacs* compile/configure step, the same CFLAGS you used with MPICH2 -
> e.g. "-O2 -m64", whatever.  If you do that, I bet the times between MPICH2
> and Open MPI will be pretty comparable for your benchmark case - especially
> when run on a single processor.
>

I reinstalled all the software with -O3 optimization. Following are the
performance numbers for a 4-process job on a single node:

MPICH2:    26 m 54 s
Open MPI:  24 m 39 s
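
(The run commands themselves aren't listed below; presumably each case was
launched with its own MPI's launcher on the same input, roughly along these
lines - the mdrun binary names and the .tpr file are placeholders, not the
actual ones used:

$ /home/san/PERF_TEST/openmpi/bin/mpirun -np 4 mdrun_openmpi -s bench.tpr
$ /home/san/PERF_TEST/mpich2/bin/mpiexec -n 4 mdrun_mpich2 -s bench.tpr

MPICH2 1.0.7's mpiexec also needs an mpd ring started first, e.g. with
mpdboot.)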

More details:

$ /home/san/PERF_TEST/mpich2/bin/mpich2version
MPICH2 Version:         1.0.7
MPICH2 Release date:    Unknown, built on Mon Oct 13 18:02:13 IST 2008
MPICH2 Device:          ch3:sock
MPICH2 configure:       --prefix=/home/san/PERF_TEST/mpich2
MPICH2 CC:      /usr/bin/gcc -O3 -O2
MPICH2 CXX:     /usr/bin/g++  -O2
MPICH2 F77:     /usr/bin/gfortran -O3 -O2
MPICH2 F90:     /usr/bin/gfortran  -O2
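
Note that the ch3:sock device shown above is the plain sockets channel. To
try the 'nemesis' channel Brian mentioned, MPICH2 would need to be
reconfigured with its standard device option, roughly:

$ ./configure --prefix=/home/san/PERF_TEST/mpich2 --with-device=ch3:nemesis

(Whether that changes anything for this single-node case is untested here.)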


$ /home/san/PERF_TEST/openmpi/bin/ompi_info
                Open MPI: 1.2.7
   Open MPI SVN revision: r19401
                Open RTE: 1.2.7
   Open RTE SVN revision: r19401
                    OPAL: 1.2.7
       OPAL SVN revision: r19401
                  Prefix: /home/san/PERF_TEST/openmpi
 Configured architecture: x86_64-unknown-linux-gnu
           Configured by: san
           Configured on: Mon Oct 13 19:10:13 IST 2008
          Configure host: locuzcluster.org
                Built by: san
                Built on: Mon Oct 13 19:18:25 IST 2008
              Built host: locuzcluster.org
              C bindings: yes
            C++ bindings: yes
      Fortran77 bindings: yes (all)
      Fortran90 bindings: yes
 Fortran90 bindings size: small
              C compiler: /usr/bin/gcc
     C compiler absolute: /usr/bin/gcc
            C++ compiler: /usr/bin/g++
   C++ compiler absolute: /usr/bin/g++
      Fortran77 compiler: /usr/bin/gfortran
  Fortran77 compiler abs: /usr/bin/gfortran
      Fortran90 compiler: /usr/bin/gfortran
  Fortran90 compiler abs: /usr/bin/gfortran
             C profiling: yes
           C++ profiling: yes
     Fortran77 profiling: yes
     Fortran90 profiling: yes
          C++ exceptions: no
          Thread support: posix (mpi: no, progress: no)
  Internal debug support: no
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
         libltdl support: yes
   Heterogeneous support: yes
 mpirun default --prefix: no

Thanks,
Sangamesh

>
>   Cheers,
>   - Brian
>
>
MPICH2

[san@locuzcluster mpich2-1.0.7]$ ./configure --help

[san@locuzcluster mpich2-1.0.7]$ export CC=`which gcc`

[san@locuzcluster mpich2-1.0.7]$ export CXX=`which g++`

[san@locuzcluster mpich2-1.0.7]$ export F77=`which gfortran`

[san@locuzcluster mpich2-1.0.7]$ export F90=`which gfortran`

[san@locuzcluster mpich2-1.0.7]$ export CFLAGS=-O3 

[san@locuzcluster mpich2-1.0.7]$ export FFLAGS=-O3  

[san@locuzcluster mpich2-1.0.7]$ ./configure --prefix=/home/san/PERF_TEST/mpich2 | tee config_out

[san@locuzcluster mpich2-1.0.7]$ make | tee make_out
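
(The install step isn't shown, but presumably something like

$ make install | tee install_out

was run as well, as for Open MPI below, since mpich2version was executed from
the install prefix.)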


OPENMPI

[san@locuzcluster openmpi-1.2.7]$ export FC=`which gfortran`

[san@locuzcluster openmpi-1.2.7]$ ./configure --prefix=/home/san/PERF_TEST/openmpi | tee config_out

[san@locuzcluster openmpi-1.2.7]$ make | tee make_out

[san@locuzcluster openmpi-1.2.7]$ make install | tee install_out




FFTW

$ export CC=`which gcc`

$ export CXX=`which g++`

$ export F77=`which gfortran`

$ export CFLAGS=-O3

$ export FFLAGS=-O3
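
The FFTW configure/make commands themselves aren't shown; for a
single-precision Gromacs build they would typically look something like the
following - the --enable-float flag is an assumption about how this FFTW was
built, not taken from the listing above:

$ ./configure --prefix=/home/san/PERF_TEST/fftw --enable-float
$ make && make install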



GROMACS

With MPICH2

$ export CC=`which gcc`

$ export CXX=`which g++`

$ export F77=`which gfortran`

$ export CFLAGS="-I/home/san/PERF_TEST/fftw/include -O3"

$ export LDFLAGS="-L/home/san/PERF_TEST/fftw/lib"

$ export MPICC=/home/san/PERF_TEST/mpich2/bin/mpicc
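
The Gromacs configure line isn't shown either; with the environment above it
would presumably be something along these lines (the install prefix and
--program-suffix value are illustrative, not the actual ones used):

$ ./configure --enable-mpi --prefix=/home/san/PERF_TEST/gromacs_mpich2 --program-suffix=_mpich2
$ make && make install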


With OPENMPI

$ export CC=`which gcc`

$ export CXX=`which g++`

$ export F77=`which gfortran`

$ export CFLAGS="-I/home/san/PERF_TEST/fftw/include -O3"

$ export LDFLAGS="-L/home/san/PERF_TEST/fftw/lib"
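
And correspondingly for the Open MPI build (again, prefix and suffix are
illustrative):

$ export MPICC=/home/san/PERF_TEST/openmpi/bin/mpicc
$ ./configure --enable-mpi --prefix=/home/san/PERF_TEST/gromacs_openmpi --program-suffix=_openmpi
$ make && make install

Note that MPICC isn't exported in this listing, so configure may have picked
up whichever mpicc was first in the PATH - worth double-checking so the two
Gromacs builds differ only in the MPI library underneath.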
