On Wed, 2009-02-04 at 18:01 -0200, Alexandre Suman de Araujo wrote:
> Quoting Jussi Lehtola <jussi.leht...@helsinki.fi>:
> > That's highly unlikely: it would be a severe performance bug, which
> > would have been picked up by the kernel packager.
> >
> > How did you configure the parallel version? What MPI environment did you 
> > use?
> 
> First I used the Ubuntu binary packages for Gromacs (3.3.2) and LAM-MPI. Afterwards I
> compiled both with the Intel C and Fortran 11.0 compilers.
> 
> In both cases (binary and compiled), the performance is basically the same.
> 
> In further tests I compiled Gromacs 4.0 and, again, the performance was the same.

Please keep the discussion on the list.

Try compiling against OpenMPI (or using the binary compiled against it).
LAM has been deprecated for many years, and should not be used anymore.
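For Gromacs 4.0 built from source, something along these lines should work; the package names and install prefix below are assumptions for an Ubuntu system, so adjust them as needed:

```shell
# Build Gromacs 4.0 against OpenMPI instead of LAM
# (package names and paths are assumed; adjust for your system).

# On Ubuntu, the OpenMPI compiler wrappers come from:
#   sudo apt-get install openmpi-bin libopenmpi-dev

# Point configure at the OpenMPI wrappers.
export CC=mpicc
export F77=mpif77

# --enable-mpi builds a parallel mdrun; the suffix keeps it
# distinct from a serial build.
./configure --enable-mpi --program-suffix=_mpi
make && make install

# Then run in parallel, e.g. on 4 processes:
mpirun -np 4 mdrun_mpi -v -deffnm topol
```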

Also, is your system big enough to allow efficient scaling?
-- 
------------------------------------------------------
Mr. Jussi Lehtola, M. Sc., Doctoral Student
Department of Physics, University of Helsinki, Finland
jussi.leht...@helsinki.fi, tel. 191 50632
------------------------------------------------------


_______________________________________________
gmx-users mailing list    gmx-users@gromacs.org
http://www.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php