Re: [gmx-users] Slow Runs

2011-01-28 Thread Justin A. Lemkul
Denny Frost wrote: I didn't do the install myself, so I'm not sure what options they used. What potentially could have gone wrong? Tons of things. Most directly, forgetting --enable-mpi during configuration, but then specifying the _mpi program suffix and continuing to compile mdrun. -
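For reference, a minimal GROMACS 4.x autoconf sequence that produces a genuinely MPI-enabled mdrun_mpi would look roughly like the following; the compiler wrapper name and install prefix are illustrative assumptions, not details from this thread:

    export CC=mpicc                       # MPI compiler wrapper (assumed name)
    ./configure --enable-mpi --program-suffix=_mpi --prefix=$HOME/gromacs-4.5.1
    make mdrun                            # only mdrun needs the MPI build
    make install-mdrun                    # installs mdrun_mpi alongside the serial tools

If --enable-mpi is forgotten, the last two steps still succeed, which is how a serial binary can end up named mdrun_mpi.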

Re: [gmx-users] Slow Runs

2011-01-28 Thread Justin A. Lemkul
Denny Frost wrote: I tried using mpiexec -n 8 mdrun_mpi, but I still can't get the output to say that it used anything but just 1 node. Is there something I was supposed to specify in the grompp command? No, grompp has nothing to do with parallelization (at least, not since version 3.3.3).
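Put differently, grompp stays serial and only writes the run input file; the process count is set entirely by the MPI launcher. A minimal two-step sketch using the file names from this thread (the rank count is assumed to match the allocation):

    grompp_d -f md.mdp -c ReadyForMD.gro -p top.top -o md.tpr
    mpiexec -n 8 mdrun_mpi -s md.tpr -deffnm md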

Re: [gmx-users] Slow Runs

2011-01-28 Thread Justin A. Lemkul
Denny Frost wrote: In the log file, when gromacs specifies "Nodes," does it mean processors? Yes. For instance, on my dual-core workstation, the "nodes" are correctly reported as 2. -Justin On Fri, Jan 28, 2011 at 1:44 PM, Justin A. Lemkul wrote: Denny
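A quick way to confirm the count is to search the .log file for the node figure mdrun reports; a rough check, assuming the log is called md.log:

    grep -i "nodes" md.log | head        # an 8-way MPI run should report 8 nodes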

Re: [gmx-users] Slow Runs

2011-01-28 Thread Denny Frost
In the log file, when gromacs specifies "Nodes," does it mean processors? On Fri, Jan 28, 2011 at 1:44 PM, Justin A. Lemkul wrote: > > > Denny Frost wrote: > >> I'm leaning toward the possibility that it is actually only running 8 >> copies of the same job on different processors. My question i

Re: [gmx-users] Slow Runs

2011-01-28 Thread Justin A. Lemkul
Denny Frost wrote: I'm leaning toward the possibility that it is actually only running 8 copies of the same job on different processors. My question is how does Gromacs 4.5 know how many processors it has available to parallelize a job? Is it specified in grompp or does it just detect it?

Re: [gmx-users] Slow Runs

2011-01-28 Thread Denny Frost
I'm leaning toward the possibility that it is actually only running 8 copies of the same job on different processors. My question is how does Gromacs 4.5 know how many processors it has available to parallelize a job? Is it specified in grompp or does it just detect it? On Fri, Jan 28, 2011 at 1:

Re: [gmx-users] Slow Runs

2011-01-28 Thread Justin A. Lemkul
Denny Frost wrote: Here's my grompp command: grompp_d -nice 0 -v -f md.mdp -c ReadyForMD.gro -o md.tpr -p top.top and my mdrun command is this: time mpiexec mdrun_mpi -np 8 -cpt 3 -nice 0 -nt 1 -s $PBS_O_WORKDIR/md.tpr -o $PBS_O_WORKDIR/mdDone.trr -x $PBS_O_WORKDIR/mdDone.xtc -c $PBS_

Re: [gmx-users] Slow Runs

2011-01-28 Thread Erik Marklund
Beside the point, but your checkpointing issue might be that mdrun tries to write the file not in $PBS_O_WORKDIR, but somewhere where you don't have enough disk quota. Putting "cd $PBS_O_WORKDIR" into your run script may be a good solution. Erik Denny Frost wrote 2011-01-28 21.25: Here's my grom
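A bare-bones PBS script in the spirit of that suggestion, changing into the submission directory before mdrun starts so the checkpoint file lands somewhere with quota; the resource request is a placeholder, not taken from the thread:

    #!/bin/bash
    #PBS -l nodes=1:ppn=8
    #PBS -l walltime=24:00:00
    cd $PBS_O_WORKDIR                     # output and checkpoints now go to the submit directory
    mpiexec -n 8 mdrun_mpi -cpt 3 -s md.tpr -deffnm md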

Re: [gmx-users] Slow Runs

2011-01-28 Thread Denny Frost
Here's my grompp command: grompp_d -nice 0 -v -f md.mdp -c ReadyForMD.gro -o md.tpr -p top.top and my mdrun command is this: time mpiexec mdrun_mpi -np 8 -cpt 3 -nice 0 -nt 1 -s $PBS_O_WORKDIR/md.tpr -o $PBS_O_WORKDIR/mdDone.trr -x $PBS_O_WORKDIR/mdDone.xtc -c $PBS_O_WORKDIR/mdDone.gro -e $P
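If the binary really is an MPI build, the process count belongs to the launcher rather than to mdrun, and the threading flag is not needed for an MPI binary; a possible corrected line, keeping the file names shown above (whether mpiexec needs an explicit -n 8 or picks the count up from PBS depends on the local MPI stack):

    time mpiexec -n 8 mdrun_mpi -cpt 3 -nice 0 \
        -s $PBS_O_WORKDIR/md.tpr -o $PBS_O_WORKDIR/mdDone.trr \
        -x $PBS_O_WORKDIR/mdDone.xtc -c $PBS_O_WORKDIR/mdDone.gro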

Re: [gmx-users] Slow Runs

2011-01-28 Thread Justin A. Lemkul
Denny Frost wrote: all 8 nodes are running at full capacity, though What is your mdrun command line? How did you compile it? What can happen is that something went wrong during installation, so you think you have an MPI-enabled binary, but it is simply executing 8 copies of the same job. -J

Re: [gmx-users] Slow Runs

2011-01-28 Thread Denny Frost
all 8 nodes are running at full capacity, though On Fri, Jan 28, 2011 at 1:13 PM, Justin A. Lemkul wrote: > > > Denny Frost wrote: > >> Here's what I've got: >> >> M E G A - F L O P S A C C O U N T I N G >> >> RF=Reaction-Field FE=Free Energy SCFE=Soft-Core/Free Energy >> T=Tabulated

Re: [gmx-users] Slow Runs

2011-01-28 Thread Justin A. Lemkul
Denny Frost wrote: Here's what I've got: M E G A - F L O P S A C C O U N T I N G RF=Reaction-Field FE=Free Energy SCFE=Soft-Core/Free Energy T=Tabulated W3=SPC/TIP3p W4=TIP4p (single or pairs) NF=No Forces Computing: M-Number M-Fl

Re: [gmx-users] Slow Runs

2011-01-28 Thread Denny Frost
Here's what I've got: M E G A - F L O P S A C C O U N T I N G RF=Reaction-Field FE=Free Energy SCFE=Soft-Core/Free Energy T=Tabulated W3=SPC/TIP3p W4=TIP4p (single or pairs) NF=No Forces Computing: M-Number M-Flops % Flops --

Re: [gmx-users] Slow Runs

2011-01-28 Thread Justin A. Lemkul
Denny Frost wrote: gromacs 4.5.1 Ah, what I posted was from 4.0.7. I wonder why that sort of output was eliminated in 4.5; it's quite useful. Sorry for leading you astray on that. No matter, the end of the .log file will still contain statistics about what's eating up all your simulati
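Those statistics sit at the very end of the .log file, in the cycle and time accounting table followed by the ns/day summary; something like the following pulls them out (log name assumed):

    tail -n 60 md.log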

Re: [gmx-users] Slow Runs

2011-01-28 Thread Denny Frost
gromacs 4.5.1 On Fri, Jan 28, 2011 at 12:40 PM, Erik Marklund wrote: > PME is still an Ewald sum. > > Erik > > Denny Frost skrev 2011-01-28 20.38: > > I don't have any domain decomposition information like that in my log file. > That's worrisome. The only other information I could find about P

Re: [gmx-users] Slow Runs

2011-01-28 Thread Erik Marklund
PME is still an Ewald sum. Erik Denny Frost wrote 2011-01-28 20.38: I don't have any domain decomposition information like that in my log file. That's worrisome. The only other information I could find about PME and Ewald is this set of lines: Table routines are used for coulomb: TRUE Tab

Re: [gmx-users] Slow Runs

2011-01-28 Thread Justin A. Lemkul
Denny Frost wrote: I don't have any domain decomposition information like that in my log file. That's worrisome. The only other information I could find about PME and Ewald is this set of lines: What version of Gromacs is this? -Justin Table routines are used for coulomb: TRUE Table

Re: [gmx-users] Slow Runs

2011-01-28 Thread Denny Frost
I don't have any domain decomposition information like that in my log file. That's worrisome. The only other information I could find about PME and Ewald is this set of lines: Table routines are used for coulomb: TRUE Table routines are used for vdw: FALSE Will do PME sum in reciprocal spac
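Whether domain decomposition and PME were actually set up as expected can be checked directly in the log with a pair of searches (log name assumed):

    grep -i "domain decomposition" md.log
    grep -i "pme" md.log | head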

Re: [gmx-users] Slow Runs

2011-01-28 Thread Justin A. Lemkul
Denny Frost wrote: I just realized that that was a very old mdp file. Here is an mdp file from my most recent run as well as what I think are the domain decomposition statistics. mdp file: title = BMIM+PF6 cpp = /lib/cpp constraints = hbonds integrat

Re: [gmx-users] Slow Runs

2011-01-28 Thread Denny Frost
icinal Chemistry and Drug Action >> >>Monash Institute of Pharmaceutical Sciences, Monash University >>381 Royal Parade, Parkville VIC 3010 >>dallas.war...@monash.edu >> >> >>+61 3 9903 9304 >>--

Re: [gmx-users] Slow Runs

2011-01-27 Thread Jussi Lehtola
On Thu, 27 Jan 2011 19:17:18 -0500 Chris Neale wrote: > In addition, you're only updating your neighbourlist every 40 ps. Surely, this should be *femto*seconds. > If you're going to use a 4 fs timestep, I suggest that you use > nstlist=5. Also, you appear to not be using any constraints while y
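Those two points amount to a handful of .mdp settings; the fragment below is only a sketch of the suggested combination (4 fs step, constrained bonds, neighbour-list update every 5 steps), written as a heredoc, and is not Denny's actual input file:

    cat > md_sketch.mdp <<'EOF'
    ; sketch of the settings suggested above, not the original input
    dt          = 0.004       ; 4 fs timestep
    nstlist     = 5           ; update the neighbour list every 5 steps (20 fs), not every 40 ps
    constraints = all-bonds   ; some form of bond constraints is needed with a 4 fs step
    EOF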

[gmx-users] Slow Runs

2011-01-27 Thread Chris Neale

Re: [gmx-users] Slow Runs

2011-01-27 Thread Justin A. Lemkul
macs.org>] *On Behalf Of *Denny Frost *Sent:* Friday, 28 January 2011 9:34 AM *To:* Discussion list for GROMACS users *Subject:* [gmx-users] Slow Runs I am taking over a project for a graduate student who did MD using Gromacs 3.3.3. I now run similar simulations with

Re: [gmx-users] Slow Runs

2011-01-27 Thread Denny Frost
omacs.org [mailto: > gmx-users-boun...@gromacs.org] *On Behalf Of *Denny Frost > *Sent:* Friday, 28 January 2011 9:34 AM > *To:* Discussion list for GROMACS users > *Subject:* [gmx-users] Slow Runs > > > > I am taking over a project for a graduate student who did MD using

RE: [gmx-users] Slow Runs

2011-01-27 Thread Dallas Warren
: Friday, 28 January 2011 9:34 AM To: Discussion list for GROMACS users Subject: [gmx-users] Slow Runs I am taking over a project for a graduate student who did MD using Gromacs 3.3.3. I now run similar simulations with Gromacs 4.5.1 and find that they run only about 1/2 to 1/3 as fast as the

[gmx-users] Slow Runs

2011-01-27 Thread Denny Frost
I am taking over a project for a graduate student who did MD using Gromacs 3.3.3. I now run similar simulations with Gromacs 4.5.1 and find that they run only about 1/2 to 1/3 as fast as the previous runs done in Gromacs 3.3.3. The runs have about the same number of atoms and both use opls force
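Before digging further, the slowdown can be quantified by comparing the performance summary both versions print at the end of their .log files; the paths here are placeholders:

    grep -H "Performance" gmx333_run/md.log gmx451_run/md.log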