Hi all,
I've been running 5 fs timestep simulations successfully without GPUs
(united-atom, HEAVYH). When continuing the same simulations on a GPU cluster
using the Verlet cutoff-scheme, they crash within 20 steps. Reducing the
timestep to 2 fs lets them run smoothly; however, I noticed the message:



Making this change manually led to crashing simulations, as nstcalclr,
nsttcouple and nstpcouple default to the value of nstlist. After defining
them all separately, I was able to determine that the simulation exploding
depended entirely on nstpcouple, and by lowering it to 5 (from the
default of 10) I was able to run simulations at a 5 fs timestep.
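
For reference, a rough sketch of the relevant settings as described above
(everything else in my mdp is omitted here, so treat this only as an
illustration of which parameters I changed, not my exact file):

    ; settings discussed above for the GPU / Verlet runs
    dt            = 0.005   ; 5 fs timestep
    cutoff-scheme = Verlet
    nstlist       = 50      ; neighbour-list update interval
    nstcalclr     = 50      ; set explicitly instead of inheriting from nstlist
    nsttcouple    = 50      ; temperature-coupling interval
    nstpcouple    = 5       ; pressure-coupling interval, lowered from the default of 10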

So, my questions: Is lowering nstpcouple a legitimate solution, or just a
band-aid?
The simulation runs with nstcalclr and nsttcouple set to 50, along with
nstlist. Is nstlist the only setting that should be increased when using
GPUs?

Thanks in advance,
-Trayder

P.S. The working mdp file:






