Good afternoon,
I'm using GROMACS version 4.0.5. My simulation system is a double-stranded
DNA (51 nucleotides) in a TIP3P water box, defined with a 0.9 nm margin
around the DNA, with 100 Na+ ions to neutralize the system. The
sequential commands used were:
1) pdb2gmx -f dsDNA.pdb -p dsDNA.top -o dsDNA.gro -ffamber99
2) editconf -f dsDNA.gro -o box.gro -d 0.9
3) genbox -cp box.gro -cs ffamber_tip3p.gro -o water.gro -p dsDNA.top
4) grompp -f em.mdp -c water.gro -p dsDNA.top -o Premin.tpr
5) genion -s Premin.tpr -o water-ions.gro -np 100
6) grompp -f em.mdp -c water-ions.gro -p dsDNA.top -o minIons.tpr
7) MINIMIZATION: mdrun -v -s minIons.tpr -o minIons_traj.trr -x minIons_traj.xtc -c minIons_final.gro -e minIons_ener.edr
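For the parallel run, step 7 was launched through MPI along these lines (a sketch: the exact mpirun syntax is an assumption, but the binary path and the "2"-suffixed file names match the attached log):
mpirun -np 8 /usr/local/gromacs-4.0.5-fftw3/bin/mdrun_mpi -v -s minIons2.tpr \
    -o minIons_traj2.trr -x minIons_traj2.xtc -c minIons_final2.gro -e minIons_ener2.edr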
The problem appears when I try to minimize the system. When I run the
calculation in parallel, it fails with the following error message:
Fatal error:
There is no domain decomposition for 8 nodes that is compatible with the
given box and a minimum cell size of 38.1122 nm
Change the number of nodes or mdrun option -rdd
Look in the log file for details on the domain decomposition
According to your web page, this error appears when a system is too small
to be parallelized. But my system is extremely big! In previous
calculations I could parallelize smaller systems, even one with just 8
nucleotides.
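As I understand the error message, the suggested workarounds would look like this (the -rdd value is only a placeholder; -rdd and -pd both appear in mdrun's option list below):
mpirun -np 8 mdrun_mpi -s minIons2.tpr -rdd 1.4   # cap the bonded-interaction distance used for DD
mpirun -np 8 mdrun_mpi -s minIons2.tpr -pd        # particle decomposition instead of DD
mpirun -np 4 mdrun_mpi -s minIons2.tpr            # fewer nodes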
I include the error file and the log file below, as well as the mdp file.
I would be grateful for any help. Thanks in advance,
Núria Alegret
University Rovira i Virgili - Tarragona (Spain)
----- error file -----
NNODES=8, MYRANK=4, HOSTNAME=maginet-ii188
NNODES=8, MYRANK=2, HOSTNAME=maginet-ii188
NNODES=8, MYRANK=1, HOSTNAME=maginet-ii188
NNODES=8, MYRANK=6, HOSTNAME=maginet-ii188
NNODES=8, MYRANK=7, HOSTNAME=maginet-ii188
NNODES=8, MYRANK=0, HOSTNAME=maginet-ii188
NODEID=0 argc=12
:-) G R O M A C S (-:
NODEID=2 argc=12
NODEID=4 argc=12
NODEID=1 argc=12
NODEID=6 argc=12
NNODES=8, MYRANK=5, HOSTNAME=maginet-ii188
NODEID=5 argc=12
NNODES=8, MYRANK=3, HOSTNAME=maginet-ii188
NODEID=3 argc=12
NODEID=7 argc=12
Gromacs Runs On Most of All Computer Systems
:-) VERSION 4.0.5 (-:
Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2008, The GROMACS development team,
check out http://www.gromacs.org for more information.
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either version 2
of the License, or (at your option) any later version.
:-) /usr/local/gromacs-4.0.5-fftw3/bin/mdrun_mpi (-:
Option Filename Type Description
------------------------------------------------------------
-s minIons2.tpr Input Run input file: tpr tpb tpa
-o minIons_traj2.trr Output Full precision trajectory: trr trj cpt
-x minIons_traj2.xtc Output, Opt! Compressed trajectory (portable xdr
format)
-cpi state.cpt Input, Opt. Checkpoint file
-cpo state.cpt Output, Opt. Checkpoint file
-c minIons_final2.gro Output Structure file: gro g96 pdb
-e minIons_ener2.edr Output Energy file: edr ene
-g md.log Output Log file
-dgdl dgdl.xvg Output, Opt. xvgr/xmgr file
-field field.xvg Output, Opt. xvgr/xmgr file
-table table.xvg Input, Opt. xvgr/xmgr file
-tablep tablep.xvg Input, Opt. xvgr/xmgr file
-tableb table.xvg Input, Opt. xvgr/xmgr file
-rerun rerun.xtc Input, Opt. Trajectory: xtc trr trj gro g96 pdb cpt
-tpi tpi.xvg Output, Opt. xvgr/xmgr file
-tpid tpidist.xvg Output, Opt. xvgr/xmgr file
-ei sam.edi Input, Opt. ED sampling input
-eo sam.edo Output, Opt. ED sampling output
-j wham.gct Input, Opt. General coupling stuff
-jo bam.gct Output, Opt. General coupling stuff
-ffout gct.xvg Output, Opt. xvgr/xmgr file
-devout deviatie.xvg Output, Opt. xvgr/xmgr file
-runav runaver.xvg Output, Opt. xvgr/xmgr file
-px pullx.xvg Output, Opt. xvgr/xmgr file
-pf pullf.xvg Output, Opt. xvgr/xmgr file
-mtx nm.mtx Output, Opt. Hessian matrix
-dn dipole.ndx Output, Opt. Index file
Option Type Value Description
------------------------------------------------------
-[no]h bool no Print help info and quit
-nice int 0 Set the nicelevel
-deffnm string Set the default filename for all file options
-[no]xvgr bool yes Add specific codes (legends etc.) in the output
xvg files for the xmgrace program
-[no]pd bool no Use particle decompostion
-dd vector 0 0 0 Domain decomposition grid, 0 is optimize
-npme int -1 Number of separate nodes to be used for PME, -1
is guess
-ddorder enum interleave DD node order: interleave, pp_pme or cartesian
-[no]ddcheck bool yes Check for all bonded interactions with DD
-rdd real 0 The maximum distance for bonded interactions with
DD (nm), 0 is determine from initial coordinates
-rcon real 0 Maximum distance for P-LINCS (nm), 0 is estimate
-dlb enum auto Dynamic load balancing (with DD): auto, no or yes
-dds real 0.8 Minimum allowed dlb scaling of the DD cell size
-[no]sum bool yes Sum the energies at every step
-[no]v bool yes Be loud and noisy
-[no]compact bool yes Write a compact log file
-[no]seppot bool no Write separate V and dVdl terms for each
interaction type and node to the log file(s)
-pforce real -1 Print all forces larger than this (kJ/mol nm)
-[no]reprod bool no Try to avoid optimizations that affect binary
reproducibility
-cpt real 15 Checkpoint interval (minutes)
-[no]append bool no Append to previous output files when continuing
from checkpoint
-[no]addpart bool yes Add the simulation part number to all output
files when continuing from checkpoint
-maxh real -1 Terminate after 0.99 times this time (hours)
-multi int 0 Do multiple simulations in parallel
-replex int 0 Attempt replica exchange every # steps
-reseed int -1 Seed for replica exchange, -1 is generate a seed
-[no]glas bool no Do glass simulation with special long range
corrections
-[no]ionize bool no Do a simulation including the effect of an X-Ray
bombardment on your system
Back Off! I just backed up md.log to ./#md.log.14#
Getting Loaded...
Reading file minIons2.tpr, VERSION 4.0.5 (single precision)
Loaded with Money
-------------------------------------------------------
Program mdrun_mpi, VERSION 4.0.5
Source code file: domdec.c, line: 5873
Fatal error:
There is no domain decomposition for 8 nodes that is compatible with the given
box and a minimum cell size of 38.1122 nm
Change the number of nodes or mdrun option -rdd
Look in the log file for details on the domain decomposition
-------------------------------------------------------
"Baby, It Aint Over Till It's Over" (Lenny Kravitz)
Error on node 0, will try to stop all the nodes
Halting parallel program mdrun_mpi on CPU 0 out of 8
gcq#285: "Baby, It Aint Over Till It's Over" (Lenny Kravitz)
--------------------------------------------------------------------------
mpirun has exited due to process rank 5 with PID 23201 on
node maginet-ii188 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
----- em.mdp -----
title = ssDNA minimization
cpp = /lib/cpp -traditional
define =
constraints = none
integrator = steep
dt = 0.002 ; ps !
nsteps = 500
nstlist = 10
ns_type = grid
rlist = 1.0
coulombtype = PME
rcoulomb = 1.0
rvdw = 1.4
fourierspacing = 0.12 ;
fourier_nx = 0
fourier_ny = 0
fourier_nz = 0
pme_order = 6 ;;
ewald_rtol = 1e-5
optimize_fft = yes;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;emtol = 1000.0
emstep = 0.01
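Note: the run of semicolons above comments the emtol line out, so mdrun falls back to the default em_tol = 10 (this is what the log below reports). If emtol = 1000.0 was intended, the line should read:
emtol = 1000.0 ; kJ mol^-1 nm^-1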
----- md.log -----
Log file opened on Wed Jun 15 16:20:45 2011
Host: maginet-ii188 pid: 23196 nodeid: 0 nnodes: 8
The Gromacs distribution was built Wed Dec 16 14:39:44 CET 2009 by
ortiz@maginet-ii176 (Linux 2.6.18-128.el5 x86_64)
:-) G R O M A C S (-:
Gromacs Runs On Most of All Computer Systems
:-) VERSION 4.0.5 (-:
Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2008, The GROMACS development team,
check out http://www.gromacs.org for more information.
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either version 2
of the License, or (at your option) any later version.
:-) /usr/local/gromacs-4.0.5-fftw3/bin/mdrun_mpi (-:
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- --- Thank You --- -------- --------
parameters of the run:
integrator = steep
nsteps = 500
init_step = 0
ns_type = Grid
nstlist = 10
ndelta = 2
nstcomm = 1
comm_mode = Linear
nstlog = 100
nstxout = 100
nstvout = 100
nstfout = 0
nstenergy = 100
nstxtcout = 0
init_t = 0
delta_t = 0.002
xtcprec = 1000
nkx = 560
nky = 780
nkz = 192
pme_order = 6
ewald_rtol = 1e-05
ewald_geometry = 0
epsilon_surface = 0
optimize_fft = TRUE
ePBC = xyz
bPeriodicMols = FALSE
bContinuation = FALSE
bShakeSOR = FALSE
etc = No
epc = No
epctype = Isotropic
tau_p = 1
ref_p (3x3):
ref_p[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref_p[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref_p[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compress (3x3):
compress[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compress[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compress[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
refcoord_scaling = No
posres_com (3):
posres_com[0]= 0.00000e+00
posres_com[1]= 0.00000e+00
posres_com[2]= 0.00000e+00
posres_comB (3):
posres_comB[0]= 0.00000e+00
posres_comB[1]= 0.00000e+00
posres_comB[2]= 0.00000e+00
andersen_seed = 815131
rlist = 1
rtpi = 0.05
coulombtype = PME
rcoulomb_switch = 0
rcoulomb = 1
vdwtype = Cut-off
rvdw_switch = 0
rvdw = 1.4
epsilon_r = 1
epsilon_rf = 1
tabext = 1
implicit_solvent = No
gb_algorithm = Still
gb_epsilon_solvent = 80
nstgbradii = 1
rgbradii = 2
gb_saltconc = 0
gb_obc_alpha = 1
gb_obc_beta = 0.8
gb_obc_gamma = 4.85
sa_surface_tension = 2.092
DispCorr = No
free_energy = no
init_lambda = 0
sc_alpha = 0
sc_power = 0
sc_sigma = 0.3
delta_lambda = 0
nwall = 0
wall_type = 9-3
wall_atomtype[0] = -1
wall_atomtype[1] = -1
wall_density[0] = 0
wall_density[1] = 0
wall_ewald_zfac = 3
pull = no
disre = No
disre_weighting = Conservative
disre_mixed = FALSE
dr_fc = 1000
dr_tau = 0
nstdisreout = 100
orires_fc = 0
orires_tau = 0
nstorireout = 100
dihre-fc = 1000
em_stepsize = 0.01
em_tol = 10
niter = 20
fc_stepsize = 0
nstcgsteep = 1000
nbfgscorr = 10
ConstAlg = Lincs
shake_tol = 0.0001
lincs_order = 4
lincs_warnangle = 30
lincs_iter = 1
bd_fric = 0
ld_seed = 1993
cos_accel = 0
deform (3x3):
deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
userint1 = 0
userint2 = 0
userint3 = 0
userint4 = 0
userreal1 = 0
userreal2 = 0
userreal3 = 0
userreal4 = 0
grpopts:
nrdf: 5.12534e+07
ref_t: 0
tau_t: 0
anneal: No
ann_npoints: 0
acc: 0 0 0
nfreeze: N N N
energygrp_flags[ 0]: 0
efield-x:
n = 0
efield-xt:
n = 0
efield-y:
n = 0
efield-yt:
n = 0
efield-z:
n = 0
efield-zt:
n = 0
bQMMM = FALSE
QMconstraints = 0
QMMMscheme = 0
scalefactor = 1
qm_opts:
ngQM = 0