On 8/23/12 5:10 PM, Clare wrote:
Dear Vitaly,

Here's the log file of the NVT simulation; it was killed for exceeding the
wall time. Hopefully it can still provide some useful information. Thank
you very much!

                          :-)  G  R  O  M  A  C  S  (-:

                  Good ROcking Metal Altar for Chronical Sinners

                             :-)  VERSION 4.5.5  (-:

         Written by Emile Apol, Rossen Apostolov, Herman J.C. Berendsen,
       Aldert van Buuren, Pär Bjelkmar, Rudi van Drunen, Anton Feenstra,
         Gerrit Groenhof, Peter Kasson, Per Larsson, Pieter Meulenhoff,
            Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz,
                 Michael Shirts, Alfons Sijbers, Peter Tieleman,

                Berk Hess, David van der Spoel, and Erik Lindahl.

        Copyright (c) 1991-2000, University of Groningen, The Netherlands.
             Copyright (c) 2001-2010, The GROMACS development team at
         Uppsala University & The Royal Institute of Technology, Sweden.
             check out http://www.gromacs.org for more information.

          This program is free software; you can redistribute it and/or
           modify it under the terms of the GNU General Public License
          as published by the Free Software Foundation; either version 2
              of the License, or (at your option) any later version.

                               :-)  mdrun_mpi  (-:


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- --- Thank You --- -------- --------

Input Parameters:
    integrator           = md
    nsteps               = 50000
    init_step            = 0
    ns_type              = Grid
    nstlist              = 5
    ndelta               = 2
    nstcomm              = 10
    comm_mode            = Linear
    nstlog               = 100
    nstxout              = 100
    nstvout              = 100
    nstfout              = 0
    nstcalcenergy        = 5
    nstenergy            = 100
    nstxtcout            = 0
    init_t               = 0
    delta_t              = 0.002
    xtcprec              = 1000
    nkx                  = 50
    nky                  = 50
    nkz                  = 100
    pme_order            = 4
    ewald_rtol           = 1e-05
    ewald_geometry       = 0
    epsilon_surface      = 0
    optimize_fft         = FALSE
    ePBC                 = xyz
    bPeriodicMols        = FALSE
    bContinuation        = FALSE
    bShakeSOR            = FALSE
    etc                  = V-rescale
    nsttcouple           = 5
    epc                  = No
    epctype              = Isotropic
    nstpcouple           = -1
    tau_p                = 1
    ref_p (3x3):
       ref_p[    0]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
       ref_p[    1]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
       ref_p[    2]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
    compress (3x3):
       compress[    0]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
       compress[    1]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
       compress[    2]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
    refcoord_scaling     = No
    posres_com (3):
       posres_com[0]= 0.00000e+00
       posres_com[1]= 0.00000e+00
       posres_com[2]= 0.00000e+00
    posres_comB (3):
       posres_comB[0]= 0.00000e+00
       posres_comB[1]= 0.00000e+00
       posres_comB[2]= 0.00000e+00
    andersen_seed        = 815131
    rlist                = 0.9
    rlistlong            = 0.9
    rtpi                 = 0.05
    coulombtype          = PME
    rcoulomb_switch      = 0
    rcoulomb             = 0.9
    vdwtype              = Cut-off
    rvdw_switch          = 0
    rvdw                 = 0.9
    epsilon_r            = 1
    epsilon_rf           = 1
    tabext               = 1
    implicit_solvent     = No
    gb_algorithm         = Still
    gb_epsilon_solvent   = 80
    nstgbradii           = 1
    rgbradii             = 1
    gb_saltconc          = 0
    gb_obc_alpha         = 1
    gb_obc_beta          = 0.8
    gb_obc_gamma         = 4.85
    gb_dielectric_offset = 0.009
    sa_algorithm         = Ace-approximation
    sa_surface_tension   = 2.05016
    DispCorr             = EnerPres
    free_energy          = no
    init_lambda          = 0
    delta_lambda         = 0
    n_foreign_lambda     = 0
    sc_alpha             = 0
    sc_power             = 0
    sc_sigma             = 0.3
    sc_sigma_min         = 0.3
    nstdhdl              = 10
    separate_dhdl_file   = yes
    dhdl_derivatives     = yes
    dh_hist_size         = 0
    dh_hist_spacing      = 0.1
    nwall                = 0
    wall_type            = 9-3
    wall_atomtype[0]     = -1
    wall_atomtype[1]     = -1
    wall_density[0]      = 0
    wall_density[1]      = 0
    wall_ewald_zfac      = 3
    pull                 = no
    disre                = No
    disre_weighting      = Conservative
    disre_mixed          = FALSE
    dr_fc                = 1000
    dr_tau               = 0
    nstdisreout          = 100
    orires_fc            = 0
    orires_tau           = 0
    nstorireout          = 100
    dihre-fc             = 1000
    em_stepsize          = 0.01
    em_tol               = 10
    niter                = 20
    fc_stepsize          = 0
    nstcgsteep           = 1000
    nbfgscorr            = 10
    ConstAlg             = Lincs
    shake_tol            = 0.0001
    lincs_order          = 4
    lincs_warnangle      = 30
    lincs_iter           = 1
    bd_fric              = 0
    ld_seed              = 1993
    cos_accel            = 0
    deform (3x3):
       deform[    0]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
       deform[    1]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
       deform[    2]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
    userint1             = 0
    userint2             = 0
    userint3             = 0
    userint4             = 0
    userreal1            = 0
    userreal2            = 0
    userreal3            = 0
    userreal4            = 0
grpopts:
    nrdf:     13166.4     4039.82     51459.8
    ref_t:         300         300         300
    tau_t:         0.1         0.1         0.1
anneal:          No          No          No
ann_npoints:           0           0           0
    acc:            0           0           0
    nfreeze:           N           N           N
    energygrp_flags[  0]: 0
    efield-x:
       n = 0
    efield-xt:
       n = 0
    efield-y:
       n = 0
    efield-yt:
       n = 0
    efield-z:
       n = 0
    efield-zt:
       n = 0
    bQMMM                = FALSE
    QMconstraints        = 0
    QMMMscheme           = 0
    scalefactor          = 1
qm_opts:
    ngQM                 = 0
Initializing Domain Decomposition on 48 nodes
Dynamic load balancing: auto
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
     two-body bonded interactions: 0.574 nm, LJ-14, atoms 661 682
   multi-body bonded interactions: 0.574 nm, Proper Dih., atoms 661 682
Minimum cell size due to bonded interactions: 0.631 nm
Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.771 nm
Estimated maximum distance required for P-LINCS: 0.771 nm
This distance will limit the DD cell size, you can override this with -rcon
Guess for relative PME load: 0.58
Using 0 separate PME nodes
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 48 cells with a minimum initial size of 0.964 nm
The maximum allowed number of cells is: X 8 Y 8 Z 16
Domain decomposition grid 6 x 4 x 2, separate PME nodes 0
PME domain decomposition: 6 x 8 x 1
Domain decomposition nodeid 0, coordinates 0 0 0

Using two step summing over 6 groups of on average 8.0 processes

Table routines are used for coulomb: TRUE
Table routines are used for vdw:     FALSE
Will do PME sum in reciprocal space.

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
-------- -------- --- Thank You --- -------- --------

Will do ordinary reciprocal space Ewald sum.
Using a Gaussian width (1/beta) of 0.288146 nm for Ewald
Cut-off's:   NS: 0.9   Coulomb: 0.9   LJ: 0.9
Long Range LJ corr.: <C6> 9.2537e-04
System total charge: 0.000
Generated table with 950 data points for Ewald.
Tabscale = 500 points/nm
Generated table with 950 data points for LJ6.
Tabscale = 500 points/nm
Generated table with 950 data points for LJ12.
Tabscale = 500 points/nm
Generated table with 950 data points for 1-4 COUL.
Tabscale = 500 points/nm
Generated table with 950 data points for 1-4 LJ6.
Tabscale = 500 points/nm
Generated table with 950 data points for 1-4 LJ12.
Tabscale = 500 points/nm

Enabling SPC-like water optimization for 8577 molecules.

Configuring nonbonded kernels...
Configuring standard C nonbonded kernels...
Testing x86_64 SSE2 support... present.


Removing pbc first time

Initializing Parallel LINear Constraint Solver

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess
P-LINCS: A Parallel Linear Constraint Solver for molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 116-122
-------- -------- --- Thank You --- -------- --------

The number of constraints is 7603
There are inter charge-group constraints,
will communicate selected coordinates each lincs iteration
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Miyamoto and P. A. Kollman
SETTLE: An Analytical Version of the SHAKE and RATTLE Algorithms for Rigid
Water Models
J. Comp. Chem. 13 (1992) pp. 952-962
-------- -------- --- Thank You --- -------- --------


Linking all bonded interactions to atoms
There are 19808 inter charge-group exclusions,
will use an extra communication step for exclusion forces for PME

The initial number of communication pulses is: X 1 Y 1 Z 1
The initial domain decomposition cell size is: X 1.33 nm Y 2.00 nm Z 8.00 nm

The maximum allowed distance for charge groups involved in interactions is:
                  non-bonded interactions           0.900 nm
             two-body bonded interactions  (-rdd)   0.900 nm
           multi-body bonded interactions  (-rdd)   0.900 nm
   atoms separated by up to 5 constraints  (-rcon)  1.333 nm

When dynamic load balancing gets turned on, these settings will change to:
The maximum number of communication pulses is: X 1 Y 1 Z 1
The minimum size for domain decomposition cells is 0.900 nm
The requested allowed shrink of DD cells (option -dds) is: 0.80
The allowed shrink of domain decomposition cells is: X 0.67 Y 0.45 Z 0.11
The maximum allowed distance for charge groups involved in interactions is:
                  non-bonded interactions           0.900 nm
             two-body bonded interactions  (-rdd)   0.900 nm
           multi-body bonded interactions  (-rdd)   0.900 nm
   atoms separated by up to 5 constraints  (-rcon)  0.900 nm


Making 3D domain decomposition grid 6 x 4 x 2, home cell index 0 0 0
Center of mass motion removal mode is Linear
We have the following groups for center of mass motion removal:
   0:  rest

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
G. Bussi, D. Donadio and M. Parrinello
Canonical sampling through velocity rescaling
J. Chem. Phys. 126 (2007) pp. 014101
-------- -------- --- Thank You --- -------- --------



The log file indicates the job never really started: initialization finished, but no MD steps were ever logged. Either mdrun_mpi itself is defective, your MPI implementation does not work in general, or some other system-level error halted the job. There are no errors from GROMACS here, so the source of the problem is external to GROMACS. Try getting simple example MPI programs to work, and discuss the issue with your sysadmins.
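As a starting point, a minimal MPI test program along these lines (a generic
sketch, nothing GROMACS-specific) should print one line per rank when
compiled with mpicc and launched the same way you launch mdrun_mpi:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        /* Set up the MPI environment */
        MPI_Init(&argc, &argv);

        /* Identify this process and the total process count */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        printf("Hello from rank %d of %d\n", rank, size);

        /* Shut down MPI cleanly */
        MPI_Finalize();
        return 0;
    }

If, say, "mpirun -np 48 ./hello" (adjust to whatever launcher your queue
system uses) hangs or dies the same way your mdrun_mpi job did, the problem
is in the MPI setup or the cluster, not in GROMACS.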

-Justin

--
========================================

Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

========================================
