Hi,
The -testverlet option is only for testing (as the name implies).
Please set the mdp option cutoff-scheme = Verlet
Also please update to 4.6.3, as this potential issue might have already been
resolved.
With the Verlet scheme the CPU and GPU should give the same, correct or
incorrect, result.
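For illustration, the mdp change amounts to something like this (the buffer line is optional and the value is only an example; mdrun then sets rlist itself):
cutoff-scheme       = Verlet
verlet-buffer-drift = 0.005
After editing the mdp, rerun grompp so the tpr file uses the new scheme.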
vious answers seem helpful in my case.
>
> I'm trying to reproduce the viscosity calculation of SPC water by Berk Hess
> (JCP 116, 2002) using cos_acceleration in Gromacs. The answer I get is 2
> orders of magnitude out.
>
> My topology file and parameter file are appended at the b
Hi,
Twin-range will lead to extra errors, which could be negligible or not.
But the errors should be the same and have the same effects in different
versions.
I think nothing has changed in the twin-range treatment from 4.5 to 4.6, but I
am not 100% sure.
Which version with twin-range matches y
Hi,
You're using thread-MPI, but you should compile with MPI. And then start as
many processes as total GPUs.
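For example, for 2 nodes with 2 GPUs each you would use 4 ranks in total; something along these lines (the mdrun binary may be called mdrun_mpi or just mdrun, depending on the suffix you configure):
cmake -DGMX_MPI=ON ../gromacs-src
make -j 8
mpirun -np 4 mdrun_mpi -s topol.tpr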
Cheers,
Berk
> From: chris.ne...@mail.utoronto.ca
> To: gmx-users@gromacs.org
> Date: Wed, 24 Apr 2013 17:08:28 +
> Subject: [gmx-users] How to use multiple nodes, each with 2 CPUs
Hi,
The PME settings you mention won't make any difference.
I don't see anything that can explain the difference.
But are you sure that the difference is statistically relevant?
How did you determine sigma?
There could be long time correlations in your system.
Have you checked the temperature? You
Hi,
I don't know enough about the details of Lustre to understand what's going on
exactly.
But I think mdrun can't do more than check the return value of fsync and
believe that the file is completely flushed to disk. Possibly Lustre does some
syncing, but doesn't actually flush the file physically.
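For illustration, the check on the Gromacs side boils down to something like this (a simplified sketch, not the actual checkpointing code):

#include <stdio.h>
#include <unistd.h>

/* Simplified sketch: flush a checkpoint-like file and check that fsync
 * reports success. This is all mdrun can verify; whether the data really
 * reaches the disks is up to the file system (e.g. Lustre). */
static int flush_to_disk(FILE *fp)
{
    if (fflush(fp) != 0)            /* push stdio buffers into the kernel */
    {
        return -1;
    }
    if (fsync(fileno(fp)) != 0)     /* ask the kernel to write to the device */
    {
        return -1;
    }
    return 0;                       /* as far as we can tell, it is on disk */
}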
cutoff-scheme = Verlet
and run grompp to get a new tpr file.
Cheers,
Berk
> From: iitd...@gmail.com
> Date: Thu, 28 Mar 2013 14:57:16 +0530
> Subject: Re: [gmx-users] no CUDA-capable device is detected
> To: gmx-users@gromacs.org
>
> On Thu, Mar 28, 2013 at 2:41 PM, Berk Hes
Hi,
The code compiled, so the compiler is not the issue.
I guess mdrun picked up GPU 0, which it should have ignored. You only want to
use GPU 1.
Could you try running:
mdrun -ntmpi 1 -gpu_id 1
Cheers,
berk
> Date: Thu, 28 Mar 2013 10:51:58 +0200
> Subject: Re: [gmx-users] no CUDA-capable d
Hi,
Gromacs calls fsync for every checkpoint file written:
fsync() transfers ("flushes") all modified in-core data of (i.e., modified
buffer cache pages for) the file referred to by the file descriptor fd to
the disk device (or other permanent storage device) so that
> Date: Thu, 7 Mar 2013 14:54:39 +0200
> From: eyurt...@abo.fi
> To: gmx-users@gromacs.org
> Subject: RE: [gmx-users] Re: [gmx-developers] Gromacs 4.6.1 ATLAS/CUDA
> detection problems...
>
>
>
> On Thu, 7 Mar 2013, Berk Hess wrote:
>
> >
Hi,
Note that the linear algebra in Gromacs is limited to eigenvalue solving in two
analysis tools; the MD engine does not use any (large) linear algebra.
So linear algebra library performance is irrelevant, except for cases where
users would want to repeatedly use one of the two tools with
Hi,
There are two scripts make_gromos_rtp in the scripts directory which were used
to convert Gromos AA topologies to rtp entries.
Cheers,
Berk
> From: stephane.a...@cea.fr
> To: gmx-users@gromacs.org
> Date: Thu, 14 Feb 2013 11:34:59 +
> Subject:
Hi,
With tau_t=2 the sd1 integrator should be ok.
But in 4.6.1, which will be released this week, the performance issues with the
sd integrator are fixed.
Cheers,
Berk
> Date: Wed, 13 Feb 2013 09:19:46 +0300
> From: jmsstarli...@gmail.com
> To: gmx-users@gromacs.org
> Subject: [gmx-users] on
Hi,
You don't need SLI for multi-GPU simulations in Gromacs.
But you do need a real MPI library and configure Gromacs with -DGMX_MPI=on
Then you need to start Gromacs with (depending on your MPI library):
mpirun -np #GPUs mdrun
If anything, the performance of Gromacs should have gotten better fro
Hi,
All AMBER force fields in Gromacs which are also available in AMBER have been
validated against energies from the AMBER package.
Cheers,
Berk
> Date: Wed, 6 Feb 2013 13:48:13 +0200
> From: g...@bioacademy.gr
> To: gmx-users@gromacs.org
> Subject: [
ns/Cmake#MPI_build
> > with
> >
> > cmake -DGMX_MPI=ON ../gromacs-src
> > make -j 8
> >
> > I did not set anything else.
> >
> >
> > 2013/2/5 Roland Schulz
> >>
> >>> On Tue, Feb 5, 2013 at 8:58 AM, Berk Hess wrote
tual-dtor -Wall -Wno-unused
> -Wunused-value -Wno-unknown-pragmas -fomit-frame-pointer
> -funroll-all-loops -fexcess-precision=fast -O3 -DNDEBUG
>
> I will try your workaround, thanks!
>
> 2013/2/5 Berk Hess
>
> >
> > OK, then this is an unhandled case.
> > Stra
> > > address sizes : 36 bits physical, 48 bits virtual
> > > power management:
> > >
> > >
> > > It also does not work on the local cluster, the output in the .log file
> > is:
> > >
> > > Detecting CPU-specific acceleration.
> > > Prese
compile time: AVX_128_FMA
> Table routines are used for coulomb: FALSE
> Table routines are used for vdw: FALSE
>
> I am not too sure about the details for that setup, but the brand looks
> about right.
> Do you need any other information?
> Thanks for looking into it!
>
> 20
Hi,
This looks like our CPU detection code failed and the result is not handled
properly.
What hardware are you running on?
Could you mail the 10 lines from the md.log file following: "Detecting
CPU-specific acceleration."?
Cheers,
Berk
> Date: Tue,
Hi,
Performance in single precision on Tesla cards is often lower than on GTX cards.
The only reasons to buy Tesla cards are double precision, reliability and fanless
cards for clusters.
I have a GTX 660 Ti, which is much cheaper than a GTX 670, but should have
nearly the same
performance in Gromacs.
ameters as it
> is the forces that are "normally" switched off in both CHARMM and NAMD,
> although both options are available there. The difference I am told results
> in a smaller area/lipid for the bilayers.
>
> Best,
> Jesper
>
> On Jan 24, 2013, at 1:09
Hi,
I am especially not a fan of switching the potential, as that produces artifacts.
Shifting is less harmful, but unnecessary in nearly all cases.
But I do realize some force fields require this (although you can probably find
a cut-off
setting at which results would be very similar).
Another
Hi,
The virial has no meaning per atom.
And you can't get the virial per atom out of Gromacs, it is never calculated in
that way (see the manual for details).
A special local pressure version of Gromacs exists, but even there you won't
get a virial per atom.
Cheers,
Berk
some
> > non-thread-parallel code is the reason and we hope to have a fix for 4.6.0.
> >
> > For updates, follow the issue #1211.
> >
> > Cheers,
> >
> > --
> > Szilárd
> >
> >
> > On Wed, Jan 16, 2013 at 4:45 PM, Berk Hess wrote:
> >
k
>
> On Wed, Jan 16, 2013 at 3:44 PM, Berk Hess wrote:
>
> >
> > Hi,
> >
> > Unfortunately this is not a bug, but a feature!
> > We made the non-bondeds so fast on the GPU that integration and
> > constraints take more time.
> > The sd1 integr
Hi,
Unfortunately this is not a bug, but a feature!
We made the non-bondeds so fast on the GPU that integration and constraints
take more time.
The sd1 integrator is almost as fast as the md integrator, but slightly less
accurate.
In most cases that's a good solution.
I closed the redmine issue.
> Date: Thu, 29 Nov 2012 08:58:50 -0500
> From: jalem...@vt.edu
> To: gmx-users@gromacs.org
> Subject: Re: [gmx-users] Electrostatic field in atoms - GROMACS is not
> reading table interaction functions for bonded interactions
>
>
>
> On 11/29/12 8:18 AM
Hi,
As the new Verlet cut-off scheme, also used on the GPU, uses separate non-bonded
coordinate and force buffers, implementing this will require very few code
changes.
All real variables in the nbnxn_atomdata_t data structure and in all code
operating
on them should be replaced by floats. W
x-users] Subject: RE: Discontinuity in first time-step velocities
> for diatomic molecule locally coupled to thermal drain
>
> tcoupl =v-rescale
> See grompp .mdp below
> Inon Sharony
> ינון שרוני
>
> "Berk Hess [via GROMACS]" כתב:
>
>
> Hi,
>
> You
Hi,
This is the same issue as in your other mail: ref_t should be > 0.
Cheers,
Berk
> Date: Tue, 3 Jul 2012 17:38:37 +0300
> From: inons...@tau.ac.il
> To: gmx-users@gromacs.org
> Subject: [gmx-users] Nose-Hoover does not couple
>
> I'm performing a ver
Hi,
You can't have ref_t=0 with Nose-Hoover.
We should add a check for this.
Cheers,
Berk
> Date: Thu, 28 Jun 2012 11:15:15 +0300
> From: inons...@tau.ac.il
> To: gmx-users@gromacs.org
> Subject: [gmx-users] Discontinuity in first time-step velocities for diatomic
> molecule locally coupled
Hi,
I swapped two arguments to a function call. I have fixed it and it should
appear soon
in the public repository. You can find the fix below.
Cheers,
Berk
diff --git a/src/kernel/topshake.c b/src/kernel/topshake.c
index c5f3957..78961c5 100644
--- a/src/kernel/topshake.c
+++ b/src/kernel/to
" but argument is of
> type "real *"
> make[3]: *** [gmx_bar.lo] Error 1
>
> Best,
> Tom
>
> On 03/20/2012 03:15 AM, gmx-users-requ...@gromacs.org wrote:
> > -- Message: 3 Date: Mon, 19 Mar 2012
> > 18:34:31 +0100 From: Berk Hess
Hi,
Yes, there is a problem with different temperature variables being single and
double precision.
Does the one line change below fix the problem?
Cheers,
Berk
-if ( ( *temp != barsim->temp) && (*temp > 0) )
+if ( !gmx_within_tol(temp,barsim->temp,GMX_FLOAT_EPS) && (*temp > 0) )
>
Hi all,
The NAIS Centre in Edinburgh is holding a Workshop on State-of-the-Art
Algorithms for Molecular Dynamics on May 2-4.
The days before this workshop, April 30-May 2, David Hardy from NAMD and I will
organize a tutorial on the use of GPUs and parallel computing. Here, among
other things,
Hi,
For systems of your size, they should run about equally fast.
Berk
> From: davidmcgiv...@gmail.com
> To: gmx-users@gromacs.org
> Subject: Re: [gmx-users] 4 x Opteron 12-core or 4 x Xeon 8-core ?
> Date: Thu, 20 Jan 2011 14:39:14 +0100
>
> Dear Carsten,
>
> Thanks for the advice!
>
> The
romacs.org
Subject: [gmx-users] RE: Important: Bugs in NEMD calculation
Date: Wed, 19 Jan 2011 20:43:20 +0100
From: Berk Hess
Subject: RE: [gmx-users] Important: Bugs in NEMD calculation
To: Discussion list for GROMACS users
Message-ID:
Content-Type: text/plain; charset="iso-8859-1"
> Date: Wed, 19 Jan 2011 20:52:13 +0100
> From: sp...@xray.bmc.uu.se
> To: gmx-users@gromacs.org
> Subject: Re: [gmx-users] Important: Bugs in NEMD calculation
>
> On 2011-01-19 20.43, Berk Hess wrote:
> >
> >
> > > Date: Wed, 19 Jan 2011 19:13:12
> Date: Wed, 19 Jan 2011 19:13:12 +0100
> From: sp...@xray.bmc.uu.se
> To: gmx-users@gromacs.org
> Subject: Re: [gmx-users] Important: Bugs in NEMD calculation
>
> On 2011-01-19 18.36, Xiaohu Li wrote:
> > Hi, All,
> > I've found a bug in the NEMD calculation for viscosity. This has
> > b
Yes.
The pow function is expensive though. The code will run much faster
if you can use rinvsix, e.g. by checking for 2*rinvsix > c6/c12.
(I forgot the factor 2 in my previous mail).
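For illustration, with made-up variable names (not the actual kernel code), the test could look like this:

/* Returns 1 when r is inside the LJ minimum (r < r_min), using
 * rinvsix = 1/r^6 instead of an expensive pow() call.
 * At the minimum of V(r) = c12/r^12 - c6/r^6 one has 2*c12/r^6 = c6,
 * so r < r_min exactly when 2*rinvsix > c6/c12. */
static int inside_lj_minimum(double r, double c6, double c12)
{
    double rinvsq  = 1.0/(r*r);
    double rinvsix = rinvsq*rinvsq*rinvsq;

    return 2.0*rinvsix*c12 > c6;   /* same as 2*rinvsix > c6/c12, without the division */
}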
Berk
> From: makoto-yon...@aist.go.jp
> To: gmx-users@gromacs.org
> Date: Tue, 11 Jan 2011 10:10:56 +0900
> Subjec
> From: makoto-yon...@aist.go.jp
> To: gmx-users@gromacs.org
> Date: Mon, 10 Jan 2011 23:57:46 +0900
> Subject: [gmx-users] truncated LJ potential
>
> Dear David and Hess:
>
> Thanks a lot for quick replies.
>
> >> please look into gromacs' table potential functionality, it is described
> >>
> Date: Mon, 10 Jan 2011 14:04:34 +0100
> From: sp...@xray.bmc.uu.se
> To: gmx-users@gromacs.org
> Subject: Re: [gmx-users] truncated LJ potential
>
> On 2011-01-10 13.39, Makoto Yoneya wrote:
> > Dear GROMACS experts:
> >
> > I'd like to use a modified Lennard-Jones potential (smoothly truncat
Cheers and many thanks for the attention, dear Berk.
Alan
From: Berk Hess
Subject: RE: [gmx-users] RE: [ atomtypes ] are not case sensitive?
Hi,
I don't know if any thinking went into the (non) case sensitivity of atom types.
Nearly all string comparisons in Gromacs are not case sensitive.
F
Hi,
I think the problem is epsilon_r=80, which reduces your electrostatic
interactions
by a factor of 80.
Berk
> Date: Mon, 13 Dec 2010 17:46:14 -0500
> From: jalem...@vt.edu
> To: gmx-users@gromacs.org
> Subject: Re: [gmx-users] pathologically expanding box
>
>
>
> Greg Bowman wrote:
> >
file generated by the options
> mention on PC would work on a server with older version of Gromacs
> i.e. 4.0...
>
> Ram
>
> On Mon, Dec 13, 2010 at 6:04 PM, Berk Hess wrote:
> > Sorry, I mistook a million ps for a millisecond, this is a microsecond.
> > The
Sorry, I mistook a million ps for a millisecond, this is a microsecond.
The maximum number of steps in version 4.0 is INT_MAX, which is 2,147,483,647.
From the name of your tpr file it seems you are not exceeding this,
so I don't know what's wrong exactly.
But for this reason (and many other rea
Hi,
I fixed the bug for 4.5.4.
If you want an unlimited number of steps, use:
tpbconv -nsteps -1
Berk
From: g...@hotmail.com
To: gmx-users@gromacs.org
Subject: RE: [gmx-users] Re: tpbconv extension
Date: Mon, 13 Dec 2010 17:44:53 +0100
Hi,
No this is actually a bug in tpbconv.
-nsteps
Hi,
No, this is actually a bug in tpbconv.
-nsteps will not work, because that uses a normal int, not a 64-bit integer.
-dt should work, but on line 531 of src/kernel/tpbconv.c (int) should be
replaced by (gmx_large_int_t).
But are you sure you want to add a millisecond to your simulation time?
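Just to illustrate the size of the numbers involved (assuming a typical 2 fs time step; your dt may differ):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    double dt_ps    = 0.002;              /* assumed time step: 2 fs   */
    double extra_ps = 1e9;                /* one millisecond = 1e9 ps  */
    double nsteps   = extra_ps / dt_ps;   /* 5e11 steps                */

    /* A plain 32-bit int tops out at ~2.1e9, so it overflows by more than
     * two orders of magnitude; a 64-bit gmx_large_int_t does not. */
    printf("steps needed: %g, INT_MAX: %d\n", nsteps, INT_MAX);
    return 0;
}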
Hi,
Note that it is impossible to uniquely decompose a free energy (difference),
which is what a PMF is, into different energy or force components.
A free energy is not simply the sum of energy terms, but is the result of how
different energy terms together affect the accessible phase space.
The effect
> Subject: Re: [gmx-users] Hard Spheres
> From: sascha.hem...@bci.tu-dortmund.de
> To: gmx-users@gromacs.org
> Date: Thu, 9 Dec 2010 08:56:54 +0100
>
> > On Wed, Dec 8, 2010 at 4:01 AM, Sascha Hempel
> > wrote:
> > Hi all!
> >
> > I am trying to add some hard spheres t
Hi,
There have been some issues with version 4.5.1 and energy files and
continuation.
But I don't recall them being this fatal.
Anyhow, it would be better to upgrade to version 4.5.3, which contains several
bug fixes.
Berk
From: mstu...@slb.com
To: gmx-users@gromacs.org
Date: Wed, 8 Dec 2010
Hi,
Unfortunately constraints cannot be applied to virtual sites and grompp apparently
does not check for this. I will add a check.
Constraints between virtual sites can lead to very complex constraint equations
between the masses involved. Thus the general case is difficult to implement.
Y
Hi,
Maybe it is not so clear from the topology table in the manual, but when you
supply sigma and epsilon as non-bonded parameters, as for OPLS, the
nonbond_params section also expects sigma and epsilon.
So it seems you cannot set only C6 to zero.
However, there is an undocumented trick: if
Hi,
The integrator is not the issue. The cut-off setup is.
You should use interactions that go smoothly to zero with a buffer region in
the neighborlist.
Use for instance shift for LJ interactions. PME could be switched, but that is
a much smaller
contribution. Use rlist about 0.3 nm larger than your cut-offs.
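As an illustration only (the numbers are example values, adjust them to your force field):
vdwtype      = Shift
rvdw_switch  = 0.9
rvdw         = 1.0
coulombtype  = PME
rcoulomb     = 1.0
rlist        = 1.3
nstlist      = 10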
Hi,
Note that grompp prints the atomtype that causes the problems.
You should copy the last line of share/top/charmm27.ff/gb.itp and rename MCH3
to MCH3S.
I have fixed this for 4.5.3 and I have also made the grompp error messages
clearer.
Berk
From: zhn...@gmail.com
To: gmx-users@gromacs.org
I forgot to say that for the water models the rtp entries are only used to
recognize
the atoms. For the topology the itp files in the .ff dir are used.
Berk
From: g...@hotmail.com
To: gmx-users@gromacs.org
Subject: RE: [gmx-users] Single atom charge group implementation for
CHARMM27 in 4.5.
Hi,
I saw your message, but I wanted to discuss with others before answering.
I have not had a chance for that yet, but I can answer anyhow.
1. This is a mistake. The person converting the original Charmm files edited
his conversion
script, but the "special cases" can not be treated with this s
= 1.0
ewald_geometry  = 3dc
nwall           = 2
wall_type       = 9-3
wall_r_linpot   = 1
wall_atomtype   = opls_966 opls_968
wall_density    = 9-3 9-3
wall_ewald_zfac = 3
pbc             = xy
fourierspacing  = 0.18
Regards
Vinoth
On Tue, Nov 2, 2010 at 3:41 PM, Berk Hess wrote
Well, it seems that any double precision energy file can not be read by 4.5.1
or 4.5.2 code.
This is such a serious issue that we should bring out 4.5.3 within a week.
If you only need the box, g_traj can produce that.
If you need other energies you can use a 4.5.1 g_energy for the moment.
Berk
From: kmvin...@gmail.com
To: gmx-users@gromacs.org
Hi Berk
Thank you once again. How can I use thick layers and what is the procedure? Can
you explain a bit more about this?
Regards
Vinoth
On Tue, Nov 2, 2010 at 3:18 PM, Berk Hess wrote:
Hi,
I have not heard about such issues, but it might
t work for pbc xy. any
help is highly appreciated.
Regards
Vinoth
On Tue, Nov 2, 2010 at 1:37 PM, Berk Hess wrote:
Hi,
With wall_r_linpot your wall potential is linear from 1 nm downwards.
Since the LJ force is negative at 1 nm, your atoms are attracted to the walls.
But why not simp
Hi,
With wall_r_linpot your wall potential is linear from 1 nm downwards.
Since the LJ force is negative at 1 nm, your atoms are attracted to the walls.
But why not simply use two interfaces? You get double the sampling for free
and you do not have to bother with complicated wall setups.
Berk
Hi,
Another comment on your interaction settings.
You did not mention whether you are using shift or switch for vdw.
But I guess that both probably don't match exactly what Charmm does.
Since the switching range is so long and this is where a large part
of the dispersion attraction acts, this might have
Hi,
You have very strange and complex cut-off settings in Gromacs.
What Charmm settings are you trying to mimic?
Berk
> Date: Thu, 21 Oct 2010 15:03:51 +0200
> From: jakobtorwei...@tuhh.de
> To: gmx-users@gromacs.org
> Subject: [gmx-users] CHARMM36 lipid bilayers
>
> Dear gmx-users,
>
> recen
Hi,
We haven't observed any problems running with threads over 24 core AMD nodes
(4x6 cores).
Berk
> From: ckut...@gwdg.de
> Date: Thu, 21 Oct 2010 12:03:00 +0200
> To: gmx-users@gromacs.org
> Subject: [gmx-users] Gromacs 4.5.1 on 48 core magny-cours AMDs
>
> Hi,
>
> does anyone have experie
Hi,
You can simply use plain Unix cat; that will be much faster and does not use
memory.
If you want the time of the frames right, run trjconv with the right option on
the catted
frames.
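For example (file names are placeholders; -t0 and -timestep are the trjconv options I mean, and the values shown assume 2 ps between frames):
cat part1.trr part2.trr part3.trr > all.trr
trjconv -f all.trr -o all_fixed.trr -t0 0 -timestep 2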
Berk
> Date: Wed, 13 Oct 2010 08:41:17 +0200
> From: sp...@xray.bmc.uu.se
> To: gmx-users@gromacs.org
> Su
Hi,
Have you checked that your numbers are converged?
Such differences can easily occur when you don't sample enough.
You will have to try very many orientations before you hit the right one
which satisfies all hydrogen bonds.
I tested PME a long time ago. Then it worked correctly.
But it could
.com
> To: gmx-users@gromacs.org
>
> Hi,
>
> thanks for fixing this, Berk. Silly me, I did not include RESOLVED
> bugs in my original search and therefore missed it.
>
> Ondrej
>
>
> On Thu, Oct 7, 2010 at 12:38, Berk Hess wrote:
> > Hi,
> >
Hi,
It's bugzilla 579:
http://bugzilla.gromacs.org/show_bug.cgi?id=579
Actually the title is wrong: the pressure was correct, but the pressure scaling,
and thus the density, was not.
Berk
> Date: Thu, 7 Oct 2010 12:28:15 +0200
> Subject: Re: [gmx-users] Re: Problem with pressure coupling
> From: ondrej
> Date: Wed, 6 Oct 2010 11:53:56 +0200
> Subject: Re: [gmx-users] charmm tip5p in Gmx 4.5.2
> From: szilard.p...@cbr.su.se
> To: gmx-users@gromacs.org
>
> Hi,
>
> > Does anyone have an idea about what time the Gmx 4.5.2 will be released?
>
> Soon, if everything goes well in a matter of days.
Hi,
We had issues with gcc 4.1 series compilers, but we are no longer sure if there
really are issues in 4.1.2 and 4.1.3. I would surely avoid 4.1.0 and 4.1.1.
But we found that one or two issues in Gromacs only showed up with 4.1.2 and
4.1.3, and these were problems in Gromacs, not gcc.
So we are
Hi,
Units are listed in the large table in the topology section of the pdf manual.
(hint: the energy unit everywhere in Gromacs is always kJ/mol and length always
nm)
Berk
> Date: Tue, 5 Oct 2010 15:14:18 +0200
> From: apanczakiew...@gmail.com
> To: gmx-users@gromacs.org
> Subject: [gmx-users
I think you don't want to use the gmx2 force field.
It does not say "DEPRECATED" without reason.
Berk
Date: Tue, 5 Oct 2010 21:13:46 +0800
From: kecy...@sina.com
To: gmx-users@gromacs.org
Subject: [gmx-users] Use the TIP4P water model in the gmx2 force field
Hello, I want to use the tip4p water m
Hi,
I think you are looking at the wrong issue.
Unless your concentration is ridiculously high, the dispersion heterogeneity
will be irrelevant.
Furthermore, at distances where the correction works, the distribution will be
close to homogeneous.
But you do have an issue with dispersion correction
Hi,
The frame time has to be within 0.5*frame spacing of the wanted frame.
If you have checkpoint files, you can use those.
Berk
> Date: Mon, 4 Oct 2010 02:24:57 -0700
> From: floris_buel...@yahoo.com
> To: gmx-users@gromacs.org
> Subject: [gmx-users] last frame in trr
>
> Hi,
>
> I want to
Hi,
g_rdf -surf mol counts the number of atoms within a distance r from the surface,
i.e. the atoms which have a distance of less than r to the closest atom of the
molecule.
Since the surface can be complex and dynamic, normalization is difficult.
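For example (file names are placeholders):
g_rdf -f traj.xtc -s topol.tpr -n index.ndx -surf mol -o rdf_surf.xvg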
Berk
From: ch...@nus.edu.sg
To: gmx-users@grom
Hi,
You don't need MPI within one machine.
Gromacs 4.5 has a built-in thread-MPI library that gets built automatically.
Berk
> Date: Tue, 28 Sep 2010 10:22:30 +0200
> From: domm...@icp.uni-stuttgart.de
> To: gmx-users@gromacs.org
> Subject: Re: [gmx-users] MPI and dual-core laptop
>
> -BEG
Hi,
For pure water (no ions, no external electric field), reaction field does fine
even with a short cut-off, e.g. 0.9 nm
(use reaction-field-zero if you need good energy conservation).
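For plain water, the mdp settings I have in mind look something like this (example values only):
coulombtype = Reaction-Field-zero
epsilon_rf  = 0
rcoulomb    = 0.9
rvdw        = 0.9
rlist       = 1.0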
In case you have long-range fields, use no cut-off at all (set all cut-offs to 0 in
the mdp file)
when your system i
Hi,
Could you file a bugzilla?
And what do you mean by "-dlb on"?
"on" is not an option; the options are: auto, yes, no
Thanks,
Berk
Date: Mon, 20 Sep 2010 12:22:18 -0700
From: musta...@onid.orst.edu
To: gmx-users@gromacs.org
Subject: [gmx-users] Getting some interesting errors.
Hi,
I think you shouldn't enable fortran.
Berk
> From: f.affin...@cineca.it
> To: gmx-users@gromacs.org
> Subject: Re: [gmx-users] error on compilation on BlueGene/P
> Date: Mon, 20 Sep 2010 17:50:48 +0200
>
> ... and if I add the --enable-bluegene flag :
>
> "../../../../../src/gmxlib/nonbon
Hi,
To avoid complex bookkeeping and parameter checking, you can only run with no
tabulated interactions
or all tabulated interactions.
But there are standard tables for LJ+Coulomb in the share/top directory.
I don't know what you mean by a surface.
A uniform surface can be done with a tabula
Hi,
Yes.
Cubic spline interpolation means piecewise interpolation with third order
polynomials.
The term says nothing about which derivatives are continuous.
Cubic spline interpolation is often used to interpolate between points where
only
the function value itself is provided and in that cas
Hi,
Of the "cheap" models I would think tip4p 2001 is the best.
This has been parametrized to reproduce the phase diagram of water
and does surprisingly well.
Berk
> From: vvcha...@gmail.com
> Date: Fri, 17 Sep 2010 16:01:50 -0400
> To: gmx-users@gromacs.org
> Subject: [gmx-users] force field t
increasing other cutoff's?
>
> Can anyone think of any other ways to do this? I have no idea if it be
> easy/possible to implement an option to use the CHARMM approach for
> treating charge groups (as I understand it when any atom is within the
> cutoff then the whole char
(as I understand it when any atom is within the
> cutoff then the whole charge group is included).
>
> Sorry for the fairly long message and thanks for any insights you can give.
>
> Tom
>
> Berk Hess wrote:
> > Hi,
> >
> > No, you should never change t
Hi,
The choice of charge groups is significant even with PME, because of the way
Gromacs makes the neighborlists.
As I mailed before, use the -nochargegrp option of pdb2gmx with the Charmm27 ff.
Berk
> Date: Fri, 17 Sep 2010 11:46:36 +0200
> From: qia...@gmail.com
> To: gmx-users@gromacs.org
> Subjec
Hi,
No, you should never change the charges in a force field!
Run pdb2gmx again with the -nochargegrp option.
That will make every charge group a single atom.
This will be done automatically in the 4.5.2 release which will be out soon.
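For example (input and output names are placeholders):
pdb2gmx -f protein.pdb -ff charmm27 -water tip3p -nochargegrp -o conf.gro -p topol.top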
Berk
Date: Fri, 17 Sep 2010 02:32:31 -0700
From
Hi,
For position restraints there is an mdp option refcoord_scaling that sets how
the reference positions are affected by pressure scaling.
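For example (choose the variant that fits your setup):
pcoupl           = Parrinello-Rahman
refcoord_scaling = com      ; alternatives: all, no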
Berk
> Date: Thu, 16 Sep 2010 21:15:38 -0400
> From: jalem...@vt.edu
> To: gmx-users@gromacs.org
> Subject: Re: [gmx-users] about NPT run
>
>
>
> zhongj
Hi,
Someone forgot to edit topsort.c when adding a new dihedral type.
I fixed it for 4.5.2.
The fix is an addition of one line, see below.
Berk
diff --git a/src/gmxlib/topsort.c b/src/gmxlib/topsort.c
index fbf7aa4..128e57f 100644
--- a/src/gmxlib/topsort.c
+++ b/src/gmxlib/topsort.c
@@ -61,6 +6
Good.
I just calculated tip3p energies for a Charmm tip3p trajectory.
Standard tip3p gives 1.1 kJ/mol per water molecule higher energies.
Thus the LJ on H's gives significant extra attraction which increases the
density.
Berk
> Date: Tue, 14 Sep 2010 15:49:03 +0200
> Subject: RE: [gmx-users] R
Hi,
There were several issues which I fixed.
But my test file/system did not give problems with the buggy code.
Could you test it on your system (use release-4-5-patches)?
Thanks,
> Date: Tue, 14 Sep 2010 09:12:18 -0400
> From: jalem...@vt.edu
> To: gmx-users@gromacs.org
> Subject: [gmx-users]
Hi,
The choice of constraint algorithm is irrelevant for the results.
As I understand it, Charmm tip3p has less structure than standard tip3p
and it would not surprise me if the density is different.
But I have never closely looked at Charmm tip3p properties myself.
Berk
> Date: Tue, 14 Sep 2010 1
> Date: Tue, 14 Sep 2010 13:11:47 +0200
> Subject: RE: [gmx-users] Regular vs. CHARMM TIP3P water model
> From: nicolas.sa...@cermav.cnrs.fr
> To: gmx-users@gromacs.org
>
> >
> > Hi,
> >
> > I don't understand what exactly you want to reproduce.
> > Standard tip3p and Charmm tip3p are different
Hi,
My last comment was not correct. Sorry, too many numbers in my head.
Both your density numbers are too low, more than can be explained by LJ cut-off
settings.
How did you determine the density?
g_density does not give the correct number. The number in our paper for tip3p
(not Charmm tip3p)
Hi,
We did some checking.
The tip3p density of 1001.7 reported in the Gromacs Charmm ff implementation
paper is incorrect.
This should have been 985.7. The number of 1014.7 for Charmm tip3p is correct.
I would expect that the difference with your number is mainly due to the
shorter
Hi,
I don't understand what exactly you want to reproduce.
Standard tip3p and Charmm tip3p are different models, so the density does not
have to be identical.
The Gromacs Charmm FF implementation paper:
http://pubs.acs.org/doi/full/10.1021/ct900549r
gives 1002 for tip3p and 1015 for charmm tip3p
Hi,
There is no problem at all and you only got notices, not errors.
In Gromacs version 4.0 and before each frame in an energy file always contained
the energy averages
over the whole simulation up to the current step. For proper checkpointing we
therefore had to write
an energy file fra
Hi,
Gromacs 4.5 compiles with a built-in thread mpi library by default.
So you don't need --enable-mpi.
With built-in thread-MPI, mdrun has an option -nt which by default uses all
threads.
Berk
> From: lida...@gmail.com
> Date: Sat, 11 Sep 2010 05:19:57 -0400
> Subject: Re: [gmx-users] About par
Hi,
I guess you have nstlist=0.
Try the fix below.
Berk
diff --git a/src/kernel/md.c b/src/kernel/md.c
index dd92d51..225a4bb 100644
--- a/src/kernel/md.c
+++ b/src/kernel/md.c
@@ -1859,7 +1859,7 @@ double do_md(FILE *fplog,t_commrec *cr,int nfile,const
t_filenm fnm[],
* or at the l