Dear Chris,
Thank you for your message. I uploaded everything to the Redmine. I will
let you know how the simulation with generated velocities goes.
I asked the authors for any example input that worked with tip5p and
oplsaa, but I did not get anything...
Best,
Grzegorz
On 2013-10-04 17:2
Dear Chris,
By now, 7 ns of the MD have passed without a single warning.
Best Regards,
Grzegorz
P.S. The .mdp:
constraints = none
integrator = md
dt = 0.001 ; ps
nsteps = 10000000 ; total 10 ns
nstcomm = 1000
nstxout =
Dear Chris,
I put one tip5p molecule in the center of a dodecahedral box (2 nm from that
molecule to the walls), filled it with tip5p, and ran 6000 steps of steepest-descent
minimization. After another 2704 steps of conjugate gradient it converged to emtol 1.0.
I then ran 100k steps of NVT on this box afterwards
(http://shroom.ibb.waw.pl
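For reference, a rough sketch of that setup with GROMACS 4.6 tool names; the file names here (single_tip5p.gro, topol.top, em_steep.mdp, em_cg.mdp) are placeholders rather than the original files, and the pre-equilibrated tip5p.gro solvent box is only usable if your installation ships it:

# dodecahedral box with 2 nm from the solute to the box walls
editconf -f single_tip5p.gro -bt dodecahedron -d 2.0 -o box.gro
# fill the box with tip5p water
genbox -cp box.gro -cs tip5p.gro -p topol.top -o solvated.gro
# steepest-descent minimization, then conjugate gradient
grompp -f em_steep.mdp -c solvated.gro -p topol.top -o em_steep.tpr
mdrun -deffnm em_steep
grompp -f em_cg.mdp -c em_steep.gro -p topol.top -o em_cg.tpr
mdrun -deffnm em_cg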
Dear Chris,
I have not posted the Redmine issue yet; I want to check every possibility
beforehand. I will now analyze the trajectories more closely.
Best,
Grzegorz
On 2013-09-29 18:47, Christopher Neale wrote:
Dear Grzegorz:
Under no conditions should any of the tip5p geometry change (for the
standa
So the lone pairs should stay at the defined 0.7 from the
oxygen, right? I will keep you updated.
Best Regards,
Grzegorz
On 2013-09-29 04:50, Christopher Neale wrote:
Dear Gigo:
That's good, comprehensive testing and reporting. Please let us know
what you find out from those authors.
Their paper was short on methods (unl
Dear Chris,
I am really grateful for your help. This is what I did, with additional
LJ terms on LP1 and LP2 of tip5p (a sketch of such a minimization .mdp follows below):
- 5000 steps of steepest descent with position restraints on the protein
and with flexible water (flexibility like in tip4p),
- 5000 steps of steepest descent, no restraints, flexible water,
- 500
,
Grzegorz
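A minimal sketch of what the first-stage .mdp above could look like, assuming the protein topology has the usual #ifdef POSRES block written by pdb2gmx and the water .itp has an #ifdef FLEXIBLE block (which, for tip5p, may have to be added by hand, as the "flexibility like in tip4p" remark suggests); the emtol/emstep values are illustrative, not the original input:

; stage 1: steepest descent, protein restrained, flexible water
define      = -DPOSRES -DFLEXIBLE
integrator  = steep
nsteps      = 5000
emtol       = 10.0
emstep      = 0.01
; stage 2 would drop -DPOSRES and rerun grompp/mdrun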
On 2013-09-27 05:58, Christopher Neale wrote:
Dear Gigo:
I've never used tip5p, but perhaps you could add some LJ terms to the
opls_120 definition,
do your minimization, then remove the fake LJ term on opls_120 and run
your MD?
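A sketch of what that temporary change might look like in a working copy of oplsaa.ff/ffnonbonded.itp; the column layout is assumed from the standard distribution and the sigma/epsilon values below are placeholders, so check them against your own file and revert the line before production MD:

[ atomtypes ]
; original: lone pair with zero LJ (layout as in your own ffnonbonded.itp)
; opls_120   LP   0    0.00000   0.000   A   0.00000e+00  0.00000e+00
; temporary fake LJ, placeholder values, for minimization only:
  opls_120   LP   0    0.00000   0.000   A   1.00000e-01  4.00000e-01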
If that doesn't work, then you might be able to minimize with a
flexible molecule, e.g. define
= -DFLEXIBLE (or something). Check your water .itp file for how to do
it.
Mark
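A minimal illustration of the mechanism Mark refers to; whether tip5p.itp ships such a block should be checked (the poster later adds flexibility "like in tip4p"), and the lines below only show the general #ifdef pattern, not real parameters:

; in the minimization .mdp:
define = -DFLEXIBLE

; in the water .itp, the model switches between descriptions roughly like this:
#ifdef FLEXIBLE
[ bonds ]
; harmonic O-H bonds used during minimization
[ angles ]
; harmonic H-O-H angle
#else
[ settles ]
; rigid geometry used during MD
#endif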
On Tue, Sep 24, 2013 at 10:25 PM, gigo wrote:
Dear GMXers,
Since I am interested in interactions of lone electron pairs of water
oxygen
within the active site of an enzyme that I work on
Dear GMXers,
Since I am interested in interactions of lone electron pairs of water
oxygen within the active site of an enzyme that I work on, I decided to
give TIP5P a shot. I use OPLSAA. I ran into trouble very quickly when trying to
minimize the freshly solvated system. I found on the gmx-users
(http:/
s just broken. Since
gromacs-4.6.2 behaved better than 4.6.3 there, I am coming back to it.
Best,
G
Mark
On Wed, Jul 17, 2013 at 6:30 PM, gigo wrote:
On 2013-07-13 11:10, Mark Abraham wrote:
On Sat, Jul 13, 2013 at 1:24 AM, gigo wrote:
On 2013-07-12 20:00, Mark Abraham wrote:
On Fri, Jul 12, 2013 at 4:27 PM, gigo wrote:
Hi!
On 2013-07-12 11:15, Mark Abraham wrote:
What does --loadbalance do?
It balances the total number of processes across all allocated nodes.
OK, but using it means you are hostage to its assumptions
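For comparison, a sketch of the two placement styles with OpenMPI 1.6 options; the binary name mdrun_mpi, the tpr names and the exchange interval are placeholders (with -multi, mdrun appends the replica index to -s, e.g. remd0.tpr ... remd143.tpr):

# let mpirun spread ranks over all allocated nodes
mpirun -np 144 --loadbalance mdrun_mpi -multi 144 -ntomp 4 -replex 1000 -s remd.tpr
# or state the placement explicitly: 3 ranks per 12-core node, 4 OpenMP threads each
mpirun -np 144 -npernode 3 mdrun_mpi -multi 144 -ntomp 4 -replex 1000 -s remd.tpr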
Hi!
On 2013-07-12 07:58, Shine A wrote:
Hi Sir,
Is it possible to run an REMD simulation with 16 replicas on a
cluster (group of CPUs) with 8 nodes, where each node has 8
processors?
It is possible. If you have Gromacs (version >= 4.6) compiled with MPI
and you specify the numbe
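A sketch of such a run under the stated layout (8 nodes x 8 processors = 64 MPI ranks, i.e. 4 ranks per replica), assuming an MPI-enabled mdrun built as mdrun_mpi; the file names and exchange interval are placeholders:

# -multi 16 makes mdrun look for remd0.tpr ... remd15.tpr
mpirun -np 64 mdrun_mpi -multi 16 -replex 500 -s remd.tpr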
tion about nodes topology?
If you have any suggestions on how to debug or trace this issue, I would
be glad to participate.
Best,
G
Mark
On Fri, Jul 12, 2013 at 3:46 AM, gigo wrote:
Dear GMXers,
With Gromacs 4.6.2 I was running REMD with 144 replicas. Replicas
were
separate MPI jobs of c
Dear GMXers,
With Gromacs 4.6.2 I was running REMD with 144 replicas. Replicas were
separate MPI jobs of course (OpenMPI 1.6.4). Each replica ran on 4
cores with OpenMP. Torque is installed on the cluster, which is built of
12-core nodes, so I used the following script:
#!/bin/tcsh -f
#PBS -S
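Since the script itself is cut off above, here is only a sketch of what such a Torque submission could look like for 144 replicas with 4 OpenMP threads each on 12-core nodes; the node count, exchange interval and file names are assumptions, not the original script:

#!/bin/tcsh -f
#PBS -S /bin/tcsh
#PBS -l nodes=48:ppn=12
# 48 x 12-core nodes = 576 cores = 144 replicas x 4 OpenMP threads
cd $PBS_O_WORKDIR
setenv OMP_NUM_THREADS 4
# 3 MPI ranks (replicas) per node, 4 threads each
mpirun -np 144 -npernode 3 mdrun_mpi -multi 144 -ntomp 4 -replex 1000 -s remd.tpr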
Hi,
On the GROMACS webpage, under user contributions -> topologies, you have (at
least) two force fields to download that allow you to simulate nucleic acids (NA). The first
is the OPLS NA records from the rnp-group
(http://rnp-group.genebee.msu.su/3d/oplsa_ff.html). It is for GROMACS
3.2.1, so it needs minor manual adjustments for 3.3.
Hi,
I'm using OpenMPI on our cluster of 24 nodes with 2 cores each, without any problems
so far. I run my jobs under Torque and I did not change any of the default
settings. With my system it scales rather well on 4 nodes, and I have no
problems running on more.
Grzegorz Wieczorek
Department of Bioinfor