varsha gautham wrote:
Hello Justin,
I'm sorry to say that it's not useful. What I mean is that the manually
constructed rtp is building up the polymer and generating the topology
and gro files.
But when I look at the gro file with VMD, the connectivity between each of
the monomers is not built. That is my polymer co
It appears as if you were correct, Berk. I will report the results of my 24 h test tomorrow, but I also set up
another system that used ld_seed=1993 and ran in 20 ps segments instead of the 200 ps segments that I was
previously using. This system shows signs of disaggregation on the 200 ps time-
It is one of those forehead-slapping days again, d'oh.
Previously on this sc you did not have to use the mpi on the end, but now
you do. Problem solved, I think.
Yep.
The log file now has nnodes: 2 and all the domain decomposition
details are there. That is more like it :)
Catch ya,
Dallas Warren
I get one line like the following for each core in an mpi job:
5723 ?RL 0:38 /work/cneale/exe/gromacs-4.0.2/exec/bin/mdrun_mpi
-deffnm ./md5_running/s6117B2_md5 -cpt 600
5724 ?RL 0:39 /work/cneale/exe/gromacs-4.0.2/exec/bin/mdrun_mpi
-deffnm ./md5_running/s6117B2_md5 -cpt
I must be missing something here. Now we don't need to use the -np
switch for grompp or mdrun; you simply specify it with mpirun (4.0.3), e.g.
mpirun -np 4 mdrun -deffnm md_01
Doing that, I end up with four separate processes. Does that mean that
the mdrun being used is not MPI-enabled?
This is
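If mpirun -np 4 launches four independent copies rather than one four-way run, the mdrun binary being picked up is most likely not MPI-enabled. A minimal sketch of the check, assuming the MPI build was installed under the mdrun_mpi name shown in the job listing above (file names are placeholders):

mpirun -np 4 mdrun_mpi -deffnm md_01

With an MPI-enabled binary, md_01.log should report nnodes: 4 together with the domain decomposition details; seeing nnodes: 1 there means each rank is running its own serial mdrun.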
Thank you, Berk,
I will repeat my runs using the checkpoint file and report my findings back to
this list. Thank you for this advice.
Chris.
Hi,
In this manner you use the same random seed and thus noise for all parts.
In most cases this will not lead to serious artifacts with SD,
but you can never be sure.
When checkpoints are used, you do not repeat random numbers.
This also gives a difference between serial and parallel in 4.0.
Wit
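For reference, a minimal mdp sketch of the seed setting under discussion, assuming the SD integrator (values are illustrative; if I recall correctly, ld_seed = -1 makes grompp pick a pseudo-random seed instead of reusing a fixed value such as the 1993 default):

integrator = sd
ld_seed    = -1    ; illustrative: pseudo-random seed rather than a fixed value like 1993

With 4.0 checkpoint restarts (mdrun -cpi state.cpt), as noted above, the random numbers are not repeated across parts.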
Thank you, Berk,
I will look into tau_t=1.0 (or at least not 0.1). Thank you for the hint.
These simulations run in 200 ps segments and utilize restarts via
grompp -t -e like this:
EXECUTING:
/hpf/projects1/pomes/cneale/exe/gromacs-4.0.3/exec/bin/grompp -f
/scratch/4772976.1.ompi-4-21.q
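For comparison, a rough sketch of the two continuation styles at play here, with placeholder file names (the checkpoint route is the one recommended above, since it does not repeat random numbers):

# old-style exact continuation: rebuild a tpr from the previous part
grompp -f md.mdp -c prev.gro -t prev.trr -e prev.edr -p topol.top -o next.tpr
mdrun -deffnm next

# 4.0-style continuation: reuse the original tpr plus the checkpoint
mdrun -s topol.tpr -cpi state.cpt -deffnm md_cont -cpt 600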
On Wed, 2009-02-04 at 18:01 -0200, Alexandre Suman de Araujo wrote:
> Quoting Jussi Lehtola :
> > That's highly unlikely: it would be a severe performance bug, which
> > would have been picked up by the kernel packager.
> >
> > How did you configure the parallel version? What MPI environment did yo
Hi,
SD with tau_t=0.1 will make your dynamics a lot slower.
I don't see a reason why there should be a difference between serial and
parallel.
Are all simulations single runs, or do you do restarts?
Did you compare the temperatures to check if there is no strong energy loss
or heating and if t
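A minimal mdp sketch of the suggested change, assuming the SD integrator with a single coupling group (values are illustrative):

integrator = sd
tc_grps    = System
tau_t      = 1.0    ; ps; the 0.1 value discussed above damps the dynamics much more strongly
ref_t      = 300    ; K, illustrative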
On Wed, 2009-02-04 at 15:59 -0200, Alexandre Suman de Araujo wrote:
> I think the problem is due to the use of the generic Ubuntu standard kernel.
> Maybe if I recompile the kernel I can get a better performance system.
> However,
> this is just a guess, and I would like to know if anyone already
Hello,
I have been experiencing problems with a detergent micelle falling
apart. This micelle spontaneously aggregated in tip4p and was stable
for >200 ns. I then took the .gro file from 100 ns after stable
micelle formation and began some free energy calculations, during
which the micell
Hi GMXers,
Some days ago I reported poor performance of GROMACS parallel runs over
Ubuntu 8.04 and software RAID on a quad-core processor
(http://www.gromacs.org/pipermail/gmx-users/2009-January/039088.html).
In an attempt to run some tests on non-RAID systems running Ubuntu 8.04, I
performed
Thanks Berk. I am only interested in these velocities because I am
trying to figure out why I am getting some differing results recently
(which I will post later).
I don't really care about the velocities output in the .gro this way,
my interest was only based on the fact that I thought thi
Ah yes, of course. I keep forgetting these strange units ;-)
m
On Wed, 2009-02-04 at 11:16 +0100, David van der Spoel wrote:
> Martyn Winn wrote:
> > The factor is 8 * pi^2
>
> The 100 comes from nm to Angstrom.
>
> > The factor 1/3 arises when you take the trace of an anisotropic
> > displacem
Hi,
First let me state again that these velocities are irrelevant for the dynamics,
because velocities only enter the equations through the momentum and the
kinetic energy, both of which carry a factor of mass, and the mass is zero
for vsites.
Looking at the code there are two issues.
One is with continuation (unconst
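To spell the argument out: a velocity only enters the integration through the momentum p = m*v and the kinetic energy (1/2)*m*v^2, and with m = 0 for a virtual site both are zero no matter what velocity happens to be stored.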
Martyn Winn wrote:
The factor is 8 * pi^2
The 100 comes from nm to Angstrom.
The factor 1/3 arises when you take the trace of an anisotropic
displacement parameter.
But I believe g_rmsf will do this conversion anyway with options -oq or
-ox (I've not tested this, just read the documentation
For a start, the 'P 1 21 1' on the CRYST1 line is a different spacegroup
from 'P 21/b'. The former implies Z=2.
In principle, it should be straightforward to generate it yourself by
applying the 4 symmetry operators to your starting coordinates. You then
need to know whether 'P 21/b' is 'P 1 1 21
Dear All,
I tried to compile GROMACS 4.0 with MOPAC but got the following error:
-O3 -fomit-frame-pointer -finline-functions -Wall -Wno-unused
-funroll-all-loops -MT qm_mopac.lo -MD -MP -MF .deps/qm_mopac.Tpo -c
qm_mopac.c -o qm_mopac.o
qm_mopac.c:52:17: error: nsb.h: No such file or directory
mpi
The factor is 8 * pi^2
The factor 1/3 arises when you take the trace of an anisotropic
displacement parameter.
But I believe g_rmsf will do this conversion anyway with options -oq or
-ox (I've not tested this, just read the documentation)
Cheers
Martyn
On Wed, 2009-02-04 at 09:05 +0100, David v
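Putting those pieces together: with <u^2> from g_rmsf in nm^2 and B in Angstrom^2, the combined conversion is

B = (8*pi^2/3) * 100 * <u^2>

that is, 8*pi^2 for the isotropic B-factor, 1/3 from taking the trace of the anisotropic displacement, and 100 for nm^2 to Angstrom^2.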
David van der Spoel wrote:
Alessandro Casoni wrote:
Dear gmx-users,
i would like to generate a 3D plot of my potential energy/RMSD/radius
of gyration.
I used g_energy, g_rms and g_gyrate to collect informations on my
simulation..any suggestion on software able to generate 3D plot?
Plea
Alessandro Casoni wrote:
Dear gmx-users,
i would like to generate a 3D plot of my potential energy/RMSD/radius of
gyration.
I used g_energy, g_rms and g_gyrate to collect informations on my
simulation..any suggestion on software able to generate 3D plot?
Please check
Marvin Seibert, Alexand
Dear gmx-users,
I would like to generate a 3D plot of my potential energy/RMSD/radius of
gyration.
I used g_energy, g_rms and g_gyrate to collect information on my
simulation. Any suggestions on software able to generate a 3D plot?
Thanks,
Alessandro
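One possible route with standard command-line tools, as a rough sketch (file names and column positions are assumptions about the .xvg output; gnuplot, matplotlib or R can then draw the 3D scatter):

grep -v '^[@#]' energy.xvg > energy.dat
grep -v '^[@#]' rmsd.xvg   > rmsd.dat
grep -v '^[@#]' gyrate.xvg > gyrate.dat
paste energy.dat rmsd.dat gyrate.dat > combined.dat
# then, e.g. in gnuplot:  splot 'combined.dat' using 2:4:6 with points

This assumes each tool was run over the same frames, with time in column 1 and the quantity of interest in column 2 of its .xvg file.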
Hi,
I always get confused by the atom names in TIP4P.
But MW is not a mass; it is the vsite, right?
Velocities of vsites are completely irrelevant, except if you would want to
analyze them.
I might have changed something about the initial velocities of vsites for 4.0.
I would guess that 4.0 has the c
From: [EMAIL PROTECTED] [EMAIL PROTECTED] On Behalf Of Alif M Latif
Sent: 02 May 2008 09:42
To: gmx-users@gromacs.org
Subject: [gmx-users] Plotting B-factor
Dear GROMACS users and developers,
I want to plot the B-factor of a protein structure against residue number. How can I do
that? Using option -oq in
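A minimal sketch of the usual route, assuming 4.x tool names and placeholder file names: g_rmsf with -res averages per residue, and -oq writes a PDB with the computed values in the B-factor column.

g_rmsf -s topol.tpr -f traj.xtc -res -o rmsf.xvg -oq bfactors.pdb

The per-residue values in rmsf.xvg (or the B-factor column of bfactors.pdb) can then be plotted against residue number with xmgrace or any other plotting tool.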