Hello dear gmx-users!
So, I ran MD and got all the files: md.cpt, .trr, .gro, mdf.xvg, etc.
Then I did: grompp_d -f sp.mdp -c md.gro -n index.ndx -p topol.top -o sp.tpr
where sp.mdp differs from md.mdp in one line: energygrps = Protein SOL (in
md.mdp: energygrps = System).
After that I did: mdrun_d -s
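A minimal sketch of that workflow, assuming the goal is to re-evaluate energies for the already existing trajectory (the file names and the -rerun step are my guesses at the intended continuation, not the poster's exact command):

grompp_d -f sp.mdp -c md.gro -n index.ndx -p topol.top -o sp.tpr
mdrun_d -s sp.tpr -rerun md.trr -e sp.edr   # recompute energies frame by frame
g_energy_d -f sp.edr                        # then extract the Protein-SOL terms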
2010/10/11 fancy2012
> Dear GMX users,
> There is another question. Are there any problems with gromacs-4.5.1 when
> compiling it using icc, while compiling fftw-3.2.2 using gcc? Or is it just
> OK?
>
Should in general be OK. One should compile fftw with --enable-sse.
Your other email doesn't incl
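For reference, a typical FFTW 3.2.2 build with SSE enabled looks roughly like this (the prefix is only an example; GROMACS in the default single precision also wants --enable-float, and a double-precision FFTW would use --enable-sse2 instead of --enable-sse):

./configure --enable-sse --enable-float --prefix=$HOME/fftw-3.2.2
make && make install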
- Original Message -
From: fancy2012
Date: Tuesday, October 12, 2010 15:12
Subject: [gmx-users] problem of parallel run in gromacs-4.5.1
To: gmx-users
> Dear GMX users,
> I did some MD in parallel using gromacs-4.5.1, but it
> failed to work! But it worked successfully when I didn't use
- Original Message -
From: fancy2012
Date: Tuesday, October 12, 2010 15:16
Subject: [gmx-users] problems of compiling gromacs-4.5.1
To: gmx-users
> Dear GMX users,
> There is another question. Are there any problems with
> gromacs-4.5.1 when compiling it using icc, while compiling fftw-
Dear GMX users,
There is another question. Are there any problems with gromacs-4.5.1 when
compiling it using icc, while compiling fftw-3.2.2 using gcc? Or is it just OK?
Thanks very much!
All the best,
fancy
Dear GMX users,
I did some MD in parallel using gromacs-4.5.1, but it failed to work! It
worked successfully when I didn't use the parallel version. It showed this:
Will use 12 particle-particle and 4 PME only nodes
I searched the archive and found that someone else also had this problem using
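In case it helps with debugging: the automatic PP/PME split that produces the "12 particle-particle and 4 PME only nodes" message can be overridden with mdrun's -npme option (a sketch only; the 16 ranks and the -deffnm name are assumptions):

mpirun -np 16 mdrun_mpi -deffnm md -npme 0   # run with no separate PME ranks
mpirun -np 16 mdrun_mpi -deffnm md -npme 4   # or force exactly 4 PME ranks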
Now I am able to run simulations on the GPU, but the output is weird. For
example, the temperature drops to 270 K while ref_t = 298 (Tcoupl = andersen).
Moreover, after several hours of simulation mdrun-gpu starts to output
"NAN" energies and hangs. The pre-run and post-run GPU memory test always
pas
Vivek,
thank you for the interesting results. What is the configuration of your
cluster: CPU, number of cores per node, and the interconnect between nodes? It
might be informative to present your results as computational efficiency, i.e.
performance/(number of cores). It would indeed be interesting to see the
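For example (numbers purely illustrative): a run that reaches 6 ns/day on 48 cores has an efficiency of 6/48 = 0.125 ns/day per core, which makes results obtained on different core counts directly comparable.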
- Original Message -
From: ms
Date: Tuesday, October 12, 2010 5:53
Subject: [gmx-users] Compiling with ICC: advantages and -if yes- a suitable
protocol? (it seems I can't)
To: Discussion list for GROMACS users
> Dear gmx users,
>
> I have heard (read: read on random blogs here and th
It is an interesting question. Is it not worthwhile to have a separate forum
board for such methodological issues?
Some time ago I raised a similar question regarding the convergence of the
self-diffusion coefficient. The reasoning given in the paper mentioned by
Javier seems to be related to my i
Dear gmx users,
I have heard (read: read on random blogs here and there) that on Intel
compiling GROMACS with icc instead of gcc can bring up to 50%
performance improvement.
Since I have always used gcc-compiled GROMACS, I'd like to know:
- Is this true?
- If yes, can anybody help me in doing so?
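A rough sketch of an icc build via the 4.5-era autoconf route (the prefix, compiler names and FFTW location are assumptions, and how much icc actually helps depends a lot on the hardware and GROMACS version):

export CC=icc CXX=icpc
./configure --prefix=$HOME/gromacs-4.5.1-icc --with-fft=fftw3 \
            CPPFLAGS=-I$HOME/fftw-3.2.2/include \
            LDFLAGS=-L$HOME/fftw-3.2.2/lib
make -j4 && make install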
Thx Roland,
with the help of -pp and -debug we found out that we had included the new
parameters in the wrong order: first bonded, then nonbonded. It should
be the other way around.
Greets,
Christian
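If I read that right, the required ordering in the force-field include is roughly the following (a sketch with the section contents omitted, not Christian's actual topology): nonbonded parameter sections first, bonded parameter sections after.

[ atomtypes ]
; new atom types ...
[ nonbond_params ]
; new nonbonded parameters ...
[ bondtypes ]
; new bonded parameters only after the nonbonded sections ...
[ angletypes ]
[ dihedraltypes ]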
On Fri, 2010-10-08 at 11:09 -0400, Roland Schulz wrote:
> Hi,
>
>
> one possible reason could b
Hi,
those of you who are using the Fedora packages of GROMACS that I
maintain might be interested to know that I finally squeezed in the
time to rewrite the spec file for CMake and update the packages to
4.5.1.
The updated packages will shortly be available in updates-testing for
Fedora 12 -
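For anyone who wants to try them before they reach stable, something like this should work (assuming the package is simply named gromacs):

yum --enablerepo=updates-testing install gromacs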
shiva birgani wrote:
Hi all,
I have used dssp to examine the secondary structure of a protein. It
worked correctly, but when I convert the .xpm file to .eps the y-axis is
so short that the residue numbers are not distinguishable and the
picture is not clear. I want to know if the
Hi all,
I have used dssp to examine the secondary structure of a protein. It worked
correctly, but when I convert the .xpm file to .eps the y-axis is so short
that the residue numbers are not distinguishable and the picture is not
clear. I want to know if there exists a way to change it
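A sketch of the usual knobs (the file names and box sizes are only illustrative): xpm2ps can stretch the plot, and its layout settings can be edited through an .m2p file.

xpm2ps -f ss.xpm -o ss.eps -by 4 -bx 1   # make each residue row taller
xpm2ps -f ss.xpm -o ss.eps -do ss.m2p    # or dump the settings, edit the y-axis
                                         # tick spacing and fonts in ss.m2p, then
xpm2ps -f ss.xpm -di ss.m2p -o ss.eps    # rerun with the edited settings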
On 11/10/10 15:45, Justin A. Lemkul wrote:
I am currently using GROMACS 4.0.7 with a custom force field I
developed (coarse-grained model I am developing). I want to jump to
4.5, but I wonder if something in the syntax of force fields has
changed from 4.0.x to 4.5: just to know in advance if I h
ms wrote:
Dear users,
I am currently using GROMACS 4.0.7 with a custom force field I developed
(coarse-grained model I am developing). I want to jump to 4.5, but I
wonder if something in the syntax of force fields has changed from 4.0.x
to 4.5: just to know in advance if I have to change t
Dear users,
I am currently using GROMACS 4.0.7 with a custom force field I developed
(coarse-grained model I am developing). I want to jump to 4.5, but I
wonder if something in the syntax of force fields has changed from 4.0.x
to 4.5: just to know in advance if I have to change the files or i
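For what it is worth, the most visible 4.5 change is organizational rather than syntactic: each force field now lives in its own <name>.ff/ directory, which pdb2gmx and grompp can also pick up from the working directory. A sketch of the layout (the exact file set varies by force field):

myff.ff/
    forcefield.itp    ; top-level include with [ defaults ] and the ff includes
    forcefield.doc    ; one-line description shown by pdb2gmx
    atomtypes.atp
    ffnonbonded.itp
    ffbonded.itp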
I wrote a Perl script to do a similar task (appended below). Perhaps it will be
useful to you. I hope it works; I had to hack out some things that were
specific to my needs and have done only limited testing.
-Justin
#!/usr/bin/perl
#
# plot_hbmap.pl - plot the probability of finding a
Hi Carla,
I did this using g_hbond by supplying an index file with only the 3 atoms
involved in the individual bond I was looking at.
I got this printed to screen:
"Average number of hbonds per timeframe 0.998 out of 1 possible"
Although it is time consuming to do it for more than a few hydroge
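A sketch of that procedure (the atom numbers and file names are made up): put the donor, hydrogen and acceptor of the one bond of interest into a single index group, then run g_hbond on that group alone.

make_ndx -f md.tpr -o onebond.ndx            # interactively: a 1534 | a 1535 | a 2201
g_hbond -f md.xtc -s md.tpr -n onebond.ndx -num hbnum.xvg
# select the 3-atom group for both donor and acceptor; the time average of the
# count in hbnum.xvg is the occupancy of that single hydrogen bond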
Hi everyone,
I tried to analyze the H-bonds in my trajectory with g_hbond, and I analysed
the .xpm and .ndx files. But now I need to know the percentage of existence of
each H-bond during my trajectory. Is there a way to do it with a command
line? Or is there a program (someone told me there are python
Sunita Patel wrote:
Dear Mark,
On Mon, 11 Oct 2010 22:54:44 +1100, Mark Abraham wrote
> - Original Message -
> From: Sunita Patel
> Date: Monday, October 11, 2010 22:50
> Subject: [gmx-users] do_dssp failed to execute
> To: Discussion list for GROMACS users
>
> > Dear Us
Dear Mark,
On Mon, 11 Oct 2010 22:54:44 +1100, Mark Abraham wrote
> - Original Message -
> From: Sunita Patel
> Date: Monday, October 11, 2010 22:50
> Subject: [gmx-users] do_dssp failed to execute
> To: Discussion list for GROMACS users
>
> > Dear User,
> >
> > I set the path for dss
- Original Message -
From: Sunita Patel
Date: Monday, October 11, 2010 22:50
Subject: [gmx-users] do_dssp failed to execute
To: Discussion list for GROMACS users
> Dear User,
>
> I set the path for dssp executable in .bashrc file. Still
dssp in the path is probably not sufficient.
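If it helps: as far as I know do_dssp does not search $PATH at all, it reads the DSSP environment variable (falling back to /usr/local/bin/dssp), so something like this is usually needed (the path is just an example):

export DSSP=/home/sunita/bin/dssp   # full path to the dssp binary, not its directory
do_dssp -f md.xtc -s md.tpr -o ss.xpm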
Dear User,
I set the path to the dssp executable in my .bashrc file, but I am still
getting the following error. Could anybody suggest what the problem might be?
Error message:
Select a group: 5
Selected 5: 'MainChain'
There are 134 residues in your selected group
Opening lib
Hi Justin
I have pasted below what I got after executing the make command. Any help is
highly appreciated.
cc -O3 -fomit-frame-pointer -finline-functions -Wall -Wno-unused
-funroll-all-loops -std=gnu99 -o grompp.exe grompp.o
-L/cygdrive/c/Packages/fftw/lib/CPPFlAGS=
-I/cygdrive/c/Packages/fftw/includ
vinothkumar mohanakrishnan wrote:
It's not clear to me what you are asking. If I am right, the above message
is the configure command that I used during the installation of gromacs.
The error message you posted from make is incomplete. There should be lines
immediately above the "collect2: ld re
On 2010-10-11 12.20, Mark Abraham wrote:
g_tune_pme in 4.5.2 is your friend here. Otherwise, stay at 48 or below,
probably.
And try it with 4.5! Much better performance.
Mark
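A sketch of how g_tune_pme is typically driven (the file names, core count and environment details are placeholders, and exact options may differ between 4.5.x versions): it runs a series of short mdrun benchmarks with different numbers of PME-only ranks and reports the fastest split.

export MPIRUN=mpirun MDRUN=mdrun_mpi   # the commands g_tune_pme calls under the hood
g_tune_pme -np 48 -s topol.tpr         # benchmarks several PP/PME splits and writes
                                       # the timings and the best setting to perf.out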
- Original Message -
From: vivek sharma
Date: Monday, October 11, 2010 21:10
Subject: [gmx-users] Gromacs be
g_tune_pme in 4.5.2 is your friend here. Otherwise, stay at 48 or below,
probably.
Mark
- Original Message -
From: vivek sharma
Date: Monday, October 11, 2010 21:10
Subject: [gmx-users] Gromacs benchmarking results
To: Discussion list for GROMACS users
> Hi all,
> I have some gromacs
Hi all,
I have some gromacs benchmarking results to share.
I have tried the lysozyme example distributed in the benchmarking
suite "gmxbench-3.0.tar.gz". I have used the case provided in d.lzm/
with the pme.mdp parameter file.
Following is the performance (in ns/day) I got with gromacs-4.0.5 on
my cl