Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3)

2010-11-26 Thread Carsten Kutzner
Hi,

as a workaround you could run with -noappend and later
concatenate the output files. Then you should have no
problems with locking.
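For example, assuming the usual .partNNNN file names that -noappend
produces, something along these lines should stitch the pieces back
together afterwards (file names are only illustrative):

trjcat -f pre.xtc pre.part0002.xtc -o pre_all.xtc
eneconv -f pre.edr pre.part0002.edr -o pre_all.edr
cat pre.log pre.part0002.log > pre_all.log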

Carsten


On Nov 25, 2010, at 9:43 PM, Baofu Qiao wrote:

> Hi all,
> 
> I just recompiled GMX 4.0.7. Such an error doesn't occur. But 4.0.7 is about 30%
> slower than 4.5.3. So I would really appreciate it if anyone could help me with this!
> 
> best regards,
> Baofu Qiao
> 
> 
> On 2010-11-25 20:17, Baofu Qiao wrote:
>> Hi all,
>> 
>> I got the error message when I am extending the simulation using the 
>> following command:
>> mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi pre.cpt 
>> -append 
>> 
>> The previous simulation succeeded. I wonder why pre.log is locked, and
>> what the strange warning "Function not implemented" means?
>> 
>> Any suggestion is appreciated!
>> 
>> *
>> Getting Loaded...
>> Reading file pre.tpr, VERSION 4.5.3 (single precision)
>> 
>> Reading checkpoint file pre.cpt generated: Thu Nov 25 19:43:25 2010
>> 
>> ---
>> Program mdrun, VERSION 4.5.3
>> Source code file: checkpoint.c, line: 1750
>> 
>> Fatal error:
>> Failed to lock: pre.log. Function not implemented.
>> For more information and tips for troubleshooting, please check the GROMACS
>> website at http://www.gromacs.org/Documentation/Errors
>> ---
>> 
>> "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>> 
>> Error on node 0, will try to stop all the nodes
>> Halting parallel program mdrun on CPU 0 out of 64
>> 
>> gcq#147: "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>> 
>> --
>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>> with errorcode -1.
>> 
>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> You may or may not see output from other processes, depending on
>> exactly when Open MPI kills them.
>> --
>> --
>> mpiexec has exited due to process rank 0 with PID 32758 on
>> 
> 







[gmx-users] Re: gmx-users Digest, Vol 79, Issue 167

2010-11-26 Thread sa
Hi Tom.

> Hi,
>
> You show differences when using GROMACS 4.5.3 for one simulation with
> the pdb2gmx option -nochargegrp and one without this option (is this
> correct, or did you manually edit the topology to have the old charge
> groups?). This pdb2gmx option should have no effect in 4.5.3 as all the
> entries in the CHARMM27 .rtp are in individual charge groups.
>

I will try to be clearer here:

First step: The two top files were generated with different rtp files. In
the first case I used an rtp file where the charge groups for each AA were
distributed as in the CHARMM27 ff distribution. The top file was generated
with gromacs (4.0.5) and the following command:

pdb2gmx_mpi -ignh -ff charmm27 -ter -f hMRP1_K-TM17_AcAm.pdb -o
hMRP1_K-TM17.gro -p hMRP1_K-TM17.top -missing

For the second top I used gromacs 4.5 with the argument -nochargegrp (with
one charge group assigned per atom) and the command:

pdb2gmx_mpi -ignh -ff charmm27 -ter -f hMRP1_K-TM17_AcAm-H.pdb -o
hMRP1_K-TM17.gro -p hMRP1_K-TM17.top -missing -nochargegrp

Second step: I used the latest version of gromacs (4.5.3) to generate the
different tpr files and performed the simulations with the two types of top
files. For the MD runs with the other versions of gromacs, step two was done
with the corresponding version of gromacs.
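(Concretely, step two was just the usual grompp/mdrun sequence; the exact
file names below are only illustrative:)

grompp -f md.mdp -c hMRP1_K-TM17.gro -p hMRP1_K-TM17.top -o hMRP1_K-TM17.tpr
mdrun -deffnm hMRP1_K-TM17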


>
> If you didn't edit the 4.5.3 topology then either you are doing
> something else different between these simulations or what you think is
> a difference between simulations with and without this pdb2gmx option is
> just a coincidence and that your peptide is sometimes stable over this
> short 24 ns simulation and sometimes it unfolds, irrespective of the
> charge group option.
>

I have also performed an MD run of 100 ns (with charge groups), and in this
case the peptide remains stable over the whole simulation time. I don't
understand why. Moreover, I don't think it is a coincidence, since I observe
the same results whatever the gmx version and MD parameters used.

>
> If you did edit the topology then are you sure that it is correct? Did
> you use 4.5.3 to get the topology and edit it by hand or did you
> generate this 'charge group' topology by some other method, such as
> another version of GROMACS? If you used the topology from 4.0.x then
> maybe there are issues here as I know CHARMM27 was not fully supported
> until 4.5.
>

I didn't edit the top file manually; I used only the pdb2gmx tool. However,
after reading your message, I took a deeper look into the top file
and found differences in some directives (for example in the [ pairs ]
directive, where there are more terms). To be sure that this difference
has no impact on the MD, I am going to run further MD simulations.

I will come back soon. Thank you again.

SA




>
> Cheers
>
> Tom
>
> On 25/11/10 22:42, sa wrote:
> > Dear All,
> >
> > In a previous message
> > (http://lists.gromacs.org/pipermail/gmx-users/2010-November/055839.html
> ),
> > I described the results obtained with MD performed with the CHARMM27
> > ff and the chargegrp "yes" and "no" options of a peptide in TIP3P
> > water simulated gromacs. Since these results puzzled me a lot, i would
> > like to share with you others results obtained from the gromacs
> > community advices to explain these results.
> >
> > In few words, the context of these simulations. One of my labmate did,
> > 8 months ago (march/april), several simulations of a peptide (25 AA)
> > with the CHARMM27 ff (and CMAP). The peptide is a transmembrane
> > segment (TM) and belongs to a large membrane protein. This TM segment
> > has an initial helical conformation. The simulations were performed in
> > a cubic box filled with app. 14000 TIP3P water (Jorgensen's model)
> > with 2 Cl ions. To construct the topology file of the system,
> > -chargegrp "yes" with pdb2gmx and the MD were done with the gromacs
> > 4.0.5. For some reasons, he had to left the lab, and my boss asked me
> > to continue his work. When I checked their results, i was very
> > intrigued by these MD results because he found that the peptide keep
> > along all the simulation time (100 ns) its initial helical
> > conformation. This results are not in agreement with circular
> > dichroism experiments which are shown that the same peptide in water
> > has no helix segment and is completely unfold. I am aware that the
> > simulation time is short compared to experiment time scale, however
> > since i haven't seen any unfolding events in this simulation, so I was
> > not very confident about these results.
> >
> > To explain this inconsistency, I have suspected that the error came
> > probably of the use of the default -chargegrp with CHARMM ff in these
> > simulations since i have read several recent threads about the charge
> > groups problems in the CHARMM ff implementation in gromacs. To examine
> > this hypothesis I have done two simulations with last gromacs version
> > (4.5.3) and two top files containing charge groups and no charge
> > groups for the peptide

Re: [gmx-users] Re: gmx-users Digest, Vol 79, Issue 167

2010-11-26 Thread Thomas Piggot
OK, that makes things clearer. As I mentioned, I would check that it is
not an issue with using CHARMM from 4.0.5, since if I remember correctly the
CHARMM force field was not fully supported until version 4.5 (and I think it
was removed from the GROMACS 4.0.7 release so people wouldn't use it). As
I also mentioned, someone else will know much more about this than me,
for example what changed between the 4.0.5 version of the force-field
implementation and the 4.5 implementation. It may have been very little,
but I do not know. As for you seeing the same behaviour with different GROMACS
versions, I think this does not matter, as you are using the same
4.0.5 topology with the different versions, and it could be this topology
which is causing the issues.


The test to run to check whether it is the charge groups that are causing
this behavior is to run simulations (probably with repeats, so you can be sure)
using GROMACS 4.5.1 with and without the -nochargegrp option of pdb2gmx,
and everything else the same. As to why you are seeing this (if it is
indeed something you can confirm), that is another question and I am unsure;
it seems fairly strange. Again, I am sure someone else with more
knowledge can comment on this.


Cheers

Tom

On 26/11/10 08:35, sa wrote:

Hi Tom.

Hi,

You show differences when using GROMACS 4.5.3 for one simulation with
the pdb2gmx option -nochargegrp and one without this option (is this
correct, or did you manually edit the topology to have the old charge
groups?). This pdb2gmx option should have no effect in 4.5.3 as
all the
entries in the CHARMM27 .rtp are in individual charge groups.


I will try to be more clear here:

First step: The two top files were generated with different rtp files.
In the first case I used an rtp file where the charge groups for each
AA were distributed as in the CHARMM27 ff distribution. The top file
was generated with gromacs (4.0.5) and the following command:


pdb2gmx_mpi -ignh -ff charmm27 -ter -f hMRP1_K-TM17_AcAm.pdb -o 
hMRP1_K-TM17.gro -p hMRP1_K-TM17.top -missing


For the second top I used gromacs 4.5 with the argument -nochargegrp
(with one charge group assigned per atom) and the command:


pdb2gmx_mpi -ignh -ff charmm27 -ter -f hMRP1_K-TM17_AcAm-H.pdb -o 
hMRP1_K-TM17.gro -p hMRP1_K-TM17.top -missing -nochargegrp


Second step: I used the latest version of gromacs (4.5.3) to generate
the different tpr files and performed the simulations with the two
types of top files. For the MD runs with the other versions of
gromacs, step two was done with the corresponding version of gromacs.



If you didn't edit the 4.5.3 topology then either you are doing
something else different between these simulations or what you
think is
a difference between simulations with and without this pdb2gmx
option is
just a coincidence and that your peptide is sometimes stable over this
short 24 ns simulation and sometimes it unfolds, irrespective of the
charge group option.

I have also performed an MD run of 100 ns (with charge groups), and in
this case the peptide remains stable over the whole simulation time. I
don't understand why. Moreover, I don't think it is a coincidence, since
I observe the same results whatever the gmx version and MD parameters
used.



If you did edit the topology then are you sure that it is correct? Did
you use 4.5.3 to get the topology and edit it by hand or did you
generate this 'charge group' topology by some other method, such as
another version of GROMACS? If you used the topology from 4.0.x then
maybe there are issues here as I know CHARMM27 was not fully supported
until 4.5.


I didn't edit the top file manually; I used only the pdb2gmx tool.
However, after reading your message, I took a deeper look into
the top file and found differences in some directives (for
example in the [ pairs ] directive, where there are more terms). To be
sure that this difference has no impact on the MD, I am going to run further MD simulations.


I will come back soon. Thank you again.

SA
 



Cheers

Tom

On 25/11/10 22:42, sa wrote:
> Dear All,
>
> In a previous message
>
(http://lists.gromacs.org/pipermail/gmx-users/2010-November/055839.html),
> I described the results obtained with MD performed with the CHARMM27
> ff and the chargegrp "yes" and "no" options of a peptide in TIP3P
> water simulated gromacs. Since these results puzzled me a lot, i
would
> like to share with you others results obtained from the gromacs
> community advices to explain these results.
>
> In few words, the context of these simulations. One of my
labmate did,
> 8 months ago (march/april), several simulations of a peptide (25 AA)
> with the CHARMM27 ff (and CMAP). The peptide is a transmembrane
> segment (TM) and belongs to a large membrane protein. This TM
segment
> has an initial helical conformation. The simulations were
   

Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3)

2010-11-26 Thread Baofu Qiao
Hi Carsten,

Thanks for your suggestion! But my simulation will run for
about 200 ns at 10 ns per day (24 hours is the maximum duration for a
single job on the cluster I am using), which would generate about 20
separate trajectories!

Can anyone find the reason for this error?

regards,
Baofu Qiao


On 11/26/2010 09:07 AM, Carsten Kutzner wrote:
> Hi,
>
> as a workaround you could run with -noappend and later
> concatenate the output files. Then you should have no
> problems with locking.
>
> Carsten
>
>
> On Nov 25, 2010, at 9:43 PM, Baofu Qiao wrote:
>
>   
>> Hi all,
>>
>> I just recompiled GMX4.0.7. Such error doesn't occur. But 4.0.7 is about 30% 
>> slower than 4.5.3. So I really appreciate if anyone can help me with it!
>>
>> best regards,
>> Baofu Qiao
>>
>>
>> On 2010-11-25 20:17, Baofu Qiao wrote:
>> 
>>> Hi all,
>>>
>>> I got the error message when I am extending the simulation using the 
>>> following command:
>>> mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi pre.cpt 
>>> -append 
>>>
>>> The previous simuluation is succeeded. I wonder why pre.log is locked, and 
>>> the strange warning of "Function not implemented"?
>>>
>>> Any suggestion is appreciated!
>>>
>>> *
>>> Getting Loaded...
>>> Reading file pre.tpr, VERSION 4.5.3 (single precision)
>>>
>>> Reading checkpoint file pre.cpt generated: Thu Nov 25 19:43:25 2010
>>>
>>> ---
>>> Program mdrun, VERSION 4.5.3
>>> Source code file: checkpoint.c, line: 1750
>>>
>>> Fatal error:
>>> Failed to lock: pre.log. Function not implemented.
>>> For more information and tips for troubleshooting, please check the GROMACS
>>> website at http://www.gromacs.org/Documentation/Errors
>>> ---
>>>
>>> "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>>>
>>> Error on node 0, will try to stop all the nodes
>>> Halting parallel program mdrun on CPU 0 out of 64
>>>
>>> gcq#147: "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>>>
>>> --
>>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>>> with errorcode -1.
>>>
>>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>>> You may or may not see output from other processes, depending on
>>> exactly when Open MPI kills them.
>>> --
>>> --
>>> mpiexec has exited due to process rank 0 with PID 32758 on
>>>
>>>   
>> 
>
>
>
>
>   


-- 

 Dr. Baofu Qiao
 Institute for Computational Physics
 Universität Stuttgart
 Pfaffenwaldring 27
 70569 Stuttgart

 Tel: +49(0)711 68563607
 Fax: +49(0)711 68563658



[gmx-users] Free Energy Calculation: dVpot/dlambda is always zero

2010-11-26 Thread Anirban Ghosh
Hi ALL,

I am trying to run a free energy calculation, and for that I am using the
following options in the md.mdp file:

; Free energy control stuff
free_energy = yes
init_lambda = 0.0
delta_lambda= 0
sc_alpha=0.5
sc-power=1.0
sc-sigma= 0.3


But I still find that in my log file the value of dVpot/dlambda always
comes out as zero.
What am I doing wrong?
Any suggestion is welcome. Thanks a lot in advance.


Regards,

Anirban

Re: [gmx-users] Discrepancy between -chargegrp and -nochargegrp in simulations with CHARMM ff, Why ?

2010-11-26 Thread Francesco Oteri
To see if the problem is force-field related, you could try to run the
same simulations using an Amber ff.

If you find the same results, it is probably a software bug.

Maybe the bug was introduced in version 4, when domain
decomposition was introduced.

You can check if it is a software problem using the 3.3.3 version.
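For the Amber test, just as a rough sketch (the force-field name and file
names here are only examples; any Amber variant shipped with 4.5 would do):

pdb2gmx -ff amber99sb -ignh -ter -f peptide.pdb -o peptide.gro -p peptide.top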




On 25/11/2010 23:42, sa wrote:

Dear All,

In a previous message
(http://lists.gromacs.org/pipermail/gmx-users/2010-November/055839.html),
I described the results obtained with MD performed with the CHARMM27
ff and the chargegrp "yes" and "no" options for a peptide in TIP3P
water simulated with gromacs. Since these results puzzled me a lot, I would
like to share with you further results, obtained by following the gromacs
community's advice, to explain these results.


In a few words, the context of these simulations: one of my labmates did,
8 months ago (March/April), several simulations of a peptide (25 AA)
with the CHARMM27 ff (and CMAP). The peptide is a transmembrane
segment (TM) and belongs to a large membrane protein. This TM segment
has an initial helical conformation. The simulations were performed in
a cubic box filled with approx. 14000 TIP3P waters (Jorgensen's model)
and 2 Cl ions. The topology file of the system was constructed with
-chargegrp "yes" in pdb2gmx, and the MD was done with gromacs
4.0.5. For some reason he had to leave the lab, and my boss asked me
to continue his work. When I checked his results, I was very
intrigued, because he found that the peptide keeps
its initial helical conformation over the whole simulation time (100 ns).
These results are not in agreement with circular
dichroism experiments, which show that the same peptide in water
has no helical segment and is completely unfolded. I am aware that the
simulation time is short compared to the experimental time scale; however,
since I haven't seen any unfolding events in this simulation, I was
not very confident about these results.


To explain this inconsistency, I suspected that the error probably came
from the use of the default -chargegrp with the CHARMM ff in these
simulations, since I have read several recent threads about the charge
group problems in the CHARMM ff implementation in gromacs. To examine
this hypothesis I have done two simulations with the latest gromacs version
(4.5.3) and two top files, containing charge groups and no charge
groups for the peptide residues. I used the *same* initial pdb file,
box size and simulation parameters. The two simulations were carried
out for 24 ns in the NPT ensemble with the md.mdp parameters
described below, after energy minimisation, NVT and NPT equilibration
steps.


constraints = all-bonds
integrator  = md
nsteps  = 12000000   ; 24000 ps or 24 ns
dt  = 0.002

nstlist = 10
nstcalcenergy   = 10
nstcomm = 10

continuation= no; Restarting after NPT
vdw-type= cut-off
rvdw= 1.0
rlist   = 0.9
coulombtype  = PME
rcoulomb = 0.9
fourierspacing   = 0.12
fourier_nx   = 0
fourier_ny   = 0
fourier_nz   = 0
pme_order= 4
ewald_rtol   = 1e-05
optimize_fft= yes

nstvout = 5
nstxout = 5
nstenergy   = 2
nstlog  = 5000  ; update log file every 10 ps
nstxtcout   = 1000 ; frequency to write coordinates to xtc 
trajectory every 2 ps


Tcoupl  = nose-hoover
tc-grps = Protein Non-Protein
tau-t   = 0.4 0.4
ref-t   = 298 298
; Pressure coupling is on
Pcoupl  = Parrinello-Rahman
pcoupltype  = isotropic
tau_p   = 3.0
compressibility = 4.5e-5
ref_p   = 1.0135
gen_vel = no

I found that with charge groups the peptide remains in its initial
helical conformation, whereas with no charge groups the peptide
unfolds quickly and adopts a random-coil conformation. I have shown these
results to my boss, but I was not able to explain why we observe these
differences between the two simulations. Indeed, since I use PME in the
MD, charge groups should not affect the dynamics (correct?). He
asked me to do other simulations with different versions of gromacs to
see whether it is not a bug in the charge group implementation in gromacs. For
testing I have done four other MD runs with the *same* initial pdb file,
box size and simulation parameters, with the previous GMX versions
4.5.0 and pre-4.5.2 and with the two types of top files.


In addition, since I had done the simulations with non-optimal parameters
for the vdW and electrostatic interactions, I have also tested the influence of the
vdW and electrostatic MD parameters on the MD by performing two
additional simulations with the parameters used in the Bjelkmar et al.
paper (J. Chem. Theory Comput. 2010, 6, 459–466) (labeled GMX4.5.3
CHARMM in the figures).


; Run parameters
integrator 

[gmx-users] Phosphorylated Serine in charmm

2010-11-26 Thread Yasmine Chebaro
Hello all,
I am using the CHARMM ff in Gromacs 4.5.2. Everything goes right with standard
proteins, but now
I want to run a simulation on a protein with a phosphorylated residue.
As mentioned in this post,
http://www.mail-archive.com/gmx-users@gromacs.org/msg35532.html,
I changed the .rtp and .hdb files to add a specific section for the phosphorylated
amino acid, having checked
the parameters against charmm.
I still have the problem in pdb2gmx where it seems it can't see the new
definition and gives me
the residue topology database error.
Is there another file where I have to specify the new amino acids? I searched
all the files in the charmm
directory in the gromacs top directory, but I still can't find another place where
amino acids are defined.
Thanks for your help.

[gmx-users] system is exploding

2010-11-26 Thread Olga Ivchenko
Dear gromacs users,

I am trying to run simulations of small molecules in water. I created the
topology files myself for the CHARMM ff. When I try to start energy
minimization I get the following error:


 Steepest Descents:

Tolerance (Fmax) = 1.0e+00

Number of steps = 1000


That means my system is exploding. Please can you advise me on this: what
do I need to check?

best,

Olga

Re: [gmx-users] system is exploding

2010-11-26 Thread Baofu Qiao

Have you run the energy minimization (or further simulation to optimize
the structure and test the FF) on the small molecule before you added it
into water?

On 11/26/2010 11:26 AM, Olga Ivchenko wrote:
> Dear gromacs users,
>
> I am trying to run simulations for small molecules in water. Topology files
> I created by my self for charm ff. When I am trying to start energy
> minimization I got an error:
>
>
>  Steepest Descents:
>
> Tolerance (Fmax) = 1.0e+00
>
> Number of steps = 1000
>
>
> That's means my system is exploding. Please can you advice me on this, what
> I need to check.
>
> best,
>
> Olga
>
>   


-- 

 Dr. Baofu Qiao
 Institute for Computational Physics
 Universität Stuttgart
 Pfaffenwaldring 27
 70569 Stuttgart

 Tel: +49(0)711 68563607
 Fax: +49(0)711 68563658



Re: [gmx-users] system is exploding

2010-11-26 Thread Olga Ivchenko
I tried today to run a minimization in vacuum for my small molecules. This
gives the same error.

2010/11/26 Baofu Qiao 

>
> Have you run the energy minimization (or further simulation to optimize
> the structure and test the FF) on the small molecule before you added it
> into water?
>
> On 11/26/2010 11:26 AM, Olga Ivchenko wrote:
> > Dear gromacs users,
> >
> > I am trying to run simulations for small molecules in water. Topology
> files
> > I created by my self for charm ff. When I am trying to start energy
> > minimization I got an error:
> >
> >
> >  Steepest Descents:
> >
> > Tolerance (Fmax) = 1.0e+00
> >
> > Number of steps = 1000
> >
> >
> > That's means my system is exploding. Please can you advice me on this,
> what
> > I need to check.
> >
> > best,
> >
> > Olga
> >
> >
>
>
> --
> 
>  Dr. Baofu Qiao
>  Institute for Computational Physics
>  Universität Stuttgart
>  Pfaffenwaldring 27
>  70569 Stuttgart
>
>  Tel: +49(0)711 68563607
>  Fax: +49(0)711 68563658
>
>

Re: [gmx-users] system is exploding

2010-11-26 Thread Baofu Qiao

If you are really sure about the topology, then the problem is the initial
structure. Try using PackMol to build it.

On 11/26/2010 11:42 AM, Olga Ivchenko wrote:
> I tried today to run minimization in vacuum for my small molecules. This has
> the same error.
>
> 2010/11/26 Baofu Qiao 
>
>   
>> Have you run the energy minimization (or further simulation to optimize
>> the structure and test the FF) on the small molecule before you added it
>> into water?
>>
>> On 11/26/2010 11:26 AM, Olga Ivchenko wrote:
>> 
>>> Dear gromacs users,
>>>
>>> I am trying to run simulations for small molecules in water. Topology
>>>   
>> files
>> 
>>> I created by my self for charm ff. When I am trying to start energy
>>> minimization I got an error:
>>>
>>>
>>>  Steepest Descents:
>>>
>>> Tolerance (Fmax) = 1.0e+00
>>>
>>> Number of steps = 1000
>>>
>>>
>>> That's means my system is exploding. Please can you advice me on this,
>>>   
>> what
>> 
>>> I need to check.
>>>
>>> best,
>>>
>>> Olga
>>>
>>>
>>>   
>>
>> --
>> 
>>  Dr. Baofu Qiao
>>  Institute for Computational Physics
>>  Universität Stuttgart
>>  Pfaffenwaldring 27
>>  70569 Stuttgart
>>
>>  Tel: +49(0)711 68563607
>>  Fax: +49(0)711 68563658
>>
>>
>> 
>   


-- 

 Dr. Baofu Qiao
 Institute for Computational Physics
 Universität Stuttgart
 Pfaffenwaldring 27
 70569 Stuttgart

 Tel: +49(0)711 68563607
 Fax: +49(0)711 68563658



Re: [gmx-users] heme

2010-11-26 Thread shahid nayeem
Hi Erik
I am using the ffG43a1 force field. It has a heme topology, but in Cyt C the FE is
bonded to both the NR of HIS and the SD of MET. The parameters for the bond FE-SD,
the angles, e.g. NR(HIS)-FE-SD(MET), and the dihedral angle CH2(MET)-SD(MET)-FE-
NR(HIS), are missing in this force field; hence I am getting an error while
running grompp. Please suggest what I should do.
Shahid Nayeem

On Thu, Nov 25, 2010 at 3:12 AM, Erik Marklund  wrote:

> shahid nayeem skrev 2010-11-24 18.02:
>
>  Dear all
>> I am trying MD of cyt C containing heme. I am able to generate bonds with
>> specbond.dat by pdb2gmx. After using editconf and genbox, when I tried
>> grompp I got error about unrecognized bonds/angles. I made bond with MET SD
>> and FE of Heme. As earlier suggested on this list I wrote to get parameter
>> for these bonds but I couldnt get it. If someone on this mailing list can
>> help me I will be grateful. Cyt C is very widely modelled protein with
>> Gomacs in literature hence I expect to get some help from the forum.
>> shahid nayeem
>>
> A long time ago I simulated CytC with one of the gromos force fields. It
> worked right out of the box. What forcefield are you using?
>
> --
> ---
> Erik Marklund, PhD student
> Dept. of Cell and Molecular Biology, Uppsala University.
> Husargatan 3, Box 596,75124 Uppsala, Sweden
> phone:+46 18 471 4537fax: +46 18 511 755
> er...@xray.bmc.uu.sehttp://folding.bmc.uu.se/
>
>

Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3) Neither of 4.5.1, 4.5.2 and 4.5.3 works

2010-11-26 Thread Baofu Qiao
Hi all,

I just made some tests using gmx 4.5.1, 4.5.2 and 4.5.3. None of
them works for the continuation.
---
Program mdrun, VERSION 4.5.1
Source code file: checkpoint.c, line: 1727

Fatal error:
Failed to lock: pre.log. Already running simulation?
---
Program mdrun, VERSION 4.5.2
Source code file: checkpoint.c, line: 1748

Fatal error:
Failed to lock: pre.log. Already running simulation?
---
Program mdrun, VERSION 4.5.3
Source code file: checkpoint.c, line: 1750

Fatal error:
Failed to lock: pre.log. Function not implemented.
=

The test system is 895 SPC/E waters in a 3 nm box (genbox -box 3
-cs). The pre.mdp is attached.

I have tested two clusters:
Cluster A: 1)compiler/gnu/4.3 2) mpi/openmpi/1.2.8-gnu-4.3 3)FFTW 3.3.2
4) GMX 4.5.1/4.5.2/4.5.3
Cluster B: 1)compiler/gnu/4.3 2) mpi/openmpi/1.4.2-gnu-4.3 3)FFTW 3.3.2
4) GMX 4.5.3

GMX command:
mpiexec -np 8 mdrun -deffnm pre -npme 2 -maxh 0.15 -cpt 5 -cpi pre.cpt
-append

Can anyone provide further help? Thanks a lot!

best regards,



On 11/26/2010 09:07 AM, Carsten Kutzner wrote:
> Hi,
>
> as a workaround you could run with -noappend and later
> concatenate the output files. Then you should have no
> problems with locking.
>
> Carsten
>
>
> On Nov 25, 2010, at 9:43 PM, Baofu Qiao wrote:
>
>   
>> Hi all,
>>
>> I just recompiled GMX4.0.7. Such error doesn't occur. But 4.0.7 is about 30% 
>> slower than 4.5.3. So I really appreciate if anyone can help me with it!
>>
>> best regards,
>> Baofu Qiao
>>
>>
>> On 2010-11-25 20:17, Baofu Qiao wrote:
>> 
>>> Hi all,
>>>
>>> I got the error message when I am extending the simulation using the 
>>> following command:
>>> mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi pre.cpt 
>>> -append 
>>>
>>> The previous simuluation is succeeded. I wonder why pre.log is locked, and 
>>> the strange warning of "Function not implemented"?
>>>
>>> Any suggestion is appreciated!
>>>
>>> *
>>> Getting Loaded...
>>> Reading file pre.tpr, VERSION 4.5.3 (single precision)
>>>
>>> Reading checkpoint file pre.cpt generated: Thu Nov 25 19:43:25 2010
>>>
>>> ---
>>> Program mdrun, VERSION 4.5.3
>>> Source code file: checkpoint.c, line: 1750
>>>
>>> Fatal error:
>>> Failed to lock: pre.log. Function not implemented.
>>> For more information and tips for troubleshooting, please check the GROMACS
>>> website at http://www.gromacs.org/Documentation/Errors
>>> ---
>>>
>>> "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>>>
>>> Error on node 0, will try to stop all the nodes
>>> Halting parallel program mdrun on CPU 0 out of 64
>>>
>>> gcq#147: "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>>>
>>> --
>>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>>> with errorcode -1.
>>>
>>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>>> You may or may not see output from other processes, depending on
>>> exactly when Open MPI kills them.
>>> --
>>> --
>>> mpiexec has exited due to process rank 0 with PID 32758 on
>>>   



pre.mdp
Description: application/mdp


Re: [gmx-users] system is exploding

2010-11-26 Thread T.M.D. Graen
Check your custom topology (this is where the error is 99% of the time),
use QM-minimized starting structures, make sure your structure matches
your topology (atom names, numbers, ordering, etc.), test single molecules
in vacuum first, and/or reduce the step size of your SD minimizer.
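For the last point, a minimal minimization .mdp with a reduced step size
could look like this (the numbers are only a starting point, not tuned
values):

integrator  = steep
emtol       = 100.0
emstep      = 0.001   ; smaller than the 0.01 nm default
nsteps      = 5000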


On 11/26/2010 11:47 AM, Baofu Qiao wrote:


If you are really sure about the topology, the problem is the initial
structure. Try to use PackMol to build it.

On 11/26/2010 11:42 AM, Olga Ivchenko wrote:

I tried today to run minimization in vacuum for my small molecules. This has
the same error.

2010/11/26 Baofu Qiao



Have you run the energy minimization (or further simulation to optimize
the structure and test the FF) on the small molecule before you added it
into water?

On 11/26/2010 11:26 AM, Olga Ivchenko wrote:


Dear gromacs users,

I am trying to run simulations for small molecules in water. Topology


files


I created by my self for charm ff. When I am trying to start energy
minimization I got an error:


  Steepest Descents:

Tolerance (Fmax) = 1.0e+00

Number of steps = 1000


That's means my system is exploding. Please can you advice me on this,


what


I need to check.

best,

Olga





--

  Dr. Baofu Qiao
  Institute for Computational Physics
  Universität Stuttgart
  Pfaffenwaldring 27
  70569 Stuttgart

  Tel: +49(0)711 68563607
  Fax: +49(0)711 68563658











Re: [gmx-users] system is exploding

2010-11-26 Thread Justin A. Lemkul
Quoting Baofu Qiao :

>
> If you are really sure about the topology, the problem is the initial
> structure. Try to use PackMol to build it.
>

For simple molecules in water, there is no need for a complicated program like
packmol.  Such a configuration can easily be built in Gromacs.
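For example (box size and file names are illustrative only):

editconf -f molecule.gro -o boxed.gro -bt cubic -d 1.0
genbox -cp boxed.gro -cs spc216.gro -p topol.top -o solvated.gro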

I have yet to see any evidence of an error.  From the original post, it seems
that EM failed to start at all.

-Justin

> On 11/26/2010 11:42 AM, Olga Ivchenko wrote:
> > I tried today to run minimization in vacuum for my small molecules. This
> has
> > the same error.
> >
> > 2010/11/26 Baofu Qiao 
> >
> >
> >> Have you run the energy minimization (or further simulation to optimize
> >> the structure and test the FF) on the small molecule before you added it
> >> into water?
> >>
> >> On 11/26/2010 11:26 AM, Olga Ivchenko wrote:
> >>
> >>> Dear gromacs users,
> >>>
> >>> I am trying to run simulations for small molecules in water. Topology
> >>>
> >> files
> >>
> >>> I created by my self for charm ff. When I am trying to start energy
> >>> minimization I got an error:
> >>>
> >>>
> >>>  Steepest Descents:
> >>>
> >>> Tolerance (Fmax) = 1.0e+00
> >>>
> >>> Number of steps = 1000
> >>>
> >>>
> >>> That's means my system is exploding. Please can you advice me on this,
> >>>
> >> what
> >>
> >>> I need to check.
> >>>
> >>> best,
> >>>
> >>> Olga
> >>>
> >>>
> >>>
> >>
> >> --
> >> 
> >>  Dr. Baofu Qiao
> >>  Institute for Computational Physics
> >>  Universität Stuttgart
> >>  Pfaffenwaldring 27
> >>  70569 Stuttgart
> >>
> >>  Tel: +49(0)711 68563607
> >>  Fax: +49(0)711 68563658
> >>
> >>
> >>
> >
>
>
> --
> 
>  Dr. Baofu Qiao
>  Institute for Computational Physics
>  Universität Stuttgart
>  Pfaffenwaldring 27
>  70569 Stuttgart
>
>  Tel: +49(0)711 68563607
>  Fax: +49(0)711 68563658
>
>




Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] [Fwd: charge group radii]

2010-11-26 Thread Gavin Melaugh

--- Begin Message ---
Hi all,

I have recently been testing out the new version of Gromacs. To do so I
have used files from previous simulations in Gromacs-4.0.7. When feeding
the three files (mdp, gro, and top) into grompp, the following note is
displayed:

NOTE 3 [file pbc.mdp]:
  The sum of the two largest charge group radii (0.602793) is larger than
  rlist (1.50) - rvdw (1.40)



This note did not occur in the previous version, and it has led me to
ask some questions.

1) I previously had hydrocarbon chains (6-7 carbons long) assigned to
individual charge groups, and never got any note. Does this mean that my
previous simulations have artefacts? (All charge groups have zero net
charge.)
2) If I am happy with the values of rlist and rvdw, will it not be
pretty difficult to assign charge groups with radius < 0.1 nm?


Many Thanks

Gavin

--- End Message ---

[gmx-users] Error while using forcefield GROMOS 43a1p

2010-11-26 Thread Jignesh Patel
Dear Justin,

I am trying to do a simulation of a system which contains phosphorylated
serine, using the GROMOS 43a1p force field. While running the pdb2gmx command,
I am getting the following error:
Fatal error:
Atom N not found in residue seq.nr. 1 while adding improper

thank you in anticipation.

With regards,
Jignesh Patel

[gmx-users] Thermostat for REMD simulations in implicit solvent

2010-11-26 Thread César Ávila
Dear all,
I am trying to set up a REMD simulation for a peptide (CHARMM ff) in
implicit solvent (OBC GB).
Following Bjelkmar et al.,* I am using stochastic dynamics integration with
an inverse friction constant of 91 ps-1, a 5 fs timestep,
virtual sites for hydrogens, and all-vs-all nonbonded interactions (no cutoff). The full
.mdp file is attached at the end of this mail. The problem I am
facing is that after a while the temperatures of all replicas start
dropping, even below the lowest target temperature.
Would you suggest changing some parameter, or the whole thermostat, to prevent
this from happening?

@ s0 legend "Temperature"
0.00  310.811523
5.00  435.627045
   10.00  417.161713
   15.00  414.248901
   20.00  399.390686
   25.00  375.087219
   30.00  338.131256
   35.00  339.961151
   40.00  319.424561
   45.00  290.442322
   50.00  289.587921
   55.00  248.746246
   60.00  253.192047
   65.00  242.619476
   70.00  256.051941
   75.00  237.648468
   80.00  231.938690
   85.00  217.029953
   90.00  211.447983
   95.00  210.393890
  100.00  208.518417
  105.00  196.718445
  110.00  219.245682
  115.00  202.957993
  120.00  193.128159
  125.00  198.278198
  130.00  175.304108
  135.00  164.925613
  140.00  195.024490
  145.00  201.153046
  150.00  211.160797
  155.00  189.525085
  160.00  191.156006
  165.00  186.545242
  170.00  186.885422
  175.00  182.838486
  180.00  174.960098
  185.00  175.244049
  190.00  179.517975
  195.00  165.785416
  200.00  189.871048
  205.00  179.510178
  210.00  152.527710
  215.00  160.109955
  220.00  163.564148









* "Implementation of the CHARMM ff in GROMACS" (2010) JCTC, 6, 459-466


; Run parameters
integrator  =  sd
dt  =  0.005; ps !
nsteps  =  2
nstcomm = 1
comm_mode   = angular   ; non-periodic system

; Bond parameters
constraints = all-bonds
constraint_algorithm= lincs
lincs-iter  = 1
lincs-order = 6

; required cutoffs for implicit
nstlist =  0
ns_type =  grid
rlist   =  0
rcoulomb=  0
rvdw=  0
epsilon_rf  =  0
rgbradii=  0

; cutoffs required for qq and vdw
coulombtype =  cut-off
vdwtype =  cut-off

; temperature coupling
tcoupl  = v-rescale
tc-grps = system
tau-t   = 91
ref-t   = 300

; Pressure coupling is off
Pcoupl  = no

; Periodic boundary conditions are off for implicit
pbc = no

; Settings for implicit solvent
implicit_solvent= GBSA
gb_algorithm= OBC
gb_epsilon_solvent  = 78.3
sa_surface_tension  = 2.25936

;Output control
nstxout = 1000
nstfout = 0
nstvout = 0
nstxtcout   = 0
nstlog  = 1000
nstcalcenergy   = -1
nstenergy   = 1000

; GENERATE VELOCITIES FOR STARTUP RUN
gen_vel = yes
gen_temp = 300
gen_seed = 1993

Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3)

2010-11-26 Thread Roland Schulz
Baofu,

What operating system are you using? On what file system are you trying to store
the log file? The error (should) mean that the file system you use doesn't
support locking of files.
Try to store the log file on some other file system. If you want, you can
still store the (large) trajectory files on the same file system.
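To see which file system the run directory actually lives on, standard
Linux tools are enough, e.g.:

df -T .
stat -f -c %T .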

Roland

On Fri, Nov 26, 2010 at 4:55 AM, Baofu Qiao  wrote:

> Hi Carsten,
>
> Thanks for your suggestion! But because my simulation will be run for
> about 200ns, 10ns per day(24 hours is the maximum duration for one
> single job on the Cluster I am using), which will generate about 20
> trajectories!
>
> Can anyone find the reason causing such error?
>
> regards,
> Baofu Qiao
>
>
> On 11/26/2010 09:07 AM, Carsten Kutzner wrote:
> > Hi,
> >
> > as a workaround you could run with -noappend and later
> > concatenate the output files. Then you should have no
> > problems with locking.
> >
> > Carsten
> >
> >
> > On Nov 25, 2010, at 9:43 PM, Baofu Qiao wrote:
> >
> >
> >> Hi all,
> >>
> >> I just recompiled GMX4.0.7. Such error doesn't occur. But 4.0.7 is about
> 30% slower than 4.5.3. So I really appreciate if anyone can help me with it!
> >>
> >> best regards,
> >> Baofu Qiao
> >>
> >>
> >> On 2010-11-25 20:17, Baofu Qiao wrote:
> >>
> >>> Hi all,
> >>>
> >>> I got the error message when I am extending the simulation using the
> following command:
> >>> mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi
> pre.cpt -append
> >>>
> >>> The previous simuluation is succeeded. I wonder why pre.log is locked,
> and the strange warning of "Function not implemented"?
> >>>
> >>> Any suggestion is appreciated!
> >>>
> >>> *
> >>> Getting Loaded...
> >>> Reading file pre.tpr, VERSION 4.5.3 (single precision)
> >>>
> >>> Reading checkpoint file pre.cpt generated: Thu Nov 25 19:43:25 2010
> >>>
> >>> ---
> >>> Program mdrun, VERSION 4.5.3
> >>> Source code file: checkpoint.c, line: 1750
> >>>
> >>> Fatal error:
> >>> Failed to lock: pre.log. Function not implemented.
> >>> For more information and tips for troubleshooting, please check the
> GROMACS
> >>> website at http://www.gromacs.org/Documentation/Errors
> >>> ---
> >>>
> >>> "It Doesn't Have to Be Tip Top" (Pulp Fiction)
> >>>
> >>> Error on node 0, will try to stop all the nodes
> >>> Halting parallel program mdrun on CPU 0 out of 64
> >>>
> >>> gcq#147: "It Doesn't Have to Be Tip Top" (Pulp Fiction)
> >>>
> >>>
> --
> >>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> >>> with errorcode -1.
> >>>
> >>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> >>> You may or may not see output from other processes, depending on
> >>> exactly when Open MPI kills them.
> >>>
> --
> >>>
> --
> >>> mpiexec has exited due to process rank 0 with PID 32758 on
> >>>
> >>>
> >>
> >
> >
> >
> >
> >
>
>
> --
> 
>  Dr. Baofu Qiao
>  Institute for Computational Physics
>  Universität Stuttgart
>  Pfaffenwaldring 27
>  70569 Stuttgart
>
>  Tel: +49(0)711 68563607
>  Fax: +49(0)711 68563658
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

[gmx-users] PR

2010-11-26 Thread pawan raghav
I have read that SPC/SPCE is a rigid, pre-equilibrated 3-point water
model. Does that mean that position-restrained dynamics is not required?
If we are not interested in position-restrained dynamics, then what
criteria does the system need to meet?

-- 
Pawan

Re: [gmx-users] PR

2010-11-26 Thread Justin A. Lemkul
Quoting pawan raghav :

> I have read that SPC/SPCE is an rigid and pre-equilibrated 3 point water
> model. Is it likely mean that position restrained dynamics is not required.

As soon as you introduce a protein or anything else, this is no longer true.

> If we are not intersted in position restrained dynamics then what are the
> criteria for system needed.
>

Equilibration is successful when the physical observables of interest have
converged.  Without restraints, the protein structure can be subjected to
artifactual forces imparted by solvent reorganization.
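As a sketch, a restrained equilibration simply switches on the position
restraints that pdb2gmx writes (the define matches the default POSRES
#ifdef; the other values are just typical examples):

define      = -DPOSRES
integrator  = md
dt          = 0.002
nsteps      = 50000     ; 100 ps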

-Justin

> --
> Pawan
>




Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3)

2010-11-26 Thread Baofu Qiao
Hi Roland,

Thanks a lot!

OS: Scientific Linux 5.5. But the data are stored on a system called
WORKSPACE, which is different from the regular file system. Maybe this is the
reason.

I'll try what you suggest!

regards,
Baofu Qiao


On 11/26/2010 04:07 PM, Roland Schulz wrote:
> Baofu,
>
> what operating system are you using? On what file system do you try to store
> the log file? The error (should) mean that the file system you use doesn't
> support locking of files.
> Try to store the log file on some other file system. If you want you can
> still store the (large) trajectory files on the same file system.
>
> Roland
>
> On Fri, Nov 26, 2010 at 4:55 AM, Baofu Qiao  wrote:
>
>   
>> Hi Carsten,
>>
>> Thanks for your suggestion! But because my simulation will be run for
>> about 200ns, 10ns per day(24 hours is the maximum duration for one
>> single job on the Cluster I am using), which will generate about 20
>> trajectories!
>>
>> Can anyone find the reason causing such error?
>>
>> regards,
>> Baofu Qiao
>>
>>
>> On 11/26/2010 09:07 AM, Carsten Kutzner wrote:
>> 
>>> Hi,
>>>
>>> as a workaround you could run with -noappend and later
>>> concatenate the output files. Then you should have no
>>> problems with locking.
>>>
>>> Carsten
>>>
>>>
>>> On Nov 25, 2010, at 9:43 PM, Baofu Qiao wrote:
>>>
>>>
>>>   
 Hi all,

 I just recompiled GMX4.0.7. Such error doesn't occur. But 4.0.7 is about
 
>> 30% slower than 4.5.3. So I really appreciate if anyone can help me with it!
>> 
 best regards,
 Baofu Qiao


 On 2010-11-25 20:17, Baofu Qiao wrote:

 
> Hi all,
>
> I got the error message when I am extending the simulation using the
>   
>> following command:
>> 
> mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi
>   
>> pre.cpt -append
>> 
> The previous simuluation is succeeded. I wonder why pre.log is locked,
>   
>> and the strange warning of "Function not implemented"?
>> 
> Any suggestion is appreciated!
>
> *
> Getting Loaded...
> Reading file pre.tpr, VERSION 4.5.3 (single precision)
>
> Reading checkpoint file pre.cpt generated: Thu Nov 25 19:43:25 2010
>
> ---
> Program mdrun, VERSION 4.5.3
> Source code file: checkpoint.c, line: 1750
>
> Fatal error:
> Failed to lock: pre.log. Function not implemented.
> For more information and tips for troubleshooting, please check the
>   
>> GROMACS
>> 
> website at http://www.gromacs.org/Documentation/Errors
> ---
>
> "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>
> Error on node 0, will try to stop all the nodes
> Halting parallel program mdrun on CPU 0 out of 64
>
> gcq#147: "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>
>
>   
>> --
>> 
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode -1.
>
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
>
>   
>> --
>> 
>   
>> --
>> 
> mpiexec has exited due to process rank 0 with PID 32758 on
>
>
>   

 
>>>
>>>
>>>
>>>
>>>   



Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3)

2010-11-26 Thread Roland Schulz
Hi Baofu,

could you provide more information about the file system?
The command "mount" shows the file system used. If it is a
network file system, then the operating system and file system used on the
file server are also of interest.

Roland

On Fri, Nov 26, 2010 at 11:00 AM, Baofu Qiao  wrote:

> Hi Roland,
>
> Thanks a lot!
>
> OS: Scientific Linux 5.5. But the system used to store the data is called
> WORKSPACE, which is different from the regular hardware system. Maybe this is the
> reason.
>
> I'll try what you suggest!
>
> regards,
> Baofu Qiao
>
>

Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3)

2010-11-26 Thread Baofu Qiao
Hi Roland,

The output of "mount" is :
/dev/mapper/grid01-root on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/md0 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
172.30.100.254:/home on /home type nfs
(rw,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.254)
172.30.100.210:/opt on /opt type nfs
(rw,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.210)
172.30.100.210:/var/spool/torque/server_logs on
/var/spool/pbs/server_logs type nfs
(ro,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.210)
none on /ipathfs type ipathfs (rw)
172.31.100@o2ib,172.30.100@tcp:172.31.100@o2ib,172.30.100@tcp:/lprod
on /lustre/ws1 type lustre (rw,noatime,nodiratime)
172.31.100@o2ib,172.30.100@tcp:172.31.100@o2ib,172.30.100@tcp:/lbm
on /lustre/lbm type lustre (rw,noatime,nodiratime)
172.30.100.219:/export/necbm on /nfs/nec type nfs
(ro,bg,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.219)
172.30.100.219:/export/necbm-home on /nfs/nec/home type nfs
(rw,bg,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.219)



Re: [gmx-users] Thermostat for REMD simulations in implicit solvent

2010-11-26 Thread Per Larsson
Hi!

Have never tried remd with implicit solvent, but note that the unit of tau-t in 
the mdp-file is ps, not ps-1. This means you should set tau-t = 0.0109 rather 
than 91.

Try this and see if the problem goes away!

/Per

26 Nov 2010, at 15:55, César Ávila wrote:

> Dear all, 
> I am trying to set up a REMD simulation for a peptide (CHARMM ff) in implicit 
> solvent (OBC GB). 
> Following Bjelkmar et al.* I am using stochastic dynamics integration with an 
> inverse friction constant of 91 ps-1, a 5 fs timestep, 
> virtual-sites for hydrogens, all-vs-all nonbonded (no cutoff).  The full .mdp 
> file is attached at the end of this mail. The problem I am 
> facing is that after a while, the temperatures of all replicas start dropping 
> even below the lowest target temperature. 
> Would you suggest changing some parameter or the whole thermostat to prevent 
> this from happening?
> 
> @ s0 legend "Temperature"
> 0.00  310.811523
> 5.00  435.627045
>10.00  417.161713
>15.00  414.248901
>20.00  399.390686
>25.00  375.087219
>30.00  338.131256
>35.00  339.961151
>40.00  319.424561
>45.00  290.442322
>50.00  289.587921
>55.00  248.746246
>60.00  253.192047
>65.00  242.619476
>70.00  256.051941
>75.00  237.648468
>80.00  231.938690
>85.00  217.029953
>90.00  211.447983
>95.00  210.393890
>   100.00  208.518417
>   105.00  196.718445
>   110.00  219.245682
>   115.00  202.957993
>   120.00  193.128159
>   125.00  198.278198
>   130.00  175.304108
>   135.00  164.925613
>   140.00  195.024490
>   145.00  201.153046
>   150.00  211.160797
>   155.00  189.525085
>   160.00  191.156006
>   165.00  186.545242
>   170.00  186.885422
>   175.00  182.838486
>   180.00  174.960098
>   185.00  175.244049
>   190.00  179.517975
>   195.00  165.785416
>   200.00  189.871048
>   205.00  179.510178
>   210.00  152.527710
>   215.00  160.109955
>   220.00  163.564148
> 
> 
> 
> 
> 
> 
> 
> 
> 
> * "Implementation of the CHARMM ff in GROMACS" (2010) JCTC, 6, 459-466
> 
> 
> ; Run parameters
> integrator  =  sd
> dt  =  0.005; ps ! 
> nsteps  =  2
> nstcomm = 1
> comm_mode   = angular   ; non-periodic system
> 
> ; Bond parameters
> constraints = all-bonds
> constraint_algorithm= lincs
> lincs-iter  = 1
> lincs-order = 6
> 
> ; required cutoffs for implicit
> nstlist =  0  
> ns_type =  grid
> rlist   =  0 
> rcoulomb=  0 
> rvdw=  0 
> epsilon_rf  =  0
> rgbradii=  0
> 
> ; cutoffs required for qq and vdw
> coulombtype =  cut-off
> vdwtype =  cut-off
> 
> ; temperature coupling
> tcoupl  = v-rescale
> tc-grps = system
> tau-t   = 91
> ref-t   = 300
> 
> ; Pressure coupling is off
> Pcoupl  = no
> 
> ; Periodic boundary conditions are off for implicit
> pbc = no
> 
> ; Settings for implicit solvent
> implicit_solvent= GBSA
> gb_algorithm= OBC
> gb_epsilon_solvent  = 78.3
> sa_surface_tension  = 2.25936
> 
> ;Output control
> nstxout = 1000
> nstfout = 0
> nstvout = 0
> nstxtcout   = 0
> nstlog  = 1000
> nstcalcenergy   = -1
> nstenergy   = 1000
> 
> ; GENERATE VELOCITIES FOR STARTUP RUN
> gen_vel = yes
> gen_temp = 300
> gen_seed = 1993
> 
> 
> 


Re: [gmx-users] Thermostat for REMD simulations in implicit solvent

2010-11-26 Thread César Ávila
I also noticed this error in my setup, so I changed tau-t to 0.1, which is
commonly found in other setups.
tau-t = 0.0109 seems too low.
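
For reference, a minimal sketch of the relevant mdp lines when the friction is
given through tau-t in ps (whether 1/91 or 0.1 is the better value is exactly
the question above; the group name is illustrative):

  integrator = sd
  tc-grps    = system
  tau-t      = 0.011    ; inverse friction constant in ps (roughly 1/91)
  ref-t      = 300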


Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3)

2010-11-26 Thread Florian Dommert

To make things short. The used file system is lustre.

/Flo


Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3): SOLVED

2010-11-26 Thread Baofu Qiao

Hi all,

What Roland said is right! The lustre file system causes the locking 
problem. Now I copied all the files to a folder under /tmp, then ran the 
continuation. It works!
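
A minimal sketch of that workaround, assuming the run keeps the deffnm "pre"
and the user tables from the original command, and that the chosen directory
is visible to the rank that writes the output files:

  mkdir -p /tmp/pre_run
  cp pre.* table*.xvg /tmp/pre_run      # tpr, cpt, previous output and user tables
  cd /tmp/pre_run
  mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi pre.cpt -append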


Thanks!

regards,



Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3): SOLVED

2010-11-26 Thread Roland Schulz
Hi,

we use Lustre too and it doesn't cause any problem. I found this message on
the Lustre list:
http://lists.lustre.org/pipermail/lustre-discuss/2008-May/007366.html

And according to your mount output, lustre on your machine is not mounted
with the flock or localflock option. This seems to be the reason for the
problem. Thus if you would like to run the simulation directly on lustre, you
have to ask the sysadmin to mount it with flock or localflock (I don't
recommend localflock; it doesn't guarantee correct locking).

If you would like to have an option to disable the locking, then please file
a bug report on bugzilla. The reason we lock the logfile is: we want to make
sure that only one simulation is appending to the same files. Otherwise the
files could get corrupted. This is why the locking is on by default and
currently can't be disabled.
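
For anyone hitting this, a quick check is to look at the client mount options;
the mount point below is the one from this thread and the server address is a
placeholder:

  grep lustre /proc/mounts    # the option list should contain flock (or localflock)
  # example of a client mount line with POSIX locking enabled:
  mount -t lustre -o rw,flock,noatime mgsnode@o2ib:/lprod /lustre/ws1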

Roland



[gmx-users] Re: displacement of drug molecule

2010-11-26 Thread Justin A. Lemkul


I am CC'ing the gmx-users list, as I did on the previous message, so please 
continue the discussion there.


sagar barage wrote:

Dear sir,

 As per your suggestion I have designed a position restraint file for 
the drug, but the displacement still occurs during position-restrained MD. 

Please give me any new suggestion.


If things are still moving out of place, you're still not properly using this 
position restraint file.  Make sure you're #including the .itp file properly and 
using the proper "define" statement in the .mdp file, if necessary.
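
For illustration, a minimal sketch of how such a restraint is usually wired in;
the file name, define name, atom index and force constants below are
placeholders, not taken from the poster's files:

; at the end of the drug's [ moleculetype ] in the topology
#ifdef POSRES_DRUG
#include "posre_drug.itp"
#endif

; posre_drug.itp
[ position_restraints ]
;  ai  funct  fx    fy    fz
    1    1    1000  1000  1000

; and in the .mdp used for the restrained run
define = -DPOSRES_DRUG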


If you want suggestions aside from guessing at what you should be doing, please 
include relevant sections of your .top indicating how you're applying the 
posre.itp file and your .mdp file.  Consult the following for tips on proper 
topology organization:


http://www.gromacs.org/Documentation/Errors#Invalid_order_for_directive_defaults

-Justin


--
Sagar H. Barage
sagarbar...@gmail.com 


--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] free energy perturbation

2010-11-26 Thread Justin A. Lemkul



antonio wrote:

Dear gromacs users
I am trying to carry out a free energy perturbation to evaluate the
difference in binding energy between two inhibitors of the same protein.
I have read a lot of posts but I did not understand how to create the
topology file.
Do I have to write a mixed pdb of the two inhibitor structures, modifying
the value of the beta factor to -1, 0, 1 for disappearing and appearing atoms, and
then proceed with pdb2gmx?
Could someone please link a tutorial or a guide to follow?


Please see the manual, section 5.7.4 "Topologies for free energy calculations," 
which includes a relevant example of a topology for transforming between two 
molecules.  There have been significant changes to the free energy code since 
version 3.3.3, when this type of transformation was (almost) completely 
controlled in the topology, so your mileage may vary.


You do not want (or need) a "mixed" .pdb file.  The B-factor field is irrelevant 
for such applications, and a hybrid structure will probably break pdb2gmx, anyway.
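
As a rough illustration only (atom names, types and charges are invented, not 
taken from either inhibitor), a perturbed [ atoms ] entry carries the B-state 
type, charge and mass in the extra columns:

[ atoms ]
;  nr  type  resnr  resid  atom  cgnr  charge   mass    typeB  chargeB  massB
    1  CA    1      LIG    C1    1     -0.115   12.011  CA     -0.115   12.011  ; unchanged
    2  HA    1      LIG    H1    2      0.115    1.008  DUM     0.000    1.008  ; dummy in state B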


-Justin


thanks in advance
Antonio  



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Discrepancy between -chargegrp and -nochargegrp in simulations with CHARMM ff, Why ?

2010-11-26 Thread Justin A. Lemkul



Francesco Oteri wrote:
To see if the problem is force-field related, you could try to run the 
same simulations using amber-ff.

If you find the same results, it is probably a software bug.



Some Amber parameter sets (Amber94, I think) have issues of being overly 
"helix-friendly," but perhaps other force fields, in general, might make for 
good comparisons.  But then, too, Gromos96 over-stabilizes extended 
conformations, so buyer beware...


Maybe the bug was introduced in version 4, when Domain 
Decomposition was introduced.

You can check whether it is a software problem by using version 3.3.3.



I doubt that would work.  There have been many code changes in order to 
implement CHARMM.  If anything, try mdrun -pd to use the old particle 
decomposition mode, but I would be very hesitant to blindly say that DD might be 
causing this behavior.
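
A hedged example of that check (the binary name and -deffnm are whatever the 
original runs used):

mpiexec -np 64 mdrun -pd -deffnm md_chargegrp_test -maxh 24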


-Justin





On 25/11/2010 23:42, sa wrote:

Dear All,

In a previous message 
(http://lists.gromacs.org/pipermail/gmx-users/2010-November/055839.html), 
I described the results obtained with MD performed with the CHARMM27 
ff and the chargegrp "yes" and "no" options for a peptide in TIP3P 
water simulated with gromacs. Since these results puzzled me a lot, I would 
like to share with you other results, obtained by following the gromacs 
community's advice, to explain these results.


In a few words, the context of these simulations. One of my labmates ran, 
8 months ago (March/April), several simulations of a peptide (25 AA) 
with the CHARMM27 ff (and CMAP). The peptide is a transmembrane 
segment (TM) and belongs to a large membrane protein. This TM segment 
has an initial helical conformation. The simulations were performed in 
a cubic box filled with app. 14000 TIP3P waters (Jorgensen's model) 
and 2 Cl ions. The topology file of the system was constructed with 
-chargegrp "yes" in pdb2gmx, and the MD was done with gromacs 
4.0.5. For some reasons, he had to leave the lab, and my boss asked me 
to continue his work. When I checked his results, I was very 
intrigued by them, because he found that the peptide keeps 
its initial helical conformation along the whole simulation time (100 ns). 
These results are not in agreement with circular 
dichroism experiments, which show that the same peptide in water 
has no helix segment and is completely unfolded. I am aware that the 
simulation time is short compared to the experimental time scale; however, 
since I haven't seen any unfolding events in this simulation, I was 
not very confident about these results.


To explain this inconsistency, I suspected that the error probably came 
from the use of the default -chargegrp with the CHARMM ff in these 
simulations, since I have read several recent threads about the charge 
group problems in the CHARMM ff implementation in gromacs. To examine 
this hypothesis I have done two simulations with the latest gromacs version 
(4.5.3) and two top files containing charge groups and no charge 
groups for the peptide residues. I used the *same* initial pdb file, 
box size and simulation parameters. The two simulations were carried 
out for 24 ns in the NPT ensemble with the md.mdp parameters 
described below, after energy minimisation, NVT and NPT equilibration 
steps.


constraints = all-bonds
integrator  = md
nsteps  = 1200   ; 24000ps ou 24ns
dt  = 0.002

nstlist = 10
nstcalcenergy   = 10
nstcomm = 10

continuation= no; Restarting after NPT
vdw-type= cut-off
rvdw= 1.0
rlist   = 0.9
coulombtype  = PME
rcoulomb = 0.9
fourierspacing   = 0.12
fourier_nx   = 0
fourier_ny   = 0
fourier_nz   = 0
pme_order= 4
ewald_rtol   = 1e-05
optimize_fft= yes

nstvout = 5
nstxout = 5
nstenergy   = 2
nstlog  = 5000  ; update log file every 10 ps
nstxtcout   = 1000 ; frequency to write coordinates to xtc 
trajectory every 2 ps


Tcoupl  = nose-hoover
tc-grps = Protein Non-Protein
tau-t   = 0.4 0.4
ref-t   = 298 298
; Pressure coupling is on
Pcoupl  = Parrinello-Rahman
pcoupltype  = isotropic
tau_p   = 3.0
compressibility = 4.5e-5
ref_p   = 1.0135
gen_vel = no

I found that with charge groups, the peptide remains in its initial 
helical conformation, whereas with no charge groups, the peptide 
unfolds quickly and has a random coil conformation. I have shown these 
results to my boss but I was not able to explain why we observe these 
differences between the two simulations. Indeed, since I use PME in the 
MD, charge groups should not affect the dynamics results (correct?). He 
asked me to do other simulations with different versions of gromacs to 
see if it is not a bug in the charge group implementation in gromacs. For 
testing I have done four other MD runs with the *same*

Re: [gmx-users] Free Energy Calculation: dVpot/dlambda is always zero

2010-11-26 Thread Justin A. Lemkul



Anirban Ghosh wrote:


Hi ALL,

I am trying to run a free energy calculation and for that in the md.mdp 
file I am keeping the following options:


; Free energy control stuff
free_energy = yes
init_lambda = 0.0
delta_lambda= 0
sc_alpha=0.5
sc-power=1.0
sc-sigma= 0.3


But still I find that in my log file the value of dVpot/dlambda is 
always coming out to be zero.

What am I doing wrong?
Any suggestion is welcome. Thanks a lot in advance.



You haven't indicated your Gromacs version, but assuming you're using something 
in the 4.x series, you're not specifying the necessary parameters to do any sort 
of transformation, particularly couple_lambda0 and couple_lambda1.  If left at 
their default values (vdw-q), nothing gets decoupled.
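
A minimal sketch of the missing pieces (the molecule name LIG is a placeholder 
for whatever appears in the user's [ moleculetype ]):

free_energy      = yes
init_lambda      = 0.0
couple-moltype   = LIG       ; the molecule to (de)couple
couple-lambda0   = vdw-q     ; fully interacting at lambda = 0
couple-lambda1   = none      ; fully decoupled at lambda = 1
couple-intramol  = no
sc-alpha         = 0.5
sc-power         = 1
sc-sigma         = 0.3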


-Justin



Regards,

Anirban



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Phosphorylated Serine in charmm

2010-11-26 Thread Justin A. Lemkul



Yasmine Chebaro wrote:

Hello all,
I am using charmm ff in Gromacs 4.5.2, everything goes right with 
standard proteins, but now

I want to run a simulation on a protein with a phosphorylated residue.
As mentioned in this post 
http://www.mail-archive.com/gmx-users@gromacs.org/msg35532.html,
I changed the .rtp and .hdb files to add a specific section for the 
phosphorylated amino acid, having checked the parameters with charmm.
I still have the problem in pdb2gmx where it seems like it can't see the 
new definition and gives me the residue topology database error.


The exact error message and what you have added to the .rtp and .hdb files would 
be very helpful (read: necessary) to give any useful advice.


Is there another file where I have to specify the new amino-acids, I 
search all the files in the charmm
directory in gromacs top, but I still can't find another place where 
amino-acids are defined.


You will need to add your residue to residuetypes.dat as well.
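
For example, if the phosphorylated serine is named SEP in the .rtp (the name 
is only an assumption), the residuetypes.dat entry is a single line:

SEP    Protein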

-Justin


Thanks for you help



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Error while using forcefield GROMOS 43a1p

2010-11-26 Thread Justin A. Lemkul



Jignesh Patel wrote:

Dear Justin,

I am trying to do a simulation of a system which contains phosphorylated 
serine using the GROMOS 43a1p forcefield. While running the pdb2gmx command, I 
am getting the following error.

Fatal error:
Atom N not found in residue seq.nr. 1 while adding improper



Well, either the N atom of residue 1 is not present in your .pdb file (in which 
case you've got a broken structure that needs fixing), or something else is 
going on.  Without seeing the contents of your input coordinate file (just the 
first residue, really) and your pdb2gmx command line, there's not much help 
anyone can give you.
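
A quick way to check what the first residue actually contains (the file name 
is a placeholder):

grep '^ATOM' input.pdb | head -n 25    # the backbone N should appear among the residue-1 atoms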


-Justin


thank you in anticipation.

With regards,
Jignesh Patel



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Free Energy Calculation: dVpot/dlambda is always zero

2010-11-26 Thread Anirban Ghosh
Hello Justin,

Thanks a lot for the reply.
Yes, I am using GROMACS 4.5 and my system consists of two chains of two
proteins, a substrate and an inhibitor solvated in water. So can you please
tell me what should be the values for:
couple-moltype
couple-lambda0
couple-intramol
Thanks a lot again.


Regards,

Anirban
