[gmx-users] where can I download POPC membrane file?

2011-05-29 Thread albert
Dear all:

  I would like to use CHARMM36 and POPC for a membrane protein simulation, and I 
am wondering where I can download a CHARMM36 pre-equilibrated POPC PDB and 
topology file for GROMACS?

Thank you very much
Best
-- 
gmx-users mailing list gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re:Re: [gmx-users] where can I download POPC membrane file?

2011-05-29 Thread albert
But I don't think it is a pre-equilibrated POPC membrane. Moreover, the 
positions from VMD are not pre-aligned with the OPM database, which would be a 
great problem for placing our protein in the membrane.







At 2011-05-30,"Sergio Manzetti"  wrote:
You can build it using VMD (Visual Molecular Dynamics)




2011/5/30 albert
Dear all:

  I would like to use CHARMM36 and POPC for a membrane protein simulation, and I 
am wondering where I can download a CHARMM36 pre-equilibrated POPC PDB and 
topology file for GROMACS?

Thank you very much
Best




[gmx-users] is it possible to convert NAMD psf file to gromacs format?

2011-05-29 Thread albert
Hello:
 I am wondering: is it possible to convert a NAMD topology (.psf) file into 
GROMACS topology format?

Thank you very much

Re:Re: [gmx-users] is it possible to convert NAMD psf file to gromacs format?

2011-05-29 Thread albert
Thank you very much for the kind advice. Here are some warnings, and I don't 
know whether they indicate a problem:

; 'fake' gromacs topology generated from topotools.
; WARNING| the purpose of this topology is to allow using the  |WARNING
; WARNING| analysis tools from gromacs for non gromacs data.   |WARNING
; WARNING| it cannot be used for a simulation. |WARNING






At 2011-05-30,"Francesco Oteri"  wrote:

>Il 29/05/2011 21:58, albert ha scritto:
>> Hello:
>> I am wondering, is it possible to convert NAMD topol psf file into 
>> Gromacs topol format?
>>
>> Thank you very much
>
>Hi albert,
>you can try the following commands:
>
>vmd .psf .pdb
>topo writegmxtop output.top
>
>I recently tried this with VMD 1.9
>
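The two quoted commands can also be run non-interactively. A sketch of the batch form, assuming VMD with the TopoTools plugin as used in the thread; the file names system.psf, system.pdb, and convert.tcl are placeholders:

```shell
# Write a small Tcl script that loads the NAMD structure/coordinates and
# emits a GROMACS-style topology (analysis-only, per the warning above).
cat > convert.tcl <<'EOF'
package require topotools
mol new system.psf
mol addfile system.pdb
topo writegmxtop output.top
exit
EOF

# Run it in text mode (requires a VMD installation, e.g. VMD 1.9):
#   vmd -dispdev text -e convert.tcl
cat convert.tcl
```
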

Re:Re: [gmx-users] is it possible to convert NAMD psf file to gromacs format?

2011-05-29 Thread albert
Thank you very much for the kind messages.
I am trying to convert a membrane-system .psf file for a GROMACS MD simulation. 
I would like to use CHARMM36 for my POPC system, but I cannot find a 
pre-equilibrated CHARMM36-based POPC system. However, there are some for NAMD, 
and I downloaded the .pdb and .psf files hoping they could be converted to the 
corresponding GROMACS format.

Do you have any idea about this?

THX




At 2011-05-30,"Francesco Oteri"  wrote:

>The topology file is suitable for analysis. I successfully used the .top to 
>analyse hydrogen bonds and salt bridges.
>I don't know if problems would arise in a simulation.

Re:Re: [gmx-users] is it possible to convert NAMD psf file to gromacs format?

2011-05-29 Thread albert
Well, I also tried this. But it seems that the atom names in my POPC pdb file 
(which I downloaded from 
http://terpconnect.umd.edu/~jbklauda/research/download.html ) are different 
from the ones in the GROMACS topology database. There are 72 lipids in the 
system in all, so it would be very difficult to modify them one by one.

Thank you very much




At 2011-05-30,"Francesco Oteri"  wrote:

>You can solve the problem without converting from NAMD to GROMACS:
>you can use the pdb you've already found to obtain a valid GROMACS 
>topology through pdb2gmx.
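Before attempting pdb2gmx, it can help to list the distinct atom names actually present in the PDB, so the mismatches against the force-field entries can be enumerated rather than guessed. A sketch using a fabricated three-line PDB fragment (illustrative only; real files come from the download link above):

```shell
# Fabricated PDB fragment for illustration only.
cat > popc.pdb <<'EOF'
ATOM      1  N   POPC    1
ATOM      2  C12 POPC    1
ATOM      3  C12 POPC    2
EOF

# In the PDB format, atom names occupy columns 13-16; collect the unique set.
cut -c13-16 popc.pdb | tr -d ' ' | sort -u > names.txt
cat names.txt
```

Each name in the resulting list then needs exactly one entry in an old-name to new-name table before pdb2gmx will accept the structure.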

Re:Re: [gmx-users] is it possible to convert NAMD psf file to gromacs format?

2011-05-29 Thread albert
Thank you very much for your kind reply.
The problem is that there are too many atom names across the 72 all-atom 
lipids, and it is easy to make mistakes.
If I rename atom A to B, it gets mixed with the old atom B that was already 
there before A was renamed. If that old atom B also needs to be renamed to C, 
the command cannot tell whether a given B is newly generated or original; of 
course, the Bs derived from A should not be renamed to C.

If there were only dozens of atom names, it would be fine to modify them 
manually, but with thousands it becomes a big problem.


THX




At 2011-05-30,"Francesco Oteri"  wrote:

>I guess it is tedious but, in my opinion, it is more correct to change the 
>atom names in the pdb and use the GROMACS topology generation tools. That way 
>you are sure the topology will be suitable for a GROMACS simulation.
>
>You can rename atoms using the sed command. In particular:
>
>sed "s/old/new/g" file
>
>replaces each occurrence of "old" with "new". Once you find the 
>correspondence between the GROMACS and pdb atom names, you can solve the 
>problem.
>
>
>Alternatively, you can replace atom names using a text editor.
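One way to avoid the A-to-B-to-C collisions described above is to rename through unique placeholder tokens that cannot match any real atom name, so the order of the rules no longer matters. A minimal sketch; the names C11/C12/C13 are hypothetical stand-ins, not the actual CHARMM36 mapping, and note that real PDB files are column-sensitive, so names of a different length need re-padding:

```shell
# Fake fragment with two names that collide under naive sequential
# renaming (C11 -> C12 and C12 -> C13).
cat > atoms.txt <<'EOF'
ATOM      1  C11 POPC    1
ATOM      2  C12 POPC    1
EOF

# Pass 1: old names -> unique placeholders; pass 2: placeholders -> new
# names. No placeholder matches an atom name, so no B can be confused
# with a renamed A.
sed -e 's/ C11 / @A@ /g' -e 's/ C12 / @B@ /g' atoms.txt \
  | sed -e 's/ @A@ / C12 /g' -e 's/ @B@ / C13 /g' > renamed.txt

cat renamed.txt
```

With a naive single pass, both lines would end up as C13; with the placeholder pass, the original C11 correctly becomes C12 and only the original C12 becomes C13.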

Re:Re: Re: [gmx-users] is it possible to convert NAMD psf file to gromacs format?

2011-05-29 Thread albert
Thank you so much for your kind help. Did you pre-equilibrate it?

At 2011-05-30,"Jianguo Li"  wrote:

Hi Albert,

Here is a gro file of 128 POPC lipids that I constructed earlier using the 
CHARMM36 FF. Please check that it is correct before using it.

Jianguo
 




Re:Re: Re: Re: [gmx-users] is it possible to convert NAMD psf file to gromacs format?

2011-05-30 Thread albert
Thank you so much for your kind help. I will try it.

At 2011-05-30,"Jianguo Li"  wrote:

I equilibrated the system for about 20ns at 300K.
Jianguo





Re:AW: Re: [gmx-users] where can I download POPC membrane file?

2011-05-30 Thread albert
I know this, but this file cannot be used because the atom names are quite 
different from the GROMACS CHARMM36 topology library.



At 2011-05-30,"Rausch, Felix"  wrote:

Check this link, given by another (unknown) mailing list user yesterday (topic 
name: about POPC in Gromacs)!
 
http://terpconnect.umd.edu/~jbklauda/research/download.html


From: gmx-users-boun...@gromacs.org on behalf of albert
Sent: Sun 29.05.2011 21:23
To: Discussion list for GROMACS users
Subject: Re:Re: [gmx-users] where can I download POPC membrane file?



Re:Re: AW: Re: [gmx-users] where can I download POPC membrane file?

2011-06-01 Thread albert
Maybe you think it is a simple script, but for a fresh GROMACS user such as me, 
it would be a very difficult task.

Anyway, thanks a lot for the messages.




At 2011-06-01,"Thomas Piggot"  wrote:

>In addition to all the other responses, I just wanted to clear up why
>there is this difference in names. The POPC structure from the link
>below still has the atom names of the old CHARMM27 lipids (DPPC is
>fine). As suggested, a simple script can do the conversion for you.
>
>Cheers
>
>Tom
>
>albert wrote:
>> I know this, but this file cannot be used because the atom name is quite 
>> different from gromacs CHARMM36 topol library.
>> 
>> 
>> At 2011-05-30,"Rausch, Felix"  wrote:
>> 
>> Check this link given by another (unknown) mailing list user
>> yesterday (Topic name: *about POPC in Gromacs* )!
>>  
>> http://terpconnect.umd.edu/~jbklauda/research/download.html
>> 
>-- 
>Dr Thomas Piggot
>University of Southampton, UK.

[gmx-users] strange problem with performance information

2012-01-05 Thread Albert

Hi:
  I am using the following command to submit GROMACS MD jobs on the cluster:

mpirun -exe /opt/gromacs/4.5.5/bin/mdrun_mpi_bg -args "-nosum -dlb yes 
-v -s nvt.tpr" -mode VN -np 128


Then I use the command "tail -f gromacs.out" to check the performance of my 
jobs, and I get the following output:


vol 0.38! imb F 11% pme/F 0.67 step 11200, will finish Thu Jan  5 
15:32:42 2012
vol 0.36! imb F 11% pme/F 0.66 step 11300, will finish Thu Jan  5 
15:32:42 2012
vol 0.38  imb F 11% pme/F 0.66 step 11400, will finish Thu Jan  5 
15:32:42 2012
vol 0.37! imb F 12% pme/F 0.66 step 11500, will finish Thu Jan  5 
15:32:42 2012


However, the current time is:

Thu Jan  5 15:56:17 CET 2012

As you can see, GROMACS claimed that my job would finish before the 
current time. Does anybody have any idea how to fix this?
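[Editor's note: mdrun's "will finish" line is only an extrapolation made at the moment it is printed: the remaining steps times the recently measured wall time per step, added to the current clock time. If the job later stalls or slows down, the printed estimate goes stale. A minimal sketch of that arithmetic, with hypothetical numbers (the actual nsteps of this run is not shown in the post):]

```python
from datetime import datetime, timedelta

def projected_finish(now, current_step, total_steps, seconds_per_step):
    """Extrapolate a finish time the way a progress line does:
    remaining steps times the recent per-step cost, added to 'now'."""
    remaining = total_steps - current_step
    return now + timedelta(seconds=remaining * seconds_per_step)

# Hypothetical numbers: a 100000-step run, 11500 steps done, 24 ms/step.
now = datetime(2012, 1, 5, 14, 57, 0)
eta = projected_finish(now, 11500, 100000, 0.024)
print(eta)  # 2012-01-05 15:32:24 -- an estimate, valid only if the rate holds
```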


THX


[gmx-users] another question about performance

2012-01-05 Thread Albert

Hello:
  here is my log file for mdrun:
Writing final coordinates.
step 10, remaining runtime: 0 s

 Average load imbalance: 10.8 %
 Part of the total run time spent waiting due to load imbalance: 4.3 %
 Steps where the load balancing was limited by -rdd, -rcon and/or -dds: 
X 0 % Y 19 % Z 0 %

 Average PME mesh/force load: 0.665
 Part of the total run time spent waiting due to PP/PME imbalance: 5.7 %

NOTE: 5.7 % performance was lost because the PME nodes
  had less work to do than the PP nodes.
  You might want to decrease the number of PME nodes
  or decrease the cut-off and the grid spacing.


NOTE: 9 % of the run time was spent communicating energies,
  you might want to use the -gcom option of mdrun


Parallel run - timing based on wallclock.

               NODE (s)   Real (s)      (%)
       Time:   2435.554   2435.554    100.0
                       40:35
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:    409.701     22.103      7.095      3.383

gcq#149: "It's Against the Rules" (Pulp Fiction)



As we can see from the end of this log file, the performance is 
7.1 ns/day and 3.4 ns/hour. I am very confused by this output. How could 
this happen? Are there only two hours or so in each day?
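[Editor's note: the confusion is in the column label. The log's last columns are (ns/day) and (hour/ns), i.e. hours of wall time needed per nanosecond, not nanoseconds per hour; the two figures are reciprocals and perfectly consistent. A quick check of the numbers, assuming the run covered 200 ps of simulation, which matches the 2435.554 s wall time:]

```python
# 7.095 ns simulated per day of wall time <=> 3.383 wall-clock hours per ns.
ns_per_day = 7.095
hours_per_ns = 24.0 / ns_per_day
print(round(hours_per_ns, 3))  # ~3.383, matching the log's (hour/ns) column

# Cross-check against the reported wall time, assuming a 200 ps (0.2 ns) run.
wall_s = 2435.554
ns = 0.2
print(round(ns * 86400 / wall_s, 3))  # ns/day
print(round(wall_s / 3600 / ns, 3))   # hour/ns
```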



THX

[gmx-users] nodes error

2012-01-05 Thread Albert

Hello:
  I found that each time I try to increase the number of nodes for an MD 
run, the job fails. It said:


Will use 192 particle-particle and 64 PME only nodes
This is a guess, check the performance at the end of the log file

---
Program mdrun_mpi_bg, VERSION 4.5.5
Source code file: ../../../src/mdlib/domdec.c, line: 6436

Fatal error:
There is no domain decomposition for 192 nodes that is compatible with 
the given box and a minimum cell size of

 1.02425 nm
Change the number of nodes or mdrun option -rcon or -dds or your LINCS 
settings

Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"Ohne Arbeit wird das Leben Oede" (Wir Sind Helden)


Does anybody have any idea about this? Here is my script for submitting jobs:
# @ job_name = I213A
# @ class = kdm-large
# @ account_no = G07-13
# @ error = gromacs.err
# @ output = gromacs.out
# @ environment = COPY_ALL
# @ wall_clock_limit = 12:00:00
# @ notification = error
# @ notify_user = alb...@icm.edu.pl
# @ job_type = bluegene
# @ bg_size = 64
# @ queue
mpirun -exe /opt/gromacs/4.5.5/bin/mdrun_mpi_bg -args "-nosum -dlb yes 
-v -s npt.tpr" -mode VN -np 256



If I change bg_size to 32 and -np to 128, everything goes well.
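[Editor's note: the error is simple factor arithmetic. With 256 ranks, mdrun reserves 64 for PME and must split the box into 192 particle-particle cells, but each dimension allows at most 7 cells of the 1.02425 nm minimum size (the "maximum allowed number of cells is: X 7 Y 7 Z 7" line appears in the follow-up log in this thread). No triple of factors ≤ 7 multiplies to 192, so no grid exists. A sketch of that check; the 96 PP + 32 PME split for 128 ranks is an illustrative assumption:]

```python
from itertools import product

def dd_grids(pp_ranks, max_cells):
    """All (nx, ny, nz) domain-decomposition grids whose product equals
    pp_ranks and whose every dimension respects the per-axis cell limit."""
    return [(x, y, z)
            for x, y, z in product(range(1, max_cells + 1), repeat=3)
            if x * y * z == pp_ranks]

# 256 ranks -> 192 PP + 64 PME; at most 7 cells per axis.
print(dd_grids(192, 7))   # [] -- no valid grid, hence the fatal error
# 128 ranks -> e.g. 96 PP ranks; 96 = 4 * 4 * 6 fits, so that run works.
print(dd_grids(96, 7)[:3])
```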


THX


[gmx-users] NPT error

2012-01-05 Thread Albert

Hi:

 I am following the tutorial:

http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/07_equil2.html

The NVT step goes well, but the NPT step always fails. It said:


Program mdrun_mpi_bg, VERSION 4.5.5
Source code file: ../../../src/mdlib/domdec.c, line: 2633

Fatal error:
Step 2970: The domain decomposition grid has shifted too much in the 
Y-direction around cell 2 3 1


For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"Ich Bin Ein Berliner" (J.F. Kennedy)

Error on node 81, will try to stop all the nodes
Halting parallel program mdrun_mpi_bg on CPU 81 out of 128

gcq#193: "Ich Bin Ein Berliner" (J.F. Kennedy)

Abort(-1) on node 81 (rank 81 in comm 1140850688): application called 
MPI_Abort(MPI_COMM_WORLD, -1) - process 81

 BE_MPI (ERROR): The error message in the job record is as follows:
 BE_MPI (ERROR):   "killed with signal 6"

here is my NPT.mdp file:

title                = OPLS Lysozyme NPT equilibration
define               = -DPOSRES        ; position restrain the protein
; Run parameters
integrator           = md              ; leap-frog integrator
nsteps               = 100000          ; 2 * 100000 = 200 ps
dt                   = 0.002           ; 2 fs
; Output control
nstxout              = 100             ; save coordinates every 0.2 ps
nstvout              = 100             ; save velocities every 0.2 ps
nstenergy            = 100             ; save energies every 0.2 ps
nstlog               = 100             ; update log file every 0.2 ps
; Bond parameters
continuation         = yes             ; Restarting after NVT
constraint_algorithm = lincs           ; holonomic constraints
constraints          = all-bonds       ; all bonds (even heavy atom-H bonds) constrained
lincs_iter           = 1               ; accuracy of LINCS
lincs_order          = 4               ; also related to accuracy
; Neighborsearching
ns_type              = grid            ; search neighboring grid cells
nstlist              = 5               ; 10 fs
rlist                = 1.0             ; short-range neighborlist cutoff (in nm)
rcoulomb             = 1.0             ; short-range electrostatic cutoff (in nm)
rvdw                 = 1.0             ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype          = PME             ; Particle Mesh Ewald for long-range electrostatics
pme_order            = 4               ; cubic interpolation
fourierspacing       = 0.16            ; grid spacing for FFT
; Temperature coupling is on
tcoupl               = V-rescale       ; modified Berendsen thermostat
tc-grps              = Protein Non-Protein   ; two coupling groups - more accurate
tau_t                = 0.1 0.1         ; time constant, in ps
ref_t                = 300 300         ; reference temperature, one for each group, in K
; Pressure coupling is on
pcoupl               = Parrinello-Rahman     ; Pressure coupling on in NPT
pcoupltype           = isotropic       ; uniform scaling of box vectors
tau_p                = 2.0             ; time constant, in ps
ref_p                = 1.0             ; reference pressure, in bar
compressibility      = 4.5e-5          ; isothermal compressibility of water, bar^-1
; Periodic boundary conditions
pbc                  = xyz             ; 3-D PBC
; Dispersion correction
DispCorr             = EnerPres        ; account for cut-off vdW scheme
; Velocity generation
gen_vel              = no              ; Velocity generation is off
; warning
refcoord_scaling     = all



[gmx-users] Re: nodes error

2012-01-05 Thread Albert

Thank you very much for the kind reply.

I changed my command as follows:

mpirun -exe /opt/gromacs/4.5.5/bin/mdrun_mpi_bg -args "-nosum -dlb yes 
-v -s npt.tpr -nt 1" -mode VN -np 256


The "-nt 1" option has been added above, but it still doesn't work. Here 
is the log file:





Initializing Domain Decomposition on 256 nodes
Dynamic load balancing: yes
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
two-body bonded interactions: 0.435 nm, LJ-14, atoms 1853 1861
  multi-body bonded interactions: 0.435 nm, Proper Dih., atoms 1853 1861
Minimum cell size due to bonded interactions: 0.478 nm
Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.819 nm
Estimated maximum distance required for P-LINCS: 0.819 nm
This distance will limit the DD cell size, you can override this with -rcon
Guess for relative PME load: 0.21
Will use 192 particle-particle and 64 PME only nodes
This is a guess, check the performance at the end of the log file
Using 64 separate PME nodes
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 192 cells with a minimum initial size of 1.024 nm
The maximum allowed number of cells is: X 7 Y 7 Z 7

---
Program mdrun_mpi_bg, VERSION 4.5.5
Source code file: ../../../src/mdlib/domdec.c, line: 6436

Fatal error:
There is no domain decomposition for 192 nodes that is compatible with 
the given box and a minimum cell size of

 1.02425 nm
Change the number of nodes or mdrun option -rcon or -dds or your LINCS 
settings

Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"It's So Fast It's Slow" (F. Black)



[gmx-users] Re: Re: nodes error

2012-01-06 Thread Albert


Is there any solution to fix this?

THX


On 01/06/2012 09:52 AM, gmx-users-requ...@gromacs.org wrote:

"The minimum cell size is controlled by the size of the largest charge
group or bonded interaction and the largest of rvdw, rlist and rcoulomb,
some other effects of bond constraints, and a safety margin. *Thus it is
not possible to run a small simulation with large numbers of processors.*"

Based on the information you provided, the only thing I can say is, MAYBE
your system is "too small" to run with 256 processors.

Cheers

Terry




[gmx-users] NPT error

2012-01-06 Thread Albert

Hi:

  I am following the tutorial:

http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/07_equil2.html

The NVT step goes well, but the NPT step always fails. It said:


Program mdrun_mpi_bg, VERSION 4.5.5
Source code file: ../../../src/mdlib/domdec.c, line: 2633

Fatal error:
Step 2970: The domain decomposition grid has shifted too much in the
Y-direction around cell 2 3 1

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"Ich Bin Ein Berliner" (J.F. Kennedy)

Error on node 81, will try to stop all the nodes
Halting parallel program mdrun_mpi_bg on CPU 81 out of 128

gcq#193: "Ich Bin Ein Berliner" (J.F. Kennedy)

Abort(-1) on node 81 (rank 81 in comm 1140850688): application called
MPI_Abort(MPI_COMM_WORLD, -1) - process 81
  BE_MPI (ERROR): The error message in the job record is as follows:
  BE_MPI (ERROR):   "killed with signal 6"

here is my NPT.mdp file:

title                = OPLS Lysozyme NPT equilibration
define               = -DPOSRES        ; position restrain the protein
; Run parameters
integrator           = md              ; leap-frog integrator
nsteps               = 100000          ; 2 * 100000 = 200 ps
dt                   = 0.002           ; 2 fs
; Output control
nstxout              = 100             ; save coordinates every 0.2 ps
nstvout              = 100             ; save velocities every 0.2 ps
nstenergy            = 100             ; save energies every 0.2 ps
nstlog               = 100             ; update log file every 0.2 ps
; Bond parameters
continuation         = yes             ; Restarting after NVT
constraint_algorithm = lincs           ; holonomic constraints
constraints          = all-bonds       ; all bonds (even heavy atom-H bonds) constrained
lincs_iter           = 1               ; accuracy of LINCS
lincs_order          = 4               ; also related to accuracy
; Neighborsearching
ns_type              = grid            ; search neighboring grid cells
nstlist              = 5               ; 10 fs
rlist                = 1.0             ; short-range neighborlist cutoff (in nm)
rcoulomb             = 1.0             ; short-range electrostatic cutoff (in nm)
rvdw                 = 1.0             ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype          = PME             ; Particle Mesh Ewald for long-range electrostatics
pme_order            = 4               ; cubic interpolation
fourierspacing       = 0.16            ; grid spacing for FFT
; Temperature coupling is on
tcoupl               = V-rescale       ; modified Berendsen thermostat
tc-grps              = Protein Non-Protein   ; two coupling groups - more accurate
tau_t                = 0.1 0.1         ; time constant, in ps
ref_t                = 300 300         ; reference temperature, one for each group, in K
; Pressure coupling is on
pcoupl               = Parrinello-Rahman     ; Pressure coupling on in NPT
pcoupltype           = isotropic       ; uniform scaling of box vectors
tau_p                = 2.0             ; time constant, in ps
ref_p                = 1.0             ; reference pressure, in bar
compressibility      = 4.5e-5          ; isothermal compressibility of water, bar^-1
; Periodic boundary conditions
pbc                  = xyz             ; 3-D PBC
; Dispersion correction
DispCorr             = EnerPres        ; account for cut-off vdW scheme
; Velocity generation
gen_vel              = no              ; Velocity generation is off
; warning
refcoord_scaling     = all


The last option in the npt.mdp file, "refcoord_scaling = all", was added by 
myself; otherwise there are some warnings.

Does anybody have any advice?

THX



[gmx-users] Nodes problem?

2012-01-06 Thread Albert

Hello:

  I am submitting GROMACS jobs on a cluster, and the job ALWAYS terminates 
with the following messages:



vol 0.75  imb F  5% pme/F 0.52 step 4200, will finish Sat Jan  7 
09:36:14 2012
vol 0.77  imb F  6% pme/F 0.52 step 4300, will finish Sat Jan  7 
09:36:28 2012


step 4389: Water molecule starting at atom 42466 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.
Wrote pdb files with previous and current coordinates

step 4390: Water molecule starting at atom 42466 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.
Wrote pdb files with previous and current coordinates

step 4391: Water molecule starting at atom 41659 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.

step 4391: Water molecule starting at atom 42385 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates

step 4392: Water molecule starting at atom 32218 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.
Wrote pdb files with previous and current coordinates

step 4393: Water molecule starting at atom 41659 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.

step 4393: Water molecule starting at atom 32218 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates

---
Program mdrun_mpi_bg, VERSION 4.5.5
Source code file: ../../../src/mdlib/pme.c, line: 538

Fatal error:
3 particles communicated to PME node 4 are more than 2/3 times the 
cut-off out of the domain decomposition cell of their charge group in 
dimension x.
This usually means that your system is not well equilibrated.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"How Do You Like Your Vacation So Far ?" (Speed 2 - Cruise Control)

Error on node 19, will try to stop all the nodes
Halting parallel program mdrun_mpi_bg on CPU 19 out of 24

gcq#191: "How Do You Like Your Vacation So Far ?" (Speed 2 - Cruise Control)

Abort(-1) on node 19 (rank 19 in comm 1140850688): application called 
MPI_Abort(MPI_COMM_WORLD, -1) - process 19

 BE_MPI (ERROR): The error message in the job record is as follows:
 BE_MPI (ERROR):   "killed with signal 6"




--- here is my script for submitting jobs
# @ job_name = I213A
# @ class = kdm-large
# @ account_no = G07-13
# @ error = gromacs.out
# @ output = gromacs.out
# @ environment = COPY_ALL
# @ wall_clock_limit = 12:00:00
# @ notification = error
# @ notify_user = alb...@icm.edu.pl
# @ job_type = bluegene
# @ bg_size = 6
# @ queue
mpirun -exe /opt/gromacs/4.5.5/bin/mdrun_mpi_bg -args "-nosum -dlb yes 
-v -s npt.tpr" -mode VN -np 24



--- here is my npt.mdp file
title                = OPLS Lysozyme NPT equilibration
define               = -DPOSRES        ; position restrain the protein
; Run parameters
integrator           = md              ; leap-frog integrator
nsteps               = 200000          ; 1 * 200000 = 200 ps
dt                   = 0.001           ; 1 fs
; Output control
nstxout              = 100             ; save coordinates every 0.2 ps
nstvout              = 100             ; save velocities every 0.2 ps
nstenergy            = 100             ; save energies every 0.2 ps
nstlog               = 100             ; update log file every 0.2 ps
; Bond parameters
continuation         = yes             ; Restarting after NVT
constraint_algorithm = lincs           ; holonomic constraints
constraints          = all-bonds       ; all bonds (even heavy atom-H bonds) constrained
lincs_iter           = 1               ; accuracy of LINCS
lincs_order          = 4               ; also related to accuracy
; Neighborsearching
ns_type              = grid            ; search neighboring grid cells
nstlist              = 5               ; 10 fs
rlist                = 1.0             ; short-range neighborlist cutoff (in nm)
rcoulomb             = 1.0             ; short-range electrostatic cutoff (in nm)
rvdw                 = 1.0             ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype          = PME             ; Particle Mesh Ewald for long-range electrostatics
pme_order            = 4               ; cubic interpolation
fourierspacing       = 0.16            ; grid spacing for FFT
; Temperature coupling is on
tcoupl               = V-rescale       ; modified Berendsen thermostat
tc-grps              = Protein Non-Protein   ; two coupling groups - more accurate
tau_t                = 0.1 0.1         ; time constant, in ps
ref_t                = 300 300         ; reference temperature, one for each group, in K
; Pressure coupling is on
pcoupl               = Parrinello-Rahman     ; Pressure coupling on in NPT
pcoupltype           = isotropic       ; uniform scaling of box vectors
tau_p                = 2.0             ; time constant, in ps
ref_p 

[gmx-users] a question about membrane simulation

2012-02-06 Thread Albert
Dear all:

   I am reading a membrane protein simulation paper, performed with GROMACS
and published in PLoS Computational Biology (
http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1001053
),
titled 'Predicting Novel Binding Modes of Agonists to β Adrenergic
Receptors Using All-Atom Molecular Dynamics Simulations', PLoS Comput Biol
7(1): e1001053. doi:10.1371/journal.pcbi.1001053. The authors performed an
800 ns large-scale MD simulation in GROMACS and observed conformational
changes of some residues during the long simulation.

   I found a very strange phenomenon in this paper, i.e. in supplementary
figure S4 (
http://www.ploscompbiol.org/article/fetchSingleRepresentation.action?uri=info:doi/10.1371/journal.pcbi.1001053.s007)
the authors show the chi-angle change of a Phe residue. As we can see from
the plot, the conformation of this residue at 40-50 ns is the same as that
at 700-800 ns, which was concluded to be an active conformation. So I am
wondering: does such a phenomenon frequently happen in general membrane
protein simulations? Why is this active conformation absent during the
60-700 ns period and present again in the 700-800 ns window? Can we also
expect that at around the 50 ns time scale an agonist-bound GPCR would show
this kind of side-chain switch, and that it may be lost at the 100 ns time
scale?

  I would very much appreciate it if someone could give me some comments on
this issue.

Thank you very much

best wishes
Albert


[gmx-users] dssp error

2012-03-27 Thread Albert
Hello:
  I am trying to run do_dssp with the command:

do_dssp -s md.tpr -f md.trr -b 400 -e 500 -o fws_ss.xpm

but it said:

Select a group: 1
Selected 1: 'Protein'
There are 35 residues in your selected group
trn version: GMX_trn_file (single precision)
Reading frame 400 time  400.000
Back Off! I just backed up ddbXn2WY to ./#ddbXn2WY.1#

---
Program do_dssp, VERSION 4.5.5
Source code file: do_dssp.c, line: 572

Fatal error:
Failed to execute command: /usr/local/bin/dssp -na ddbXn2WY ddeWknkk >
/dev/null 2> /dev/null
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"It's just the way this stuff is done" (Built to Spill)
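[Editor's note: the fatal error only reports that the shell command returned non-zero. A rough sketch of the call as shown in the error above, useful for reproducing it by hand; the temporary file names in the error are generated per run, so `in.pdb`/`out.dssp` below are placeholders, and overriding the location via the DSSP environment variable is, to my knowledge, how do_dssp locates the binary:]

```python
import os
import shutil

# do_dssp runs roughly this command; a non-zero exit (missing binary, no
# execute permission, or a newer dssp release that rejects the old -na
# flag) produces the "Failed to execute command" fatal error.
dssp_bin = os.environ.get("DSSP", "/usr/local/bin/dssp")
cmd = "%s -na in.pdb out.dssp > /dev/null 2> /dev/null" % dssp_bin
print(cmd)

# First thing to check: does the binary exist and is it executable at all?
print("dssp executable found:", shutil.which(dssp_bin) is not None)
```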

[gmx-users] 200 CPU, 3ns/day for 80,000 atoms !!!!

2012-03-28 Thread Albert
Dear:

  I am using GROMACS for a membrane simulation (with the CHARMM36 FF) that
contains around 80,000 atoms. I've submitted the system to over 200 CPUs on
the cluster with a 2 fs time step, and what really astonishes me is that the
performance is only 3 ns/day. I am wondering what has happened to my system
or to GROMACS. What can I do to speed up the simulation?

here is my md.mdp:
*
title                = god!
cpp                  = /usr/bin/cpp
include              =
define               =
integrator           = md
dt                   = 0.001
nsteps               = 1
nstxout              = 100
nstvout              = 100
nstlog               = 100
nstenergy            = 1
nstxtcout            = 10
xtc_grps             =
energygrps           = Protein POPC SOL ION
nstcalcenergy        = 1
nstlist              = 1
nstcomm              = 1
comm_mode            = Linear
comm-grps            = Protein_POPC Water_and_ions
ns_type              = grid
rlist                = 1.2
rlistlong            = 1.4
vdwtype              = Switch
rvdw                 = 1.2
rvdw_switch          = 0.8
coulombtype          = pme
rcoulomb             = 1.2
rcoulomb_switch      = 0.0
fourierspacing       = 0.15
pme_order            = 4
DispCorr             = no
tcoupl               = nose-hoover
nhchainlength        = 1
tc-grps              = Protein_POPC Water_and_ions
tau_t                = 0.5 0.5
ref_t                = 310 310
Pcoupl               = parrinello-rahman
Pcoupltype           = semiisotropic
tau_p                = 5.0
compressibility      = 4.5e-5 4.5e-5
ref_p                = 1.0 1.0
pbc                  = xyz
gen_vel              = no
optimize_fft         = no
constraints          = hbonds
constraint_algorithm = Lincs
*
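[Editor's note: some back-of-the-envelope arithmetic on the figures above (a sketch only; the 80,000 atoms and 200 cores come from the post, and dt = 0.001 ps is taken from the mdp file even though the text says 2 fs). The point is that the per-core workload is already thin, and the mdp settings shown above (nstlist = 1, nstcalcenergy = 1, frequent output) force expensive work every single step:]

```python
# Per-core workload: with few atoms per core, communication overhead tends
# to dominate and adding more cores stops helping.
atoms, cores = 80_000, 200
atoms_per_core = atoms / cores
print(atoms_per_core)  # 400.0 atoms/core -- a fairly thin workload

# With dt = 0.001 ps, the reported 3 ns/day implies this many MD steps/day,
# every one of which does neighbor search and energy evaluation when
# nstlist = 1 and nstcalcenergy = 1.
steps_per_day = 3.0 * 1000 / 0.001  # 3 ns = 3000 ps at 0.001 ps/step
print(int(steps_per_day))           # 3,000,000 steps per day
```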


Thank you very much

best

[gmx-users] how to optimize hydrogen bonds before simulation?

2012-03-29 Thread Albert

Hello:

  I am wondering whether it is possible to optimize the hydrogen-bond 
network before the simulation. I've got some crystal solvent in the system 
and I would like to optimize the H-bond network even before building the 
solvated system.


THX

[gmx-users] Re: do_dssp error

2012-03-30 Thread Albert

Hello:
  there is some problem with my do_dssp; it always claims:

Program do_dssp_d, VERSION 4.5.5
Source code file: do_dssp.c, line: 572
Fatal error:
Failed to execute command: /usr/local/bin/dssp -na ddg1g7Id ddRwthIi > 
/dev/null 2> /dev/null



Someone suggested compiling GROMACS with double precision, and I 
recompiled it with --enable-double, but it still doesn't work.

does anybody else have any suggestions?

THX


[gmx-users] Re: gmx-users Digest, Vol 95, Issue 208

2012-03-30 Thread Albert

I've tried several different DSSP versions, but the problem is the same.


On 03/30/2012 02:54 PM, gmx-users-requ...@gromacs.org wrote:

Send gmx-users mailing list submissions to
gmx-users@gromacs.org

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.gromacs.org/mailman/listinfo/gmx-users
or, via email, send a message with subject or body 'help' to
gmx-users-requ...@gromacs.org

You can reach the person managing the list at
gmx-users-ow...@gromacs.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gmx-users digest..."


Today's Topics:

1. Re: Re: do_dssp error (Erik Marklund)
2. Re: Not able to continue with Equilibration (Justin A. Lemkul)
3. Re: HB lifetime (Nidhi Katyal)
4. Re: HB lifetime (Justin A. Lemkul)


--

Message: 1
Date: Fri, 30 Mar 2012 13:28:05 +0200
From: Erik Marklund
Subject: Re: [gmx-users] Re: do_dssp error
To: Discussion list for GROMACS users
Message-ID:<129a1e28-abbd-4733-ba85-c1f6b2f4b...@xray.bmc.uu.se>
Content-Type: text/plain; charset="us-ascii"

And I replied "What's your dssp version? The most recent ones have different flags 
that are not yet supported by gromacs."

Erik

On 30 Mar 2012, at 13:23, Albert wrote:


Hello:
  there is some problem for my do_dssp, it always claimed:

Program do_dssp_d, VERSION 4.5.5
Source code file: do_dssp.c, line: 572
Fatal error:
Failed to execute command: /usr/local/bin/dssp -na ddg1g7Id ddRwthIi>  /dev/null 
2>  /dev/null


someone suggest to compile gromacs with double precision and I recompiled it 
with --enable-double
but it still doesn't work.

does anybody else have any suggestions?

THX

---
Erik Marklund, PhD
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 6688    fax: +46 18 511 755
er...@xray.bmc.uu.se
http://www2.icm.uu.se/molbio/elflab/index.html


--

Message: 2
Date: Fri, 30 Mar 2012 08:26:05 -0400
From: "Justin A. Lemkul"
Subject: Re: [gmx-users] Not able to continue with Equilibration
To: Discussion list for GROMACS users
Message-ID:<4f75a65d.1040...@vt.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed



francesca vitalini wrote:

Dear Mark,
Thank you for your answer. I'm trying now with the position restraints
to see what happens.
However, another question came to my mind in the meantime. I'm
using GROMACS 3.3.1 (a version with mapping for reverse transformation;
I have posted about it before), and for mdrun the flag -coarse
is required. From the mdrun -h help, the -coarse flag appears to take a
generic trajectory, so I assumed it was needed for the names of the
atoms or something similar. However, specifying the original
coarse-grained file for this flag when running nvt.mdp resulted in
the simulation not dying at the very first steps.
  I'm not sure if it will work out eventually (the 20 ps simulation
is supposed to finish in 12 hours, which is still kind of worrying
despite my system being pretty big), but it definitely told me that the
-coarse flag might be of fundamental importance.
Unfortunately I couldn't find any more detailed documentation about it.
Could anyone explain to me what it does, or point me to where to find
the related documentation?

Where did you obtain this version of Gromacs?  You're not using an official
version, so you're not likely to find its documentation in the usual places and
it's very hard for this community to help you using a modified version of
antiquated software.  You're more likely to have luck contacting whoever created
these modifications for help, since potential problems with code stability and
performance are likely best addressed by those who made the modifications.
Perhaps they are members of this list, but contacting them directly is probably
a better approach.

-Justin



--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] Re: gmx-users Digest, Vol 95, Issue 208

2012-03-30 Thread Albert



I've tried 2, 2.1.3, and 2.1.4; the problem is still there. I don't
think running do_dssp is so difficult. It is just one command:


do_dssp -s md.tpr -f md.trr -b 100 -e 200 -o ss.xpm
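Erik's point about dssp versions is the likely culprit here: GROMACS 4.5 locates the dssp binary through the DSSP environment variable (falling back to /usr/local/bin/dssp) and calls it with flags that only the classic, pre-2.x dssp understands. A minimal sketch, assuming a classic dssp build sits at the path shown:

```shell
# Point do_dssp at a classic (pre-2.x) dssp build; the path is an assumption.
export DSSP=/usr/local/bin/dssp

# Guarded so the sketch is harmless on machines without GROMACS installed:
if command -v do_dssp >/dev/null 2>&1; then
    do_dssp -s md.tpr -f md.trr -b 100 -e 200 -o ss.xpm
else
    echo "do_dssp not found in PATH; skipping"
fi
```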


On 03/30/2012 03:05 PM, Erik Marklund wrote:

And what versions were those?

30 mar 2012 kl. 15.06 skrev Albert:

I've tried different DSSP versions, but the problem is the same



On 03/30/2012 02:54 PM, gmx-users-requ...@gromacs.org wrote:


Today's Topics:

   1. Re: Re: do_dssp error (Erik Marklund)
   2. Re: Not able to continue with Equilibration (Justin A. Lemkul)
   3. Re: HB lifetime (Nidhi Katyal)
   4. Re: HB lifetime (Justin A. Lemkul)


--

Message: 1
Date: Fri, 30 Mar 2012 13:28:05 +0200
From: Erik Marklund <er...@xray.bmc.uu.se>
Subject: Re: [gmx-users] Re: do_dssp error
To: Discussion list for GROMACS users <gmx-users@gromacs.org>
Message-ID: <129a1e28-abbd-4733-ba85-c1f6b2f4b...@xray.bmc.uu.se>

Content-Type: text/plain; charset="us-ascii"

And I replied "What's your dssp version? The most recent ones have 
different flags that are not yet supported by gromacs."


Erik

30 mar 2012 kl. 13.23 skrev Albert:


Hello:
 there is some problem with my do_dssp; it always claims:

Program do_dssp_d, VERSION 4.5.5
Source code file: do_dssp.c, line: 572
Fatal error:
Failed to execute command: /usr/local/bin/dssp -na ddg1g7Id ddRwthIi > /dev/null 2> /dev/null



someone suggested compiling gromacs with double precision, and I
recompiled it with --enable-double,

but it still doesn't work.

does anybody else have any suggestions?

THX


[gmx-users] large scale simulation?

2012-03-30 Thread Albert

Hello:

  I am wondering whether anybody has experience with GROMACS for
large-scale simulation? I've heard a lot of people say that it would
be difficult for Gromacs. E.g., I've got a 60,000-atom system; is it
possible for GROMACS to produce 100 ns/day or even more, supposing I
can use as many CPUs as I want? In my recent experience with such a
system, Gromacs can only produce up to 20 ns/day. If I would like to
produce 1000 ns, I have to wait for 50 days.
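The arithmetic in the post works out as stated; as a trivial sketch (the helper name is illustrative, not a GROMACS tool):

```shell
# Hypothetical helper: wall-clock days needed for a target trajectory
# length at an observed mdrun throughput.
days_needed() {
    target_ns=$1     # desired trajectory length, ns
    ns_per_day=$2    # observed throughput, ns/day
    echo $(( target_ns / ns_per_day ))
}

days_needed 1000 20    # 1000 ns at 20 ns/day -> 50 days
days_needed 1000 100   # at 100 ns/day the wait drops to 10 days
```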


thank you very much

best
A.

[gmx-users] Re: large scale simulation?

2012-03-30 Thread Albert

Hi guys:

  thank you very much for your kind comments. Probably the most
effective way is to optimize the PME balance, as Mark mentioned. It
seems that Mark's method improved the speed considerably.
  If possible, could Mark share his experience of how he optimized
the PME balance in Gromacs? Probably each of us could learn a lot from it.


thank you very much
best
Albert

[gmx-users] g_tune_pme error in blue gene

2012-03-31 Thread Albert

Hello:

  I am trying to run g_tune_pme on Blue Gene with the following script:

# @ job_name = bm
# @ class = kdm-large
# @ account_no = G07-13
# @ error = gromacs.info
# @ output = gromacs.out
# @ environment = COPY_ALL
# @ wall_clock_limit = 160:00:00
# @ notification = error
# @ job_type = bluegene
# @ bg_size = 64
# @ queue
mpirun -exe /opt/gromacs/4.5.5/bin/g_tune_pme -args "-v -s md.tpr -o 
bm.trr -cpo bm.cpt -g bm.log -launch" -mode VN -np 256


but I get the following messages as soon as I submit the job, and it 
terminates soon after:


---gromacs.info--
 BE_MPI (ERROR): Job execution failed
 BE_MPI (ERROR): Job 10969 is in state ERROR ('E')
 FE_MPI (ERROR): Job execution failed (error 
code - 50)
 FE_MPI (ERROR):  - Job execution failed - job 
switched to an error state
 BE_MPI (ERROR): The error message in the job 
record is as follows:
 BE_MPI (ERROR):   "Load failed on 
192.168.101.49: Executable file is not a 32-bit ELF file"

 FE_MPI (ERROR): Failure list:
 FE_MPI (ERROR):   - 1. Job execution failed - 
job switched to an error state (failure #50)
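The "not a 32-bit ELF file" message suggests (my reading, not stated in the thread) that a front-end binary was shipped to the 32-bit compute nodes: g_tune_pme is a serial front-end tool and should not itself be launched with mpirun. A sketch of the alternative, using the MPIRUN and MDRUN environment variables that g_tune_pme in GROMACS 4.5 reads (verify with g_tune_pme -h on your install; the paths are assumptions):

```shell
# Run g_tune_pme on the login node and tell it which mpirun and which
# parallel mdrun to launch on the compute nodes.
export MPIRUN=/usr/bin/mpirun                      # assumed front-end mpirun
export MDRUN=/opt/gromacs/4.5.5/bin/mdrun_mpi_bg   # parallel mdrun binary

# Guarded so the sketch is harmless where GROMACS is not installed:
if command -v g_tune_pme >/dev/null 2>&1; then
    g_tune_pme -np 256 -s md.tpr -o bm.trr -cpo bm.cpt -g bm.log -launch
else
    echo "g_tune_pme not found; run this on the cluster front end"
fi
```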




[gmx-users] another g_tune_pme problem

2012-04-01 Thread Albert

Hello:
  I am trying to test g_tune_pme on a workstation with the command:

g_tune_pme_d -v -s md.tpr -o bm.trr -cpi md.cpt -cpo bm.cpt -g bm.log 
-launch -nt 16 &


but it stopped immediately with the following log. I compiled gromacs 
with a _d suffix on each module, such as mdrun_d, and I aliased mdrun_d 
to mdrun in the shell. However, g_tune_pme still claims that it cannot 
execute mdrun.


thank you very much


--log--
Back Off! I just backed up perf.out to ./#perf.out.5#
Will test 3 tpr files.
Will try runs with 4 - 8 PME-only nodes.
  Note that the automatic number of PME-only nodes and no separate PME 
nodes are always tested.


Back Off! I just backed up benchtest.log to ./#benchtest.log.5#

---
Program g_tune_pme_d, VERSION 4.5.5
Source code file: gmx_tune_pme.c, line: 631

Fatal error:
Cannot execute mdrun. Please check benchtest.log for problems!
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"Once Again Let Me Do This" (Urban Dance Squad)


[gmx-users] g_cluster for concoord output?

2012-04-02 Thread Albert

Dear:

I've generated a disoc.pdb file with concoord; does anyone have 
any idea how to analyze it with Gromacs g_cluster? When I read the 
manual of g_cluster, it requires


  -f   traj.xtc  Input, Opt.  Trajectory: xtc trr trj gro g96 pdb cpt
  -s  topol.tpr  Input, Opt.  Structure+mass(db): tpr tpb tpa gro 
g96 pdb

  -n  index.ndx  Input, Opt.  Index file
 -dm   rmsd.xpm  Input, Opt.  X PixMap compatible matrix file


but the concoord output is a single pdb file.

thank you very much

best wishes


[gmx-users] jobs failed

2012-04-05 Thread Albert

Hello:
  I am using the following script to run Gromacs on a cluster, but it failed:

# @ job_name = bm
# @ class = kdm-large
# @ error = gromacs.info
# @ output = gromacs.out
# @ environment = COPY_ALL
# @ wall_clock_limit = 10:00:00
# @ notification = error
# @ job_type = bluegene
# @ bg_size = 64
# @ queue
mpirun -exe /opt/gromacs/4.5.5/bin/mdrun_mpi_bg -args "-v -s md.tpr -o 
md.trr -cpo md.cpt -c md.gro -g md-out.log -launch" -mode VN -np 256



and here is the log file


Back Off! I just backed up md-out.log to ./#md-out.log.1#
Getting Loaded...
Reading file md.tpr, VERSION 4.5.5 (single precision)
Loaded with Money


Will use 192 particle-particle and 64 PME only nodes
This is a guess, check the performance at the end of the log file
Making 3D domain decomposition 8 x 4 x 6

Back Off! I just backed up md.trr to ./#md.trr.2#

Back Off! I just backed up traj.xtc to ./#traj.xtc.3#

Back Off! I just backed up ener.edr to ./#ener.edr.3#

WARNING: This run will generate roughly 3302 Mb of data

starting mdrun 'BmEH-complex-POA in water'
5000 steps, 10.0 ps.
step 0

NOTE: Turning on dynamic load balancing

vol 0.41  imb F 18% pme/F 0.61 step 100, will finish Tue Apr 17 13:49:51 
2012
vol 0.42  imb F 12% pme/F 0.60 step 200, will finish Sun Apr 15 23:46:30 
2012
vol 0.44  imb F 12% pme/F 0.57 step 300, will finish Sun Apr 15 12:20:49 
2012
vol 0.45  imb F 12% pme/F 0.58 step 400, will finish Sun Apr 15 07:01:25 
2012
vol 0.48  imb F 12% pme/F 0.57 step 500, will finish Sun Apr 15 03:46:13 
2012
vol 0.49! imb F 11% pme/F 0.57 step 600, will finish Sun Apr 15 01:43:05 
2012
vol 0.46! imb F 10% pme/F 0.59 step 700, will finish Sun Apr 15 00:01:14 
2012
vol 0.42! imb F 10% pme/F 0.58 step 800, will finish Sat Apr 14 22:56:06 
2012
vol 0.45! imb F 12% pme/F 0.56 step 900, will finish Sat Apr 14 22:16:49 
2012
vol 0.46! imb F 10% pme/F 0.57 step 1000, will finish Sat Apr 14 
21:49:10 2012
vol 0.46! imb F  9% pme/F 0.58 step 1100, will finish Sat Apr 14 
21:26:04 2012
vol 0.47! imb F 10% pme/F 0.57 step 1200, will finish Sat Apr 14 
21:02:35 2012
vol 0.45  imb F  9% pme/F 0.58 step 1300, will finish Sat Apr 14 
20:34:22 2012
vol 0.45! imb F  9% pme/F 0.58 step 1400, will finish Sat Apr 14 
20:15:54 2012
vol 0.48! imb F 11% pme/F 0.57 step 1500, will finish Sat Apr 14 
20:07:48 2012
vol 0.47! imb F 10% pme/F 0.58 step 1600, will finish Sat Apr 14 
19:57:46 2012
vol 0.47! imb F 13% pme/F 0.58 step 1700, will finish Sat Apr 14 
19:51:47 2012
vol 0.45! imb F 11% pme/F 0.58 step 1800, will finish Sat Apr 14 
19:44:37 2012
vol 0.46! imb F 13% pme/F 0.57 step 1900, will finish Sat Apr 14 
19:37:10 2012
vol 0.50! imb F 12% pme/F 0.58 step 2000, will finish Sat Apr 14 
19:29:20 2012
vol 0.50! imb F 12% pme/F 0.58 step 2100, will finish Sat Apr 14 
19:23:00 2012
vol 0.48  imb F 10% pme/F 0.57 step 2200, will finish Sat Apr 14 
19:15:43 2012
vol 0.50! imb F 11% pme/F 0.57 step 2300, will finish Sat Apr 14 
19:13:30 2012
vol 0.49! imb F 11% pme/F 0.57 step 2400, will finish Sat Apr 14 
19:10:14 2012
vol 0.48  imb F 10% pme/F 0.58 step 2500, will finish Sat Apr 14 
19:01:51 2012
vol 0.47! imb F 12% pme/F 0.58 step 2600, will finish Sat Apr 14 
18:55:11 2012
vol 0.48! imb F 11% pme/F 0.58 step 2700, will finish Sat Apr 14 
18:49:47 2012
vol 0.46! imb F 12% pme/F 0.58 step 2800, will finish Sat Apr 14 
18:45:32 2012


---
Program mdrun_mpi_bg, VERSION 4.5.5
Source code file: ../../../src/mdlib/domdec.c, line: 2633

Fatal error:
Step 2850: The domain decomposition grid has shifted too much in the 
Z-direction around cell 5 0 2


For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"Don't Push Me, Cause I'm Close to the Edge" (Tricky)

Error on node 162, will try to stop all the nodes
Halting parallel program mdrun_mpi_bg on CPU 162 out of 256

gcq#8: "Don't Push Me, Cause I'm Close to the Edge" (Tricky)

Abort(-1) on node 162 (rank 162 in comm 1140850688): application called 
MPI_Abort(MPI_COMM_WORLD, -1) - process 162
 BE_MPI (ERROR): The error message in the job 
record is as follows:

 BE_MPI (ERROR):   "killed with signal 6"



[gmx-users] Re: job failed

2012-04-05 Thread Albert

Hello:
  thank you very much for your kind reply.
  I ran NVT equilibration before the NPT MD production.

thank you very much

Re: [gmx-users] Re: job failed

2012-04-05 Thread Albert

hello:

  thank you very much for your kind messages. I first did minimization, 
then NVT with gradual heating of the system from 0 to 310 K, and then 
NPT production:


NVT.mdp---
define = -DPOSRES  -DPOSRES_LIG
constraints= hbonds
integrator= md
dt= 0.001 ; ps !
nsteps= 3000000 ; total 3000.0 ps.
nstcomm= 10
nstxout= 5000 ; collect data every 1.0 ps
nstxtcout= 5000
nstvout= 5000
nstfout= 0
nstlog= 10
nstenergy= 50
nstlist= 10
ns_type= grid
rlist= 1.2
coulombtype= PME
rcoulomb= 1.2
vdwtype= cut-off
rvdw= 1.4
pme_order= 4
ewald_rtol= 1e-5
optimize_fft= yes
DispCorr= no
; Berendsen temperature coupling is on
Tcoupl= v-rescale
tau_t= 0.1 0.1
tc-grps= protein non-protein
ref_t= 310 310
; Pressure coupling is off
Pcoupl= no
Pcoupltype= isotropic
tau_p= 1.0
compressibility= 4.5e-5
ref_p= 1.0
pbc = xyz
annealing = single single
annealing_npoints = 2 2
annealing_time = 0 5000 0 5000
annealing_temp = 0 310 0 310
gen_vel  = no
constraint_algorithm = Lincs


---NPT md.mdp--
constraints= hbonds
integrator= md
dt= 0.002 ; ps !
nsteps= 5000000 ; total 10 ns.
nstcomm= 10
nstxout= 2 ; collect data every
nstenergy   = 2
nstxtcout   = 2
nstvout= 0
nstfout= 0
nstlist= 10
ns_type= grid
rlist= 1.2
coulombtype= PME
rcoulomb= 1.2
vdwtype= cut-off
rvdw= 1.4
pme_order= 4
ewald_rtol= 1e-5
optimize_fft= yes
DispCorr= no
; Berendsen temperature coupling is on
Tcoupl= v-rescale
tau_t= 0.1 0.1
tc-grps= protein non-protein
ref_t= 310 310
; Pressure coupling is on
Pcoupl= parrinello-rahman
Pcoupltype= isotropic
tau_p= 1.0
compressibility= 4.5e-5
ref_p= 1.0
; Generate velocites is on at 310 K.
gen_vel= yes
gen_temp= 310.0
gen_seed= 
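A quick way to keep .mdp bookkeeping honest is to check that nsteps times dt reproduces the total time claimed in the comments (3000 ps for the NVT run and 10 ns for the NPT run above). A small helper, illustrative only:

```shell
# Simulation length in ps from step count and timestep (ps).
sim_length_ps() {
    awk -v n="$1" -v dt="$2" 'BEGIN { printf "%.1f\n", n * dt }'
}

sim_length_ps 3000000 0.001   # NVT: should print 3000.0 ps
sim_length_ps 5000000 0.002   # NPT: should print 10000.0 ps = 10 ns
```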


[gmx-users] where can we obtain the latest Amber ff12SB FF?

2012-04-21 Thread Albert

hello:
  I am wondering where we can obtain the latest Amber ff12SB force 
field for Gromacs?


thx

[gmx-users] why it is so slow in Blue gene?

2012-04-24 Thread Albert

hello:

  I am running a 60,000-atom system on 128 cores in a Blue Gene 
cluster, and it gets only 1 ns/day. Here is the script I used for 
submitting jobs:


# @ job_name = gmx_test
# @ class = kdm-large
# @ error = gmx_test.err
# @ output = gmx_test.out
# @ wall_clock_limit = 00:20:00
# @ job_type = bluegene
# @ bg_size = 32
# @ queue
mpirun -exe /opt/gromacs/4.5.5/bin/mdrun_mpi_bg -args "-nosum -dlb yes 
-v -s npt
_01.tpr -o npt_01.trr -cpo npt_01.cpt -g npt_01.log -launch -nt" -mode 
VN -np 128




here is my npt.mdp

title= NPT-01
cpp  = /usr/bin/cpp
include  =
define = -DPOSRES  -DPOSRES_POPE_HEAD
integrator   = md
dt   = 0.001
nsteps   = 500000
nstxout  = 10
nstvout  = 10
nstlog   = 10
nstenergy= 5
nstxtcout= 5
xtc_grps =
energygrps = Protein SOL ION
nstcalcenergy= 10
nstlist  = 10
nstcomm  = 10
comm_mode= Linear
comm-grps= Protein_POPE Water_and_ions
ns_type  = grid
rlist= 1.2
rlistlong = 1.4
vdwtype = Switch
rvdw = 1.2
rvdw_switch = 0.8
coulombtype  = pme
rcoulomb = 1.2
rcoulomb_switch = 0.0
fourierspacing = 0.15
pme_order = 6
DispCorr = no
tcoupl   = V-rescale ;nose-hoover
nhchainlength= 1
tc-grps  = Protein_POPE Water_and_ions
tau_t= 0.1   0.1
ref_t= 310 310
Pcoupl   = berendsen;parrinello-rahman
Pcoupltype   = semiisotropic
tau_p= 1.0
compressibility  = 4.5e-5   4.5e-5
ref_p= 1.0  1.0
pbc = xyz
refcoord_scaling = com
gen_vel  = no
optimize_fft = no
constraints  = hbonds
constraint_algorithm = Lincs


Does anybody have any advice?

thank you very much
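One observation beyond the thread (an assumption about the bottleneck, not a confirmed diagnosis): nstxout = 10 writes full-precision coordinates every 10 steps, and on a Blue Gene that I/O volume alone can crush throughput. Counting the frames makes the point:

```shell
# Number of trajectory frames written for a given output interval.
frames_written() {
    nsteps=$1; nstout=$2
    echo $(( nsteps / nstout ))
}

frames_written 500000 10     # a 500000-step run -> 50000 full frames
frames_written 500000 5000   # raising nstxout to 5000 -> just 100
```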


[gmx-users] Re: why it is so slow in Blue gene

2012-04-24 Thread Albert

hello guys:

   thank you very much for the kind replies.

   I am not sure about the threaded version, since only the Blue Gene 
build is installed here. And the administrator told me that I have to 
use multiples of 32 for the bg_size parameter; the number specified in 
"-np" should be 4 times bg_size.
  Probably I should compile it myself. Does anybody have any idea 
how to compile gromacs on Blue Gene?


thank you very much
Albert



On 04/25/2012 12:31 AM, Dr. Vitaly V. Chaban wrote:


Albert -

What is the speed using serial gromacs for the same system?


Dr. Vitaly V. Chaban, 430 Hutchison Hall
Dept. Chemistry, University of Rochester
120 Trustee Road, Rochester, NY 14627-0216
THE UNITED STATES OF AMERICA




Re: [gmx-users] why it is so slow in Blue gene?

2012-04-24 Thread Albert

hello:

  it is Blue Gene/P, and the gromacs on the cluster is single 
precision. The administrator told me that I have to use multiples of 
32 for the bg_size parameter, and the number specified in "-np" should 
be 4 times bg_size.

  It is even slower than my own workstation with 16 cores.




here is the log file I get:

-log
Reading file npt_01.tpr, VERSION 4.5.5 (single precision)
Loaded with Money

Will use 112 particle-particle and 16 PME only nodes
This is a guess, check the performance at the end of the log file
Making 3D domain decomposition 4 x 4 x 7
starting mdrun 'GRowing Old MAkes el Chrono Sweat'
500000 steps, 500.0 ps.
step 0
vol 0.64! imb F 16% pme/F 0.22 step 100, will finish Wed Apr 25 18:28:06 
2012
vol 0.65! imb F 17% pme/F 0.21 step 200, will finish Wed Apr 25 18:09:54 
2012
vol 0.67! imb F 18% pme/F 0.21 step 300, will finish Wed Apr 25 18:03:12 
2012
vol 0.69! imb F 18% pme/F 0.21 step 400, will finish Wed Apr 25 17:58:25 
2012
vol 0.67! imb F 19% pme/F 0.21 step 500, will finish Wed Apr 25 17:55:26 
2012
vol 0.68! imb F 19% pme/F 0.22 step 600, will finish Wed Apr 25 17:53:31 
2012
vol 0.68! imb F 19% pme/F 0.22 step 700, will finish Wed Apr 25 17:51:57 
2012
vol 0.68! imb F 19% pme/F 0.22 step 800, will finish Wed Apr 25 17:50:32 
2012
vol 0.68! imb F 20% pme/F 0.22 step 900, will finish Wed Apr 25 17:49:14 
2012
vol 0.67! imb F 21% pme/F 0.22 step 1000, will finish Wed Apr 25 
17:48:13 2012
vol 0.68! imb F 20% pme/F 0.22 step 1100, will finish Wed Apr 25 
17:47:28 2012
vol 0.67! imb F 21% pme/F 0.22 step 1200, will finish Wed Apr 25 
17:46:50 2012
vol 0.67! imb F 21% pme/F 0.22 step 1300, will finish Wed Apr 25 
17:46:15 2012




On 04/24/2012 06:01 PM, Hannes Loeffler wrote:

On Tue, 24 Apr 2012 15:42:15 +0200
Albert  wrote:


hello:

I am running a 60,000 atom system with 128 core in a blue gene
cluster. and it is only 1ns/day here is the script I used for

You don't give any information what exact system that is (L/P/Q?), if
you run single or double precision and what force field you are using.
But for a similar sized system using a united atom force field in
single precision we find about 4 ns/day on a BlueGene/P (see our
benchmarking reports on
http://www.stfc.ac.uk/CSE/randd/cbg/Benchmark/25241.aspx).  I would
expect a run with the CHARMM 27 force field in double precision to be
roughly 3 times slower.  We found scaling to 128 cores to be
reasonably good. Also, check our report for problems when compiling
with higher optimisation.

Hannes.




[gmx-users] a question about energygrps

2012-04-25 Thread Albert

Hello:

  I am running a membrane simulation with gromacs, and I am wondering 
how to handle energygrps. Should I put protein and lipids into one 
energy group, or should I leave the lipids with the solvent and ions?


thank you very much
best
Albert


Re: [gmx-users] a question about energygrps

2012-04-25 Thread Albert

hello Justin:
  thank you very much for kind reply.
  In a gromacs tutorial I found that the author uses the following parameters:

tc-grps= Protein DPPC SOL_CL; three coupling groups - more 
accurate

comm-grps= Protein_DPPC SOL_CL


do you have any idea why he used different groups for the above 
parameters in NVT.mdp?


thank you very much


On 04/25/2012 04:13 PM, Justin A. Lemkul wrote:



On 4/25/12 10:07 AM, Albert wrote:

Hello:

I am running a membrane simulation with gromacs and I am wondering 
how to deal with energygrps? Should I put protein and lipids into one 
energy group? Or should I leave the lipids with the solvent and ions?



You can divide the system in any way you like for energygrps; it's 
just a decomposition of the nonbonded terms.  The way you break it 
down depends on what you care to measure in the system.


-Justin
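To make the point concrete (the group names here are assumptions and must exist in your index file), the decomposition is just an .mdp line; with it in place, g_energy offers pairwise terms such as Coul-SR:Protein-POPC and LJ-SR:Protein-POPC:

```shell
# Write an illustrative .mdp fragment to a scratch file and display it.
cat > energygrps.mdp.fragment <<'EOF'
energygrps = Protein POPC SOL_ION
EOF
cat energygrps.mdp.fragment
```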





Re: [gmx-users] a question about energygrps

2012-04-25 Thread Albert

hello Justin:

  thank you very much for such kind explanations. It is quite helpful.

  I am wondering how many CPUs you usually use for membrane 
simulations, and how many ns/day you can get at best with Gromacs if 
CPUs are not a limitation? I am very confused about this, since some 
people claim that they can use optimized pme/F parameters to achieve 
even 100 ns/day for a 20,000-atom system. I also tried g_tune_pme to 
get better performance, but the results are not so satisfying. Can you 
give me some advice on this?


thank you very much.

best
Albert


On 04/25/2012 04:34 PM, Justin A. Lemkul wrote:

Indeed I do - I wrote it ;)

Ideally, one would like to use a single thermostat for the whole 
system, but in practice, especially for heterogeneous systems, the 
heat exchange between different groups can be different.  Hence the 
protein, lipids, and aqueous solvent are coupled separately.  They are 
of sufficient size to justify their own group (note, for instance, 
that ions are not coupled separately).


With respect to the comm-grps: since the system is basically an 
interface, the lipid and aqueous layers can slide with respect to one 
another, generating no net COM motion, yet each layer may actually 
have a net velocity, which would lead to artifacts.


-Justin 




[gmx-users] how to run g_tune_pme in cluster?

2012-04-26 Thread Albert

Hello:
  Does anybody have any idea how to run g_tune_pme on a cluster? I 
tried many times with the following command:


g_tune_pme_d -v -s npt_01.tpr -o npt_01.trr -cpo npt_01.cpt -g 
npt_01.log -launch -nt 24 > log &


but it always failed.


Option   Type   Value   Description
--
-[no]h   bool   no  Print help info and quit
-[[CUDANodeA:03384] [[60523,1],22] ORTE_ERROR_LOG: A message is 
attempting to be sent to a process whose contact information is unknown 
in file rml_oob_send.c at line 105

[CUDANodeA:03384] [[60523,1],22] could not get route to [[INVALID],INVALID]
[CUDANodeA:03384] [[60523,1],22] ORTE_ERROR_LOG: A message is attempting 
to be sent to a process whose contact information is unknown in file 
base/plm_base_proxy.c at line 86




Re: [gmx-users] how to run g_tune_pme in cluster?

2012-04-26 Thread Albert

hello:
  it can find mdrun correctly. and it is only give me the log file as I 
mentioned in previous thread.


thank you very much

On 04/26/2012 09:53 AM, Carsten Kutzner wrote:

Hi,

what output does g_tune_pme provide? What is in "log" and in
"perf.out"?
Can it find the correct mdrun / mpirun executables?

Carsten


On Apr 26, 2012, at 9:28 AM, Albert wrote:


Hello:
  Does anybody have any idea how to run g_tune_pme in a cluster? I tried many 
times with following command:

g_tune_pme_d -v -s npt_01.tpr -o npt_01.trr -cpo npt_01.cpt -g npt_01.log -launch -nt 
24>  log&

but it always failed.


Option   Type   Value   Description
--
-[no]h   bool   no  Print help info and quit
-[[CUDANodeA:03384] [[60523,1],22] ORTE_ERROR_LOG: A message is attempting to 
be sent to a process whose contact information is unknown in file 
rml_oob_send.c at line 105
[CUDANodeA:03384] [[60523,1],22] could not get route to [[INVALID],INVALID]
[CUDANodeA:03384] [[60523,1],22] ORTE_ERROR_LOG: A message is attempting to be 
sent to a process whose contact information is unknown in file 
base/plm_base_proxy.c at line 86





Re: [gmx-users] GPCR MD Tutorial Using GROMACS (URL)

2012-04-26 Thread Albert

It seems to be good.
Just one piece of advice: why not use CHARMM36 for this tutorial, 
since it is currently the best FF for lipids?


On 04/26/2012 11:14 AM, Anirban Ghosh wrote:

Hi ALL,

I have prepared a step-wise tutorial for running an MD simulation of a 
GPCR protein inserted in a lipid bilayer. It can be found at the 
following URL:


https://sites.google.com/site/anirbanzz/gpcr-gromacs-tutorial

I sincerely hope it will help people who are new to such simulations 
and the GROMACS community in general. This tutorial is adapted from 
the membrane protein tutorial prepared by Justin Lemkul.



Regards,

Anirban





Re: [gmx-users] GPCR MD Tutorial Using GROMACS (URL)

2012-04-26 Thread Albert

Hello Anirban:

  Thanks for the kind comments.
  How long did you mean by "fairly long simulation time"? Does 1 µs 
belong to this range? The CHARMM36 FF is available on the GROMACS website; 
we can download it, put it into the top directory, and it works. We do 
not need to make any modification ourselves.


best
Albert


On 04/26/2012 11:53 AM, Anirban Ghosh wrote:

Hello Albert,

Thanks.
Yes, CHARMM36 indeed handles lipids very well. But currently GROMACS 
4.5.5 only provides the CHARMM27 FF, and I found that ff43a1 preserves 
the character of both the protein and the lipids very well over fairly 
long simulation times, hence I used that FF in the tutorial. But one 
can surely add CHARMM36 to GROMACS by doing all the necessary topology 
conversions.



Regards,

Anirban




Re: [gmx-users] how to run g_tune_pme in cluster?

2012-04-26 Thread Albert
 and Erik Lindahl.

   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2010, The GROMACS development team at
Uppsala University & The Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License
 as published by the Free Software Foundation; either version 2
 of the License, or (at your option) any later version.

:-)  mdrun  (-:

Program: mdrun
Version: VERSION 4.6-dev-20120423-25c75
GIT SHA1 hash: 25c752a51955337dc61d80a180ed9efa26f2121f
Branched from: 25c752a51955337dc61d80a180ed9efa26f2121f (0 newer 
local commits)

Precision: single
Parallellization: thread_mpi
FFT Library: fftw3






On 04/26/2012 12:07 PM, Carsten Kutzner wrote:

On Apr 26, 2012, at 11:37 AM, Albert wrote:


hello:
  it can find mdrun correctly. and it is only give me the log file as I 
mentioned in previous thread.

What files are produced by g_tune_pme?
Is there a benchtest.log? Can you cat its contents?

Carsten

thank you very much

On 04/26/2012 09:53 AM, Carsten Kutzner wrote:

Hi,

what output does g_tune_pme provide? What is in "log" and in
"perf.out"?
Can it find the correct mdrun / mpirun executables?

Carsten


On Apr 26, 2012, at 9:28 AM, Albert wrote:


Hello:
  Does anybody have any idea how to run g_tune_pme in a cluster? I tried many 
times with following command:

g_tune_pme_d -v -s npt_01.tpr -o npt_01.trr -cpo npt_01.cpt -g npt_01.log -launch -nt 
24>   log&

but it always failed.


Option   Type   Value   Description
--
-[no]h   bool   no  Print help info and quit
-[[CUDANodeA:03384] [[60523,1],22] ORTE_ERROR_LOG: A message is attempting to 
be sent to a process whose contact information is unknown in file 
rml_oob_send.c at line 105
[CUDANodeA:03384] [[60523,1],22] could not get route to [[INVALID],INVALID]
[CUDANodeA:03384] [[60523,1],22] ORTE_ERROR_LOG: A message is attempting to be 
sent to a process whose contact information is unknown in file 
base/plm_base_proxy.c at line 86






Re: [gmx-users] GPCR MD Tutorial Using GROMACS (URL)

2012-04-26 Thread Albert

Hi Anirban:
   How many ns/day do you get in your simulations? Did you use PME?

best
Albert


On 04/26/2012 12:59 PM, Anirban Ghosh wrote:

Hello Albert,

Good to know that!
I have carried out simulations using this FF in the range of 600 ns.

Regards,

Anirban




[gmx-users] blue gene running error

2012-04-27 Thread Albert

hello:
  I am running NPT on a Blue Gene cluster, but the jobs always fail 
with the following messages. However, everything goes well when I run 
on my local cluster:



---log---
ol 0.66! imb F  6% pme/F 0.45 step 900, will finish Mon Apr 30 04:46:31 2012
vol 0.66! imb F  6% pme/F 0.46 step 1000, will finish Mon Apr 30 
04:41:19 2012


Step 1053, time 2.106 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.000303, max 0.005164 (between atoms 18949 and 18948)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
  18949  18948   42.30.1112   0.1117  0.

Step 1054, time 2.108 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 3.732914, max 37.515310 (between atoms 18947 and 18945)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
  18948  18949   90.00.1117   4.2691  0.
  18947  18945   90.00.1110   4.2791  0.

Step 1054, time 2.108 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 1.580342, max 37.414397 (between atoms 18949 and 18948)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
  18949  18948   90.00.1117   4.2678  0.
  18945  18947   89.90.1110   4.2789  0.
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates

Step 1055:

Step 1055:
The charge group starting at atom 18949 moved more than the distance allowed 
by the domain decomposition (0.920188) in direction Z
The charge group starting at atom 18947 moved more than the distance allowed 
by the domain decomposition (0.920188) in direction Z

distance out of cell -2.676795
distance out of cell 1.235327
Old coordinates:5.6670.0057.374
Old coordinates:5.7926.0197.474
New coordinates:3.6336.1613.620
New coordinates:8.203   -0.016   10.910
Old cell boundaries in direction Z:6.2957.449
Old cell boundaries in direction Z:7.3519.673
New cell boundaries in direction Z:6.2977.450
New cell boundaries in direction Z:7.3579.674


Program mdrun_mpi_bg, VERSION 4.5.5
Source code file: ../../../src/mdlib/domdec.c, line: 4124

Fatal error:
A charge group moved too far between two domain decomposition steps
This usually means that your system is not well equilibrated
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"They Were So Quiet About It" (Pixies)

Error on node 53, will try to stop all the nodes
Halting parallel program mdrun_mpi_bg on CPU 53 out of 64

---
Program mdrun_mpi_bg, VERSION 4.5.5
Source code file: ../../../src/mdlib/domdec.c, line: 4124

Fatal error:
A charge group moved too far between two domain decomposition steps
This usually means that your system is not well equilibrated
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"They Were So Quiet About It" (Pixies)

Error on node 62, will try to stop all the nodes
Halting parallel program mdrun_mpi_bg on CPU 62 out of 64

gcq#37: "They Were So Quiet About It" (Pixies)

Abort(-1) on node 53 (rank 53 in comm 1140850688): application called 
MPI_Abort(MPI_COMM_WORLD, -1) - process 53


gcq#37: "They Were So Quiet About It" (Pixies)

Abort(-1) on node 62 (rank 62 in comm 1140850688): application called 
MPI_Abort(MPI_COMM_WORLD, -1) - process 62




Re: [gmx-users] blue gene running error

2012-04-27 Thread Albert

hello Mark:
  Thanks a lot for the kind reply.
  From the link you mentioned, it seems that this problem comes 
from the MD system itself. However, it runs fine on my workstation. 
Moreover, I visualized and analyzed the results from the workstation 
run, and everything looks fine; I don't find any problem with it.

But I don't know why it doesn't work on the Blue Gene machine.

THX

Albert



On 04/28/2012 07:36 AM, Mark Abraham wrote:

On 28/04/2012 2:04 PM, Albert wrote:

hello:
  I am running NPT on a blue gene cluster, but the jobs always failed 
with following messages. However, everything goes well if I run it on 
my local cluster:


Systems with marginally stable initial conditions can do this. See 
http://www.gromacs.org/Documentation/Errors#A_charge_group_moved_too_far_between_two_domain_decomposition_steps.


Mark






[gmx-users] a question about ensemble

2012-05-02 Thread Albert

hello:

  I am wondering whether the three thermostat methods (Langevin, 
Berendsen, and Nose-Hoover chains) are all compatible with 
semi-isotropic pressure coupling. If I would like to use the 
semi-isotropic coupling method, which one would be better?


thank you very much

best
Albert

Re: [gmx-users] a question about ensemble

2012-05-03 Thread Albert

Hello Flo:

  thank you so much for your kind comments.
  Yes, I would like to couple the pressure, it really helps a lot.

best
Albert

On 05/03/2012 10:40 AM, Dommert Florian wrote:

On Thu, 2012-05-03 at 07:32 +0200, Albert wrote:

hello:

   I wondering are the three thermostat methods: Langevin, Berendsen
and Nose-Hoover chain are all compatible with semi-isotropy coupling
style? If I would like to use semi-isotropy coupling method, which one
would be better?

thank you very much


Hi,

what should be coupled in a semi-isotropic manner? I assume the
pressure, and now the question is which thermostat to apply, isn't it?

The three mentioned thermostats are all of different kinds. While Langevin
provides a thermostatting method for implicit solvent, the other
mentioned thermostats are based on an explicit-atom description of the
system. However, the Berendsen thermostat is quite old and not symplectic,
which means that the phase-space volume is not conserved. Fortunately,
an updated method, the v-rescale thermostat of Bussi et al., was
published some years ago. It is quite similar to the Berendsen
thermostat, but symplectic and suitable for both production and
equilibration. Finally, the Nose-Hoover chain (NHC) is based on an
extended Lagrangian for the system you want to simulate, and the
corresponding equations of motion are applied in order to keep the
temperature constant. NHC is symplectic too, but not suitable for
equilibration. However, as the only reasonable method for anisotropic
pressure coupling is the Parrinello-Rahman (PR) barostat, or its
extended version MTTK, which relies on the same idea as NHC, I would
assume that for production a combination of NHC and MTTK is a good
choice. For equilibration I would use the v-rescale thermostat and the
Berendsen barostat, because PR and MTTK would take far too much time to
reach equilibrium.

Hence, which combination of thermo- and barostat is most suitable
depends very much on the purpose.

/Flo




best
Albert





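Flo's advice above maps onto .mdp settings roughly as follows. This is an illustrative sketch, not settings taken from the thread; the time constants and compressibilities are assumed typical membrane-simulation values:

```
; equilibration sketch (assumed values)
tcoupl          = v-rescale
pcoupl          = berendsen
pcoupltype      = semiisotropic
tau_p           = 1.0
ref_p           = 1.0 1.0
compressibility = 4.5e-5 4.5e-5

; production sketch (assumed values)
tcoupl          = nose-hoover
pcoupl          = parrinello-rahman
pcoupltype      = semiisotropic
tau_p           = 5.0
ref_p           = 1.0 1.0
compressibility = 4.5e-5 4.5e-5
```

With pcoupltype = semiisotropic, ref_p and compressibility each take two values: one for the membrane (x/y) plane and one for the normal (z) direction.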

[gmx-users] how to extract trajectories into individual pdb file?

2012-05-03 Thread Albert

hello:

  I've finished an MD job and I am wondering how we can extract 
individual PDB files from a trajectory in GROMACS. Each time I get a 
single PDB file containing lots of snapshots.


thank you very much
best
Albert

Re: [gmx-users] how to extract trajectories into individual pdb file?

2012-05-03 Thread Albert

On 05/03/2012 05:12 PM, francesco oteri wrote:

In particular, look at the option -sep



thank you for the kind reply. But how do I superimpose the remaining 
snapshots onto the first one?


thanks again for helps

Re: [gmx-users] how to extract trajectories into individual pdb file?

2012-05-03 Thread Albert

thank you very much.

I found a problem: there is no option to select the step. E.g., 
I would like to export one snapshot every 10 ps, but I don't find such 
an option.


THX

On 05/03/2012 05:21 PM, francesco oteri wrote:

-fit

2012/5/3 Albert <mailmd2...@gmail.com>

On 05/03/2012 05:12 PM, francesco oteri wrote:

In particular, look at the option -sep



thank you for kind reply. but how to superimposed the left
snapshot with the first one?

thanks again for helps





--
Cordiali saluti, Dr.Oteri Francesco




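The options mentioned in this thread combine into a single trjconv call. A hedged sketch (file names are placeholders; -sep, -dt, and -fit are documented trjconv options in GROMACS 4.x; the script only prints the command so you can run it where GROMACS is installed):

```shell
# -sep            write each frame to its own numbered file (frame_0.pdb, frame_1.pdb, ...)
# -dt 10          write one frame every 10 ps
# -fit rot+trans  least-squares fit every frame onto the reference structure in md.tpr
cmd="trjconv -s md.tpr -f md.xtc -o frame_.pdb -sep -dt 10 -fit rot+trans"
echo "$cmd"
```

trjconv will prompt for a fit group and an output group; the fit group (e.g. the protein backbone) answers the superposition question raised above.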

[gmx-users] amber2xtc.py error

2012-05-28 Thread Albert

hello:

  I am trying to use amber2xtc.py script to convert Amber MD system 
into gromacs format by command:


python amber2xtc.py npt3.mdcrd apo.prmtop . *.rst md_gromacs

however, I got the following messages

--log
 USAGE : python amber2xtc.py AMBERCRD AMBERTOP TRAJDIR TRAJPATTERN 
OUTPUTPREFIX
  Example : python amber2xtc.py mdcrd.crd mdcrd.top md *.x.gz 
md_gromacs

  Note that the AmberCrd can also be a PDB file.

Will convert the following files :
['m1.rst']
currently converting m1.rst
ls: cannot access *.pdb.*: No such file or directory
--

I am wondering how to fix this problem?

thank you very much
A.
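One likely culprit, judging from the usage string: the unquoted TRAJPATTERN argument (*.rst) is expanded by the shell before Python ever sees it. A hedged sketch of a quoted invocation (file names are from the post; amber2xtc.py is a third-party script, so this is an assumption about its argument handling; the script only prints the command):

```shell
# Quote the glob so amber2xtc.py receives the pattern itself,
# not the shell-expanded list of matching files.
cmd='python amber2xtc.py npt3.mdcrd apo.prmtop . "*.rst" md_gromacs'
echo "$cmd"
```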




[gmx-users] GPU running problem with GMX-4.6 beta2

2012-12-17 Thread Albert

hello:

 I am running a GMX-4.6 beta2 GPU job on a 24-CPU-core workstation with 
two GTX 590s, and it gets stuck without any output, i.e. the .xtc file 
size is still 0 after hours of running. Here is the md.log file I found:



Using CUDA 8x8x8 non-bonded kernels

Potential shift: LJ r^-12: 0.112 r^-6 0.335, Ewald 1.000e-05
Initialized non-bonded Ewald correction tables, spacing: 7.82e-04 size: 1536

Removing pbc first time
Pinning to Hyper-Threading cores with 12 physical cores in a compute node
There are 1 flexible constraints

WARNING: step size for flexible constraining = 0
 All flexible constraints will be rigid.
 Will try to keep all flexible constraints at their original 
length,

 but the lengths may exhibit some drift.

Initializing Parallel LINear Constraint Solver
Linking all bonded interactions to atoms
There are 161872 inter charge-group exclusions,
will use an extra communication step for exclusion forces for PME

The initial number of communication pulses is: X 1
The initial domain decomposition cell size is: X 1.83 nm

The maximum allowed distance for charge groups involved in interactions is:
 non-bonded interactions   1.200 nm
(the following are initial values, they could change due to box deformation)
two-body bonded interactions  (-rdd)   1.200 nm
  multi-body bonded interactions  (-rdd)   1.200 nm
  atoms separated by up to 5 constraints  (-rcon)  1.826 nm

When dynamic load balancing gets turned on, these settings will change to:
The maximum number of communication pulses is: X 1
The minimum size for domain decomposition cells is 1.200 nm
The requested allowed shrink of DD cells (option -dds) is: 0.80
The allowed shrink of domain decomposition cells is: X 0.66
The maximum allowed distance for charge groups involved in interactions is:
 non-bonded interactions   1.200 nm
two-body bonded interactions  (-rdd)   1.200 nm
  multi-body bonded interactions  (-rdd)   1.200 nm
  atoms separated by up to 5 constraints  (-rcon)  1.200 nm

Making 1D domain decomposition grid 4 x 1 x 1, home cell index 0 0 0

Center of mass motion removal mode is Linear
We have the following groups for center of mass motion removal:
  0:  Protein_LIG_POPC
  1:  Water_and_ions

 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
G. Bussi, D. Donadio and M. Parrinello
Canonical sampling through velocity rescaling
J. Chem. Phys. 126 (2007) pp. 014101
  --- Thank You ---  



THX


Re: [gmx-users] GPU running problem with GMX-4.6 beta2

2012-12-17 Thread Albert
 order in the expansion of the 
constraint coupling matrix (related to accuracy)


; Parameters for treating electrostatic interactions
coulombtype = PME   ; Long range electrostatic interactions 
treatment (cut-off, Ewald, PME)
pme_order   = 4 ; Interpolation order for PME (cubic 
interpolation is represented by 4)
fourierspacing  = 0.12  ; Maximum grid spacing for FFT grid 
using PME (nm)


; Temperature coupling parameters
tcoupl  = V-rescale ; Modified Berendsen thermostat 
using velocity rescaling
tc-grps = Protein_LIG POPC Water_and_ions ; Define groups to be 
coupled separately to temperature bath
tau_t   = 0.1   0.1 0.1 ; Group-wise coupling time 
constant (ps)
ref_t   = 303   303 303 ; Group-wise reference 
temperature (K)


; Pressure coupling parameters
pcoupl  = no; Under NVT conditions pressure coupling 
is not done


; Miscellaneous control parameters
; Dispersion correction
DispCorr= EnerPres  ; Dispersion corrections for Energy and 
Pressure for vdW cut-off

; Initial Velocity Generation
gen_vel = yes   ; Generate velocities from Maxwell 
distribution at given temperature
gen_temp= 303   ; Specific temperature for Maxwell 
distribution (K)
gen_seed= -1; Use random seed for velocity 
generation (integer; -1 means seed is calculated from the process ID number)

; Centre of mass (COM) motion removal relative to the specified groups
nstcomm = 1 ; COM removal frequency (steps)
comm_mode   = Linear; Remove COM translation (linear 
/ angular / no)
comm_grps   = Protein_LIG_POPC Water_and_ions ; COM removal relative 
to the specified groups


THX





On 12/17/2012 05:45 PM, Szilárd Páll wrote:

Hi,

That unfortunately tells us nothing about the reason why mdrun is stuck. Can
you reproduce the issue on other machines or with different launch
configurations? At which step does it get stuck (-stepout 1 can help)?

Please try the following:
- try running on a single GPU;
- try running on CPUs only (-nb cpu and to match closer the GPU setup with
-ntomp 12);
- try running in GPU emulation mode with the GMX_EMULATE_GPU=1 env. var
set (and to match closer the GPU setup with -ntomp 12)
- provide a backtrace (using gdb).

Cheers,

--
Szilárd



On Mon, Dec 17, 2012 at 5:37 PM, Albert  wrote:


hello:

  I am running GMX-4.6 beta2 GPU work in a 24 CPU core workstation with two
GTX590, it stacked there without any output i.e the .xtc file size is
always 0 after hours of running. Here is the md.log file I found:


Using CUDA 8x8x8 non-bonded kernels

Potential shift: LJ r^-12: 0.112 r^-6 0.335, Ewald 1.000e-05
Initialized non-bonded Ewald correction tables, spacing: 7.82e-04 size:
1536

Removing pbc first time
Pinning to Hyper-Threading cores with 12 physical cores in a compute node
There are 1 flexible constraints

WARNING: step size for flexible constraining = 0
  All flexible constraints will be rigid.
  Will try to keep all flexible constraints at their original
length,
  but the lengths may exhibit some drift.

Initializing Parallel LINear Constraint Solver
Linking all bonded interactions to atoms
There are 161872 inter charge-group exclusions,
will use an extra communication step for exclusion forces for PME

The initial number of communication pulses is: X 1
The initial domain decomposition cell size is: X 1.83 nm

The maximum allowed distance for charge groups involved in interactions is:
  non-bonded interactions   1.200 nm
(the following are initial values, they could change due to box
deformation)
 two-body bonded interactions  (-rdd)   1.200 nm
   multi-body bonded interactions  (-rdd)   1.200 nm
   atoms separated by up to 5 constraints  (-rcon)  1.826 nm

When dynamic load balancing gets turned on, these settings will change to:
The maximum number of communication pulses is: X 1
The minimum size for domain decomposition cells is 1.200 nm
The requested allowed shrink of DD cells (option -dds) is: 0.80
The allowed shrink of domain decomposition cells is: X 0.66
The maximum allowed distance for charge groups involved in interactions is:
  non-bonded interactions   1.200 nm
 two-body bonded interactions  (-rdd)   1.200 nm
   multi-body bonded interactions  (-rdd)   1.200 nm
   atoms separated by up to 5 constraints  (-rcon)  1.200 nm

Making 1D domain decomposition grid 4 x 1 x 1, home cell index 0 0 0

Center of mass motion removal mode is Linear
We have the following groups for center of mass motion removal:
   0:  Protein_LIG_POPC
   1:  Water_and_ions

 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
G. Bussi, D. Donadio and M. Parrinello
Canonical sampling through velocity rescaling
J. Chem. Phys. 126 (2007) pp. 014101
  --- Thank You --- -
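Szilárd's debugging checklist above can be run as a sequence of trial invocations. A hedged sketch (the binary name and nvt.tpr are taken from the thread; -stepout, -nb cpu, -ntomp, and GMX_EMULATE_GPU are documented in GROMACS 4.6; the script only prints the commands so you can run them on the node):

```shell
# Three diagnostic runs, from least to most invasive:
run1="mdrun -v -s nvt.tpr -stepout 1"                   # find the exact step where it hangs
run2="mdrun -v -s nvt.tpr -nb cpu -ntomp 12"            # CPU-only non-bonded kernels
run3="GMX_EMULATE_GPU=1 mdrun -v -s nvt.tpr -ntomp 12"  # GPU kernels emulated on the CPU
printf '%s\n' "$run1" "$run2" "$run3"
```

If the CPU-only and emulation runs also crash, the problem is in the system or inputs rather than the GPU code path.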

Re: [gmx-users] GPU running problem with GMX-4.6 beta2

2012-12-17 Thread Albert

On 12/17/2012 06:08 PM, Szilárd Páll wrote:

Hi,

How about GPU emulation or CPU-only runs? Also, please try setting the
number of threads to 1 (-ntomp 1).


--
Szilárd



hello:

I ran in GPU emulation mode with the GMX_EMULATE_GPU=1 env. var set 
(and -ntomp 12 to match the GPU setup more closely); it failed with this log:

Back Off! I just backed up step33b.pdb to ./#step33b.pdb.2#

Back Off! I just backed up step33c.pdb to ./#step33c.pdb.2#
Wrote pdb files with previous and current coordinates
[CUDANodeA:20753] *** Process received signal ***
[CUDANodeA:20753] Signal: Segmentation fault (11)
[CUDANodeA:20753] Signal code: Address not mapped (1)
[CUDANodeA:20753] Failing at address: 0x106ae6a00

[1]Segmentation faultmdrun_mpi -v -s nvt.tpr -c nvt.gro -g 
nvt.log -x nvt.xtc -ntomp 12




I also tried setting the number of threads to 1 (-ntomp 1); it failed with the 
following messages:


Back Off! I just backed up step33c.pdb to ./#step33c.pdb.1#
Wrote pdb files with previous and current coordinates
[CUDANodeA:20740] *** Process received signal ***
[CUDANodeA:20740] Signal: Segmentation fault (11)
[CUDANodeA:20740] Signal code: Address not mapped (1)
[CUDANodeA:20740] Failing at address: 0x1f74a96ec
[CUDANodeA:20740] [ 0] /lib64/libpthread.so.0(+0xf2d0) [0x2b351d3022d0]
[CUDANodeA:20740] [ 1] /opt/gromacs-4.6/lib/libmd_mpi.so.6(+0x11020f) 
[0x2b351a99c20f]
[CUDANodeA:20740] [ 2] /opt/gromacs-4.6/lib/libmd_mpi.so.6(+0x111c94) 
[0x2b351a99dc94]
[CUDANodeA:20740] [ 3] 
/opt/gromacs-4.6/lib/libmd_mpi.so.6(gmx_pme_do+0x1d2e) [0x2b351a9a1bae]
[CUDANodeA:20740] [ 4] 
/opt/gromacs-4.6/lib/libmd_mpi.so.6(do_force_lowlevel+0x1eef) 
[0x2b351a97262f]
[CUDANodeA:20740] [ 5] 
/opt/gromacs-4.6/lib/libmd_mpi.so.6(do_force_cutsVERLET+0x1756) 
[0x2b351aa04736]
[CUDANodeA:20740] [ 6] 
/opt/gromacs-4.6/lib/libmd_mpi.so.6(do_force+0x3bf) [0x2b351aa0a0df]

[CUDANodeA:20740] [ 7] mdrun_mpi(do_md+0x8133) [0x4334c3]
[CUDANodeA:20740] [ 8] mdrun_mpi(mdrunner+0x19e9) [0x411639]
[CUDANodeA:20740] [ 9] mdrun_mpi(main+0x17db) [0x4373db]
[CUDANodeA:20740] [10] /lib64/libc.so.6(__libc_start_main+0xfd) 
[0x2b351d52ebfd]

[CUDANodeA:20740] [11] mdrun_mpi() [0x407f09]
[CUDANodeA:20740] *** End of error message ***

[1]  Segmentation fault     mdrun_mpi -v -s nvt.tpr -c nvt.gro 
-g nvt.log -x nvt.xtc -ntomp 1




--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] GPU running problem with GMX-4.6 beta2

2012-12-17 Thread Albert

well, that's one of the log files.
I've tried both

VERSION 4.6-dev-20121004-5d6c49d
VERSION 4.6-beta1
VERSION 4.6-beta2
and the latest 5.0 by git.

the problems are the same. :-(




On 12/17/2012 07:56 PM, Mark Abraham wrote:

On Mon, Dec 17, 2012 at 6:01 PM, Albert  wrote:


>hello:
>
>  I reduced the GPU to two, and it said:
>
>Back Off! I just backed up nvt.log to ./#nvt.log.1#
>Reading file nvt.tpr, VERSION 4.6-dev-20121004-5d6c49d (single precision)
>

This is a development version from October 1. Please use the mdrun version
you think you're using:-)

Mark
--




Re: [gmx-users] GPU running problem with GMX-4.6 beta2

2012-12-18 Thread Albert

On 12/17/2012 08:06 PM, Justin Lemkul wrote:
It seems to me that the system is simply crashing like any other that 
becomes unstable.  Does the simulation run at all on plain CPU?


-Justin 



Thank you very much Justin, it's really helpful. I checked the structure 
after minimization and found that there was a problem with my ligand. I 
regenerated the ligand topology with acpype and resubmitted the 
minimization and NVT runs. Now it goes well, so the problem probably came 
from the incorrect ligand topology, which made the system very unstable.


best
Albert


Re: [gmx-users] Pre-equilibrated CHARMM lipid bilayers

2012-12-20 Thread Albert

On 12/20/2012 09:13 AM, pcl wrote:

Well what works for me is I convert cgenff and merge it with charmm36 (you only 
have to do this once per cgenff version), then I have paramchem generate cgenff 
charges for the ligand. Then I convert the output of paramchem (charges) to 
.rtp format. I also have to create .hdb entries. Paramchem may also generate 
additional cgenff atom interactions (dihedrals or impropers) that may not exist 
by default, I usually convert and add those to forcefield's .itp files. Then 
pdb2gmx will work on the ligand pdb.



but isn't there a script on the Gromacs website that can convert the 
output from paramchem into Gromacs .itp format? Although I didn't try 
very hard, because I couldn't find any documentation on how to use it 
correctly.



[gmx-users] voltage for membrane?

2012-12-22 Thread Albert

Dear all:

  As we know, many membranes have a membrane potential, typically in the 
range from -40 mV to 80 mV. I am just wondering: is it possible to apply 
a voltage in a membrane protein simulation in Gromacs? If yes, how? I 
went through the .mdp documentation and didn't find anything concerning 
this.


THX
Albert
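
[One sometimes-used stand-in is GROMACS's applied-electric-field .mdp 
options; whether a static field is a good model of a membrane potential 
is a separate question. A hedged sketch, assuming the GROMACS 4.x syntax 
in which E-x/E-y/E-z take a number of cosine terms, an amplitude in 
V/nm, and a phase — check your version's manual before relying on this:]

```
; hypothetical .mdp fragment: static electric field along z
; the 0.02 V/nm amplitude is purely illustrative
E-z = 1 0.02 0
```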


[gmx-users] how to convert CGenFF into .itp file?

2012-12-26 Thread Albert

hello:

   I found the script charmm2gromacs-pvm.py 
<http://www.gromacs.org/@api/deki/files/185/=charmm2gromacs-pvm.py> 
which is claimed to convert the output from CGenFF into Gromacs format. 
However, I tried many times and it always failed, even with the advice 
from the previous thread. This script tries to generate something like 
what we see in a complete force-field folder instead of a single .itp 
file for the ligand.


  I am just wondering: how can we convert the output from CGenFF into a 
single .itp file similar to the one from Swissparam?


thank you very much
best
Albert


Re: [gmx-users] how to convert CGenFF into .itp file?

2012-12-26 Thread Albert

On 12/26/2012 12:18 PM, Peter C. Lai wrote:

You don't. CGenFF is a force field, like CHARMM36. You install it, add .rtp
entries, then use pdb2gmx to generate a ligand's topology .itp file.


THX
but the problem is how to use this script. I've already downloaded the 
latest CGenFF files from the CHARMM FF website; it is a folder.



Re: [gmx-users] how to convert CGenFF into .itp file?

2012-12-26 Thread Albert

On 12/26/2012 12:39 PM, Peter C. Lai wrote:

It should come with two files. A .prm file contains the actual
force-field parameters, which you use the script to convert into bonded and
nonbonded .itp files plus atomtypes.atp. The .rtf file is the charmm equivalent of
our .rtp file: it contains some premade residue topologies with charge and
connectivity information. I don't know if there are scripts to convert this or
not, but it's easy enough to get what you need by hand, especially since, if
your ligand isn't in there, you'll have to create the .rtp entry on your own
or get it from paramchem anyway...


THX for the comments. It works now and I get a folder called cgenff-2b7.ff 
like what we see in the share/top folder for other FFs.


That is too complicated for real use, though. Initially, I thought that 
the output for the ligand would be a single .itp file like what we get 
from Swissparam.


Probably one could consider improving this script. As far as I know, the 
CGenFF website can export the full parameters for a ligand even if they 
already exist in the offline CGenFF files. In that case, the output file 
from the CGenFF website is independent of the offline FF and already 
contains all the necessary parameter and topology information, so the 
script could convert it and export the result as a single .itp file.


best
Albert


Re: [gmx-users] how to convert CGenFF into .itp file?

2012-12-26 Thread Albert

On 12/26/2012 07:53 PM, David van der Spoel wrote:
Hey, it's open source. Let us know how it goes 


you can simply create an account and log in:

https://www.paramchem.org/

After you log in, click "upload molecule" in the left panel. You will 
then see the option:


"Include parameters that are already in CGenFF"

Tick this option and the server will generate a full version of the 
ligand topology, which is independent of the offline CGenFF. All we need 
then is to improve the script so that it converts this output into a 
single Gromacs .itp file. I think this would be the best solution.


Here is an example output for Methanol molecule from CGenFF


--example--

* Toppar stream file generated by
* CHARMM General Force Field (CGenFF) program version 0.9.6 beta
* For use with CGenFF version 2b7
*

read rtf card append
* Topologies generated by
* CHARMM General Force Field (CGenFF) program version 0.9.6 beta
*
36 1

! "penalty" is the highest penalty score of the associated parameters.
! Penalties lower than 10 indicate the analogy is fair; penalties between 10
! and 50 mean some basic validation is recommended; penalties higher than
! 50 indicate poor analogy and mandate extensive validation/optimization.

RESI 887                0.000 ! param penalty=   0.000 ; charge penalty=   0.000
GROUP            ! CHARGE   CH_PENALTY
ATOM O  OG311  -0.651 !    0.000
ATOM C  CG331  -0.039 !    0.000
ATOM H1 HGA3    0.090 !    0.000
ATOM H2 HGA3    0.090 !    0.000
ATOM H3 HGA3    0.090 !    0.000
ATOM H4 HGP1    0.420 !    0.000

BOND O  C
BOND O  H4
BOND C  H1
BOND C  H2
BOND C  H3

END

read param card flex append
* Parameters generated by analogy by
* CHARMM General Force Field (CGenFF) program version 0.9.6 beta
*

! Penalties lower than 10 indicate the analogy is fair; penalties between 10
! and 50 mean some basic validation is recommended; penalties higher than
! 50 indicate poor analogy and mandate extensive validation/optimization.

BONDS
CG331  OG311   428.00     1.4200 ! PROT methanol vib fit EMB 11/21/89
CG331  HGA3    322.00     1.1110 ! PROT alkane update, adm jr., 3/2/92
OG311  HGP1    545.00     0.9600 ! PROT EMB 11/21/89 methanol vib fit; og tested on MeOH EtOH,...

ANGLES
OG311  CG331  HGA3     45.90    108.89 ! PROT MeOH, EMB, 10/10/89
HGA3   CG331  HGA3     35.50    108.40    5.40   1.80200 ! PROT alkane update, adm jr., 3/2/92
CG331  OG311  HGP1     57.50    106.00 ! Team Sugar, HCP1M OC311M CC331M; unchanged

DIHEDRALS
HGA3   CG331  OG311  HGP1   0.1800  3 0.00 ! og methanol

IMPROPERS

END
RETURN
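
To illustrate what such a converter would have to do, here is a minimal, 
hypothetical Python sketch that turns the ATOM lines of a stream file 
like the one above into the [ atoms ] section of a Gromacs .itp file. 
The function name, parsing, numbering, and charge-group handling are all 
simplified illustrations; a real converter would also emit [ bonds ], 
pairs, and the atom types.

```python
import re

def stream_atoms_to_itp(stream_text, resname="LIG"):
    """Turn ATOM lines of a CGenFF stream file into a minimal Gromacs
    [ atoms ] section: nr, type, resnr, residue, atom, cgnr, charge.
    Hypothetical sketch; charge groups are collapsed to one atom each."""
    rows = []
    for line in stream_text.splitlines():
        m = re.match(r"ATOM\s+(\S+)\s+(\S+)\s+(-?\d+\.\d+)", line)
        if m:
            name, atype, charge = m.group(1), m.group(2), float(m.group(3))
            nr = len(rows) + 1
            rows.append(f"{nr:6d} {atype:>8s} 1 {resname:>6s} "
                        f"{name:>6s} {nr:6d} {charge:8.3f}")
    return "[ atoms ]\n" + "\n".join(rows)

# Demo on a few ATOM lines shaped like the methanol example above
demo = """ATOM O  OG311  -0.651
ATOM C  CG331  -0.039
ATOM H1 HGA3    0.090"""
print(stream_atoms_to_itp(demo, "MEOH"))
```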




[gmx-users] how to indicate solvent flexibility?

2013-01-08 Thread Albert

hello:

  I've finished a 60ns MD simulation with Gromacs and I found that the 
flexibility of the solvent molecules inside the protein is different 
when it binds with different ligands: i.e. in one case the solvent can 
move very fast with the bulk environment, and in the other case the 
solvent forms hydrogen bonds with residues inside the protein. Which 
module of Gromacs can I use to quantify this difference in solvent 
flexibility? Is it possible to calculate the entropy in a certain region 
(let's say: 20

thank you very much
best
Albert


[gmx-users] g_select error

2013-01-10 Thread Albert

hello:

 I am trying to use g_select to make an index file with command:


g_select_mpi -f md.xtc -s npt3.pdb -on density.ndx

but it failed with messages:

WARNING: Masses and atomic (Van der Waals) radii will be guessed
 based on residue and atom names, since they could not be
 definitively assigned from the information in your input
 files. These guessed numbers might deviate from the mass
 and radius of the atom type. Please check the output
 files if necessary.

Assertion failed for "g" in file 
/home/albert/Desktop/gromacs-4.6-beta3/src/gmxlib/sel

dump core ? (y/n)


thank you very much
Albert


Re: [gmx-users] how to indicate solvent flexibility?

2013-01-10 Thread Albert

Hello Justin and Leandro:

  thanks a lot for the kind advice. I am trying to use g_msd to 
calculate the density:


first I made an index file called density.ndx with g_select, defining 
the solvent within 6 A of a residue


after that I try to run g_msd with command:

g_msd_mpi -f md.xtc -s analysis.tpr -n density.ndx -mol diff_mol.xvg -o 
msd.xvg


a dialogue popped up after the above command:


.
Group 13963 (close_27926.000) has 7 elements
Group 13964 (close_27928.000) has 5 elements
Group 13965 (close_27930.000) has 7 elements
Group 13966 (close_27932.000) has 7 elements
Group 13967 (close_27934.000) has 7 elements
Group 13968 (close_27936.000) has 8 elements
Group 13969 (close_27938.000) has 9 elements
Group 13970 (close_27940.000) has 6 elements
Group 13971 (close_27942.000) has 9 elements
Group 13972 (close_27944.000) has 9 elements
Group 13973 (close_27946.000) has 9 elements
Group 13974 (close_27948.000) has 10 elements
Group 13975 (close_27950.000) has 9 elements
Group 13976 (close_27952.000) has 12 elements
Group 13977 (close_27954.000) has 10 elements
Group 13978 (close_27956.000) has 9 elements
Group 13979 (close_27958.000) has 10 elements
Group 13980 (close_27960.000) has 8 elements
Group 13981 (close_27962.000) has 10 elements
Group 13982 (close_27964.000) has 10 elements
Group 13983 (close_27966.000) has 7 elements
Group 13984 (close_27968.000) has 9 elements
Group 13985 (close_27970.000) has 8 elements
Group 13986 (close_27972.000) has 8 elements
Group 13987 (close_27974.000) has 7 elements
Group 13988 (close_27976.000) has 9 elements
Group 13989 (close_27978.000) has 6 elements
Group 13990 (close_27980.000) has 9 elements
Group 13991 (close_27982.000) has 8 elements
Group 13992 (close_27984.000) has 8 elements
Group 13993 (close_27986.000) has 11 elements
Group 13994 (close_27988.000) has 10 elements
Group 13995 (close_27990.000) has 11 elements
Group 13996 (close_27992.000) has 10 elements
Group 13997 (close_27994.000) has 11 elements
Group 13998 (close_27996.000) has 11 elements
Group 13999 (close_27998.000) has  9 elements
Group 14000 (close_28000.000) has 12 elements


I selected group 14000, which is the last one, but it failed with these messages:

Program g_msd_mpi, VERSION 4.5.5-dev-20121121-3e633d4
Source code file: /home/albert/software/gromacs/src/tools/gmx_msd.c, 
line: 739


Fatal error:
The index group does not consist of whole molecules
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"Can't You Make This Thing Go Faster ?" (Black Crowes)


thank you very much
Albert





On 01/08/2013 06:15 PM, Justin Lemkul wrote:



On 1/8/13 11:42 AM, Albert wrote:

hello:

   I've finished a 60ns MD simulation with Gromacs and I found that the
flexibility of solvent molecules inside the protein is different when 
it binds with different ligands: i.e. in one case the solvent can move 
very fast with the bulk environment, and in the other case the solvent 
forms hydrogen bonds with residues inside the protein. Which module of 
Gromacs can I use to quantify this difference in solvent flexibility? 
Is it possible to calculate the entropy in a certain region (let's say: 20

It sounds like g_rmsf and g_msd may be useful here.  The only way to 
specify geometric criteria for index groups is to use g_select, but 
then the analysis has to be done on each individual frame, not the 
trajectory.  Dynamic selections will be more conveniently implemented 
in a future Gromacs version.


-Justin





Re: [gmx-users] how to indicate solvent flexibility?

2013-01-10 Thread Albert

On 01/10/2013 11:14 AM, David van der Spoel wrote:

On 2013-01-10 10:45, Albert wrote:

Hello Justin and Leandro:

   thanks a lot for the kind advice. I am trying to use g_msd to
calculate the density:


try g_msd -h
wrong tool.


that's strange. Here is the information, which I think is what I want.


g_msd -h


DESCRIPTION
---
g_msd computes the mean square displacement (MSD) of atoms from a set of
initial positions. This provides an easy way to compute the diffusion
constant using the Einstein relation. The time between the reference points
for the MSD calculation is set with -trestart. The diffusion constant is
calculated by least squares fitting a straight line (D*t + c) through the
MSD(t) from -beginfit to -endfit (note that t is time from the reference
positions, not simulation time). An error estimate given, which is the
difference of the diffusion coefficients obtained from fits over the two
halves of the fit interval.

There are three, mutually exclusive, options to determine different types of
mean square displacement: -type, -lateral and -ten. Option -ten writes the
full MSD tensor for each group, the order in the output is: trace xx yy 
zz yx

zx zy.

If -mol is set, g_msd plots the MSD for individual molecules (including
making molecules whole across periodic boundaries): for each individual
molecule a diffusion constant is computed for its center of mass. The chosen
index group will be split into molecules.
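
The Einstein-relation fit that g_msd describes can be sketched in a few 
lines of Python. The synthetic trajectory below is constructed so that 
its MSD grows exactly as 2*D*t, which lets us check the fit; the 
function name and data are illustrative, not part of Gromacs.

```python
import numpy as np

def fit_diffusion_1d(t, x):
    """Einstein relation in 1D: MSD(t) = 2*D*t + c.
    Return D from a linear least-squares fit of MSD against t,
    mirroring what g_msd does between -beginfit and -endfit."""
    msd = (x - x[0]) ** 2          # squared displacement from t = 0
    slope, _ = np.polyfit(t, msd, 1)
    return slope / 2.0

# Synthetic trajectory whose MSD is exactly 2*D*t, with D = 0.05 nm^2/ps
D_true = 0.05
t = np.linspace(0.0, 100.0, 101)
x = np.sqrt(2.0 * D_true * t)
print(round(fit_diffusion_1d(t, x), 6))  # → 0.05
```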



Re: [gmx-users] GROMACS 4.6 release is ready!

2013-01-21 Thread Albert

How nice it is.

Cheers.

Albert


On 01/21/2013 09:09 AM, Mark Abraham wrote:

Hi GROMACS users,

The day is finally here - GROMACS 4.6 is out!

As you've probably heard by now, there are lots of wonderful new
performance features, including
* a native GPU implementation layer - thanks to some heroic work from
Szilard Pall and Berk Hess, with special thanks to Mark Berger & Duncan
Poole from NVIDIA for their excellent advice and support
* new implementation of Verlet kernels with guaranteed buffered
interactions, with very good energy conservation
* brand new classical nonbonded interaction kernels, supporting any of
SSE2, SSE4.1, AMD's 128-bit AVX with FMA support, or Intel's 256-bit AVX
SIMD acceleration, for 30-50% faster performance
* use of OpenMP for better intra-node scaling
* much improved automatic load balancing, including between direct-space
and PME nodes
* improvements to integration algorithms, including lots of new free energy
options

You can find the code, release notes, installation instructions and test
suite at the links below.

ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.6.tar.gz
http://www.gromacs.org/About_Gromacs/Release_Notes/Versions_4.6.x
http://www.gromacs.org/Documentation/Installation_Instructions
http://gromacs.googlecode.com/files/regressiontests-4.6.tar.gz

If you have downloaded a 4.6 tarball already, we encourage you to get the
latest one - a last-minute bug fix forced us to change plans.

PDF installation instructions, and an updated manual release will follow in
the coming week.

A 4.5.6 bug-fix release has been prepared, and all its fixes are present in
4.6. We're still preparing its release notes, and will announce the release
shortly.

For those of you using our git repository, please be advised that that
last-minute bug fix required us to move the tag for the 4.6 release. Your
repo will not change that for you automatically.

Happy simulating!

The GROMACS development team




[gmx-users] question about fftw3 in gromacs 4.6 installation

2013-01-21 Thread Albert

Hello:

 I am compiling the latest gromacs 4.6 with command:

cmake .. -DGMX_MPI=ON -DCMAKE_CXX_COMPILER=/soft/openmpi-1.4.3/bin/mpiCC 
-DCMAKE_C_COMPILER=//soft/openmpi-1.4.3/bin/mpicc 
-DCMAKE_INSTALL_PREFIX=/soft/gromacs4.6 -DGMX_GPU=OFF 
-DBUILD_SHARED_LIBS=OFF -DFFTW_INCLUDE_DIR=/soft/fftw-3.3.3/include


and I get the following messages. It seems that it doesn't pick up my 
own compiled FFTW but uses the system FFTW instead:


--
--   found fftw3f, version 3.1.2
-- Looking for fftwf_plan_r2r_1d in /usr/lib64/libfftw3f.so
-- Looking for fftwf_plan_r2r_1d in /usr/lib64/libfftw3f.so - found
-- Looking for fftwf_have_simd_avx in /usr/lib64/libfftw3f.so
-- Looking for fftwf_have_simd_avx in /usr/lib64/libfftw3f.so - not found
-- Looking for fftwf_have_simd_sse2 in /usr/lib64/libfftw3f.so
-- Looking for fftwf_have_simd_sse2 in /usr/lib64/libfftw3f.so - found
-- Looking for sgemm_
-- Looking for sgemm_ - found
-- Looking for cheev_
-- Looking for cheev_ - found
-- Checking for dlopen
-- Performing Test HAVE_DLOPEN
-- Performing Test HAVE_DLOPEN - Success
-- Checking for dlopen - found
-- Configuring done
-- Generating done
CMake Warning:
  Manually-specified variables were not used by the project:

FFTW_INCLUDE_DIR
--

What's more, I compiled fftw3 with options:
./configure --enable-sse --enable-float --with-pic 
--prefix=/soft/fftw-3.3.3 --enable-single --enable-static --enable-mpi


And I don't find libfftw3f.so in my installation directory:

ls /soft/fftw-3.3.3/lib
libfftw3f.a  libfftw3f.la  libfftw3f_mpi.a  libfftw3f_mpi.la pkgconfig

thank you very much
best
Albert



Re: [gmx-users] conversion of Gromacs trajectories (rhombic dodecahedric), and topologies to Amber format

2013-01-21 Thread Albert

probably you can try "catdcd"


On 01/21/2013 11:29 AM, Anna Marabotti wrote:

Dear gmx-users,
I followed the suggestions by Justin and Daniel to convert the 
trajectories, but still Amber does not recognize the correct format 
and complains about the fact that it does not find the correct box 
dimensions.
It seems that the two tools are quite incompatible, especially when 
the trajectory is not in the classic cubic format.
This is just for records since it is a recurrent query in the gmx-user 
archive, still apparently with no solution.

Many thanks in any case and best regards
Anna




Re: [gmx-users] question about fftw3 in gromacs 4.6 installation

2013-01-21 Thread Albert

On 01/21/2013 01:31 PM, Justin Lemkul wrote:
Your cmake command needs to use -DFFTWF_INCLUDE_DIR and 
-DFFTWF_LIBRARY to indicate the single-precision libraries (note that 
-DFFTW_INCLUDE_DIR and -DFFTWF_INCLUDE_DIR specify different things) 
or simply use -DCMAKE_PREFIX_PATH=/soft/fftw-3.3.3/ for convenience.


-Justin 


Hello Justin:

  thank you very much for kind comments.
  It works now.

best
Albert
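
[Following Justin's pointer, the working configure line might look like 
this — a sketch reusing the example paths from this thread; adjust them 
to your own installation:]

```shell
cmake .. -DGMX_MPI=ON \
  -DCMAKE_C_COMPILER=/soft/openmpi-1.4.3/bin/mpicc \
  -DCMAKE_CXX_COMPILER=/soft/openmpi-1.4.3/bin/mpiCC \
  -DCMAKE_INSTALL_PREFIX=/soft/gromacs4.6 \
  -DGMX_GPU=OFF -DBUILD_SHARED_LIBS=OFF \
  -DCMAKE_PREFIX_PATH=/soft/fftw-3.3.3   # lets CMake find the single-precision FFTW here
```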


[gmx-users] how can I make statics for Z-axis?

2013-01-21 Thread Albert

hello:

  I would like to compute statistics for an atom along the Z-axis. I am 
just wondering how I can do this in Gromacs?


thank you very much
best
Albert


Re: [gmx-users] how can I make statics for Z-axis?

2013-01-24 Thread Albert

HI Erik:

thanks a lot for kind advices, I will try it.

best
Albert

On 01/24/2013 03:00 PM, Erik Marklund wrote:

g_traj -nox -noy if I recall correctly.

On Jan 21, 2013, at 4:10 PM, Albert wrote:


hello:

 I would like to compute statistics for an atom along the Z-axis. I am just 
wondering how I can do this in Gromacs?


thank you very much
best
Albert
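
Once g_traj has extracted the z coordinates, the statistics themselves 
are trivial to compute. A hypothetical Python sketch, assuming the z 
values have already been read into a list (the sample numbers are 
illustrative):

```python
import statistics

def z_stats(z_values):
    """Mean, standard deviation, and range of an atom's z coordinate."""
    return {
        "mean": statistics.mean(z_values),
        "stdev": statistics.stdev(z_values),
        "min": min(z_values),
        "max": max(z_values),
    }

z = [1.0, 1.2, 0.8, 1.1, 0.9]  # hypothetical z coordinates in nm
s = z_stats(z)
print(round(s["mean"], 3))  # → 1.0
```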






[gmx-users] make_ndx error

2013-01-26 Thread Albert

Hello:

 I am using make_ndx to make an index file in Gromacs 4.6,

make_ndx -f input.pdb

 but it said:


Copied index group 1 'Protein'
Copied index group 25 'Water_and_ions'
One of your groups is not ascending
Group is empty

thank you very much
best
Albert


Re: [gmx-users] make_ndx error

2013-01-26 Thread Albert

On 01/26/2013 06:53 PM, Justin Lemkul wrote:


What exactly did you enter at the make_ndx prompt?

-Justin


1|25

protein, water and ions




Re: [gmx-users] make_ndx error

2013-01-26 Thread Albert

On 01/26/2013 07:41 PM, Justin Lemkul wrote:


What types of ions do you have?  I can reproduce this problem for a 
protein with ions bound to it, which are numbered discontinuously with 
water and ions in solution.


-Justin


thank you for kind reply.

I only have Na+ and Cl-.

best
Albert


Re: [gmx-users] make_ndx error

2013-01-26 Thread Albert

On 01/26/2013 07:51 PM, Justin Lemkul wrote:


Can you please post the following:

1. The groups printed in the make_ndx prompt
2. The output of gmxcheck on an index file created from your 
coordinate file (created simply by typing 'q' at the prompt, i.e. not 
creating any special groups)


-Justin



make_ndx -f sys.pdb

  0 System          : 59870 atoms
  1 Protein         :  4746 atoms
  2 Protein-H       :  2329 atoms
  3 C-alpha         :   292 atoms
  4 Backbone        :   877 atoms
  5 MainChain       :  1170 atoms
  6 MainChain+Cb    :  1453 atoms
  7 MainChain+H     :  1455 atoms
  8 SideChain       :  3291 atoms
  9 SideChain-H     :  1159 atoms
 10 Prot-Masses     :  4746 atoms
 11 non-Protein     : 55124 atoms
 12 Other           : 18766 atoms
 13 NMA             :     6 atoms
 14 POPC            : 18760 atoms
 15 CL              :    39 atoms
 16 NA              :    34 atoms
 17 Ion             :    73 atoms
 18 NMA             :     6 atoms
 19 POPC            : 18760 atoms
 20 CL              :    39 atoms
 21 NA              :    34 atoms
 22 Water           : 36285 atoms
 23 SOL             : 36285 atoms
 24 non-Water       : 23585 atoms
 25 Water_and_ions  : 36358 atoms



gmxcheck_mpi -f md.xtc -n index.ndx

Item         #frames  Timestep (ps)
Step         126      0.1
Time         126      0.1
Lambda       0
Coords       126      0.1
Velocities   0
Forces       0
Box          126      0.1

Contents of index file index.ndx
--------------------------------
Nr.   Group           #Entries   First    Last
  0   System             59870       1   59870
  1   Protein             4746       1    4752
  2   Protein-H           2329       1    2332
  3   C-alpha              292       8    2323
  4   Backbone             877       2    2324
  5   MainChain           1170       2    2325
  6   MainChain+Cb        1453       2    2326
  7   MainChain+H         1455       2    4751
  8   SideChain           3291       1    4752
  9   SideChain-H         1159       1    2332
 10   Prot-Masses         4746       1    4752
 11   non-Protein        55124    4745   59870
 12   Other              18766    4745   23512
 13   NMA                    6    4745    4750
 14   POPC               18760    4753   23512
 15   CL                    39   23513   23585
 16   NA                    34   23518   23551
 17   Ion                   73   23513   23585
 18   NMA                    6    4745    4750
 19   POPC               18760    4753   23512
 20   CL                    39   23513   23585
 21   NA                    34   23518   23551
 22   Water              36285   23586   59870
 23   SOL                36285   23586   59870
 24   non-Water          23585       1   23585
 25   Water_and_ions     36358   23586   23585
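
[Editor's note: the listing above shows NMA, POPC, CL and NA each appearing twice, which can make selecting groups by name ambiguous. A minimal sketch for spotting such duplicates, assuming the standard "[ name ]" section headers of .ndx files:]

```shell
# Print index-group names that occur more than once in a GROMACS .ndx file.
# Sketch only; assumes each group starts with a "[ name ]" header line.
duplicate_groups() {
    grep -o '^\[ .* \]' "$1" | sort | uniq -d
}
```

For the index file above, `duplicate_groups index.ndx` should list the four duplicated group headers.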




Re: [gmx-users] Using AMBER FF with GROMACS

2013-02-06 Thread Albert

On 02/06/2013 01:15 PM, Berk Hess wrote:

Hi,

All AMBER force fields in Gromacs which are also available in AMBER have been 
validated against energies from the AMBER package.

Cheers,

Berk


How about the latest Amber 12 SB FF? When will it be available in Gromacs?
And also the latest CHARMM36 FF for proteins? Currently, there is only the 
CHARMM36 FF for lipids. It seems that the CHARMM36 protein FF introduces 
the NBFIX term, which is absent from all previous versions of CHARMM, so 
this will probably take some time to be introduced into Gromacs.


regards
Albert


[gmx-users] g_membed deprecated?

2013-02-06 Thread Albert

hello:

 I am trying to build a membrane system with g_membed, but it said:

Back Off! I just backed up membed.dat to ./#membed.dat.2#
You can membed your protein now by:
mdrun -s input.tpr -membed membed.dat -o traj.trr -c membed.pdb -e 
ener.edr -nt 1 -cpt -1 -mn index.ndx -mp merged.top -v -stepout 100

Please cite:
Wolf et al, J Comp Chem 31 (2010) 2169-2174.


Does this mean that g_membed is deprecated as of Gromacs 4.6 and we must 
use mdrun instead?


THX
Albert


Re: [gmx-users] g_membed deprecated?

2013-02-06 Thread Albert

On 02/06/2013 05:37 PM, Namita Dube wrote:

Hi,
There must be some kind of problem with your system.
Have you tried using:
mdrun -s input.tpr -membed membed.dat -o traj.trr -c membed.pdb -e ener.edr
-nt 1 -cpt -1 -mn index.ndx -mp merged.top -v -stepout 100
What does it say?

Thanks.



It stopped with an error:

Not enough space for XTC?



However, when I run it with g_membed in Gromacs 4.5.6, it finishes fine.

best
Albert


[gmx-users] problems for GPU simulations

2013-02-07 Thread Albert

Hello:

 I have a workstation with two GTX 590s, each of which carries two GPU 
dies. I can submit Gromacs GPU jobs with the command:


mpirun -np 4 mdrun ...

With this, I get 26 ns/day with the Gromacs 4.6 beta.

However, I found that the final Gromacs 4.6 release (the latest 
version) claims that I only have two GPUs and asks me to adjust -np to 2.


So I submit the jobs with the command:

mpirun -np 2 mdrun ...

For the same system with the same parameters (of course the .tpr file 
had to be regenerated), I get only about half the speed, 
around 10 ns/day.



So I am just wondering what's happening?

thank you very much
best
Albert


Re: [gmx-users] problems for GPU simulations

2013-02-07 Thread Albert

On 02/07/2013 11:03 AM, James Starlight wrote:

Hi Albert!


If I understood you correctly, you could run simulations with your 2
GPU cards on the Gromacs beta but could not do it with the final version,
is that right?

Not really; both versions can run on GPUs. The 4.6 beta recognizes my 
number of GPUs as 4, but the final version as 2. And the beta is twice 
as efficient as the final version.





Could you tell me how you installed both GPUs in your workstation?
Did you use SLI? (I've heard that Gromacs does not support multi-GPU
simulation, so I'll be very happy if that's not true
:))

Both GPU versions were compiled with the same command:

cmake .. -DGMX_MPI=ON -DCMAKE_CXX_COMPILER=/soft/openmpi-1.4.3/bin/mpiCC 
-DCMAKE_C_COMPILER=/soft/openmpi-1.4.3/bin/mpicc 
-DCMAKE_INSTALL_PREFIX=/soft/gromacs4.6beta3 -DGMX_GPU=OFF 
-DBUILD_SHARED_LIBS=OFF -DCMAKE_PREFIX_PATH=/soft/fftw-3.3.3


I don't think I used SLI



Also, could you tell me about the configuration of your workstation in
more detail? (What CPU and motherboard do you use?)

There is 16 GB of memory, and the workstation has an Intel i7-960. I 
don't specify how many cores should be used, but both cases occupy the 
full CPU resources automatically.






James




Re: [gmx-users] problems for GPU simulations

2013-02-07 Thread Albert

On 02/07/2013 11:28 AM, James Starlight wrote:

Also, could you tell me your system's performance (in GFLOPS)
and what system you have simulated on it (average atom number,
presence of explicit membrane, etc.)?


It is around 55,000 atoms with an explicit membrane. I am using the 
Slipids FF, which is compatible with the Amber FF; this is convenient 
for ligand topologies.



Re: [gmx-users] problems for GPU simulations

2013-02-07 Thread Albert

On 02/07/2013 01:34 PM, Szilárd Páll wrote:

Please make sure that nvidia-smi or the deviceQuery SDK tool shows all
four GPUs. If that is the case and mdrun still shows only two, please file
a bug report with your OS info and a log file attached.

Cheers,
--
Szilárd


No, it showed two. I don't know why the beta version recognizes 4 GPUs 
but the final version only 2. The fact is that the beta version, run 
with -np 4, gets double the speed. The GTX 590 carries two GPU dies, so 
two cards give 4 devices.



here is the output of nvidia-smi:

Thu Feb  7 17:27:10 2013
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 2.285.05                 Driver Version: 285.05.33               |
|-------------------------------+----------------------+----------------------+
| Nb.  Name                     | Bus Id        Disp.  | Volatile ECC SB / DB |
| Fan   Temp   Power Usage /Cap | Memory Usage         | GPU Util. Compute M. |
|===============================+======================+======================|
| 0.  GeForce GTX 590           | :0C:00.0  N/A        | N/A        N/A       |
|   0%   55 C  N/A   N/A /  N/A |  22%  336MB / 1535MB |  N/A      Default    |
|-------------------------------+----------------------+----------------------|
| 1.  GeForce GTX 590           | :0B:00.0  N/A        | N/A        N/A       |
|  43%   57 C  N/A   N/A /  N/A |   0%    5MB / 1535MB |  N/A      Default    |
|-------------------------------+----------------------+----------------------|
| Compute processes:                                            GPU Memory    |
|  GPU  PID  Process name                                       Usage         |
|=============================================================================|
|  0.        ERROR: Not Supported                                             |
|  1.        ERROR: Not Supported                                             |
+-----------------------------------------------------------------------------+




here is the log for mdrun:


Program mdrun_mpi, VERSION 4.6
Source code file: 
/home/albert/Documents/2013-02-06/gromacs-4.6/src/gmxlib/gmx_detect_hardware.c, 
line: 356


Fatal error:
Incorrect launch configuration: mismatching number of PP MPI processes 
and GPUs per node.
mdrun_mpi was started with 4 PP MPI processes per node, but only 2 GPUs 
were detected.

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"I Like You. I Will Kill You Last" (Tyler in Fishtank)

Error on node 0, will try to stop all the nodes
Halting parallel program mdrun_mpi on CPU 0 out of 4
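
[Editor's note: one way to keep the launch configuration consistent is to derive -np from the detected device count. A minimal sketch; the "GPU n: ..." line format of `nvidia-smi -L` and the mdrun_mpi flags are assumptions, not taken from this thread:]

```shell
# Count CUDA devices from `nvidia-smi -L`-style output (one "GPU n: ..."
# line per device), so mpirun can start one PP rank per detected GPU.
count_gpus() {
    grep -c '^GPU ' "$1"
}
# Usage sketch (assumed flags):
#   nvidia-smi -L > /tmp/gpus.txt
#   mpirun -np "$(count_gpus /tmp/gpus.txt)" mdrun_mpi -s md.tpr
```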



Re: [gmx-users] problems for GPU simulations

2013-02-08 Thread Albert

Hi:

 thanks for kind comments.
 It works fine now after I recompiled Gromacs carefully.

best
Albert

On 02/08/2013 03:43 AM, Szilárd Páll wrote:

Hi,

If you have two GTX 590s, four devices should show up in nvidia-smi, and
mdrun should also report four devices detected. Since nvidia-smi shows
only two GPUs, one of your cards is not functioning properly.

You can check which GPU devices your operating system "sees",
independently of the driver, using the lspci command, e.g.:
lspci | grep -i ".*VGA.*NVIDIA.*"

If you see two PCI devices in this output, both cards are detected by the
operating system. If nvidia-smi then does not show all four GPUs, there
must be something wrong with your driver.

Cheers,

--
Szilárd




[gmx-users] 4.6 seems improved the efficiency

2013-02-09 Thread Albert

Hello :

 I found that the newly released 4.6 has very noticeably improved 
efficiency. With the same system (144 CPUs, 55,000 atoms, Amber FF), 
4.5.5 got 26 ns/day with g_tune_pme optimization. Now with 4.6, even 
without g_tune_pme, the PME mesh/force load ratio is 0.8-1.0 and I get 
32 ns/day. That's really nice.


I don't know whether other users have had similar experiences with this 
new version.


Albert



[gmx-users] velocity was not present from trjconv

2013-02-12 Thread Albert

Hello:

 I am using Gromacs 4.6, and I extracted one of my frames into a .gro 
file with the command:


trjconv_mpi -f md.xtc -s md.tpr -dump 25000 -o md.gro

I found that velocity information is not present in this 
25 ns md.gro file:



Generated by trjconv : Protein t= 25000.0
54178
    1TYR      N    1   1.696   3.993   8.140
    1TYR     H1    2   1.658   4.068   8.196
    1TYR     H2    3   1.719   4.031   8.050
    1TYR     H3    4   1.630   3.919   8.122
    1TYR     CA    5   1.822   3.938   8.210
    1TYR     HA    6   1.865   4.020   8.268
    1TYR     CB    7   1.790   3.814   8.300
    1TYR    HB1    8   1.883   3.770   8.336
    1TYR    HB2    9   1.764   3.732   8.233
    1TYR     CG   10   1.696   3.845   8.416
    1TYR    CD1   11   1.745   3.930   8.520
    1TYR    HD1   12   1.847   3.967   8.520
...

Does anybody know what happened? The final md output md.gro file does 
contain velocity information:


Protein
54178
    1TYR      N    1   1.747   4.039   8.153 -0.1860  0.0829  0.0094
    1TYR     H1    2   1.694   4.084   8.226  2.0932 -0.4724  2.1614
    1TYR     H2    3   1.804   4.115   8.117 -1.6223  0.9380 -0.5483
    1TYR     H3    4   1.682   4.001   8.086  0.6678 -0.6031 -0.4474
    1TYR     CA    5   1.826   3.927   8.204 -0.5364 -0.8546 -0.6273
    1TYR     HA    6   1.892   3.958   8.285  0.3239 -2.3918 -0.7049
    1TYR     CB    7   1.732   3.813   8.243  0.4625 -0.6033 -0.2530
    1TYR    HB1    8   1.791   3.725   8.265 -1.7147 -1.9740  0.4749
    1TYR    HB2    9   1.660   3.783   8.166 -0.2485  2.0549 -0.7384
    1TYR     CG   10   1.663   3.842   8.377 -0.0029  0.0693 -0.2885
    1TYR    CD1   11   1.739   3.893   8.481 -0.5325 -0.8010 -0.1664
    1TYR    HD1   12   1.841   3.925   8.465  0.0708 -2.2306  0.7255
    1TYR    CE1   13   1.680   3.925   8.606  0.5957  0.0504 -0.4752
    1TYR    HE1   14   1.728   3.974   8.689  1.1798 -3.6443  1.5406


THX

Albert


Re: [gmx-users] velocity was not present from trjconv

2013-02-12 Thread Albert

On 02/12/2013 03:19 PM, Justin Lemkul wrote:
Velocities are not stored in .xtc files.  They are stored in .trr 
files, if nstvout != 0 in the .mdp file.


-Justin 


Hi Justin:

 thanks for the kind comments. I used the following settings, so I didn't 
generate a .trr file:


nstxout= 0; Write coordinates to output .trr file every 2 ps
nstvout= 0; Write velocities to output .trr file every 2 ps
nstfout= 0
nstxtcout = 2
nstenergy= 1; Write energies to output .edr file every 2 ps
nstlog= 1; Write output to .log file every 2 ps


probably that's the reason why it didn't have velocity information... 
My MD is still running; I am just wondering, is there any way to extract 
the last snapshot into a .gro file with velocity information?


thanks

best
Albert


Re: [gmx-users] velocity was not present from trjconv

2013-02-12 Thread Albert

On 02/12/2013 03:28 PM, Justin Lemkul wrote:

Extract it from the .cpt file that corresponds to that frame.

-Justin 


thanks a lot for such helpful comments. I found that the md production 
produced two .cpt files:


 state.cpt
state_prev.cpt

I am not sure which one is the one I need... Do you have any idea 
about this?


thanks again for kind helps.

best
Albert




Re: [gmx-users] velocity was not present from trjconv

2013-02-12 Thread Albert

On 02/12/2013 03:33 PM, Justin Lemkul wrote:




gmxcheck is your friend, as well as the wiki.

http://www.gromacs.org/Documentation/File_Formats/Checkpoint_File

A checkpoint file is always written at the last step of the 
simulation, which seems to be what you were asking for previously.


-Justin



I see, that's really helpful.

thanks a lot

Albert


[gmx-users] can we schedule it?

2013-02-12 Thread Albert

Hello:

 I have a question about the .mdp settings for MD production runs. The 
.trr file gets really huge if we run longer MD simulations. In that 
case, I usually only generate the .xtc file, but then the velocities 
are missing for all steps except the last one.


 So I am just wondering: can I specify some parameters in the .mdp file 
so that Gromacs exports velocity information every 
20 ns?
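
[Editor's note: there is no .mdp option that writes .gro files directly, but velocities can be written periodically to the full-precision .trr file via nstvout. A sketch only, assuming dt = 0.002 ps so that 20 ns = 10,000,000 steps:]

```
; sketch: write coordinates + velocities to .trr every 20 ns
; (10,000,000 steps at an assumed dt = 0.002 ps)
dt        = 0.002
nstxout   = 10000000    ; .trr coordinates every 20 ns
nstvout   = 10000000    ; .trr velocities every 20 ns
nstfout   = 0
nstxtcout = 2           ; compressed .xtc output as before
```

A .trr frame can then be converted to a .gro with velocities using trjconv (the -vel option controls whether velocities are written when present).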


thank you very much
best
Albert


Re: [gmx-users] GPU version of GROMACS 4.6 in MacOS cluster

2013-03-01 Thread Albert

The easiest solution is to drop MacOS and switch to Linux.

;-)

Albert


On 03/01/2013 06:03 PM, Szilárd Páll wrote:

Hi George,

As I said before, that just means that most probably the GPU driver is not
compatible with the CUDA runtime (libcudart) that you installed with the
CUDA toolkit. I've no clue about the Mac OS installers and releases, you'll
have to do the research on that. Let us know if you have further
(GROMACS-related) issues.

Cheers,

--
Szilárd




Re: [gmx-users] GROMACS 4.6.1 released

2013-03-06 Thread Albert

Hello:

 I am wondering whether the force fields were updated in this new 
version, e.g. was CHARMM36 for proteins included, or CHARMM36 for 
lipids updated?


thank you very much
Albert


On 03/05/2013 08:14 PM, Mark Abraham wrote:

Hi GROMACS users,

GROMACS 4.6.1 is officially released. It contains numerous bug fixes, some
simulation performance enhancements and some documentation updates. We
encourage all users to upgrade their installations from 4.6.

You can find the code, manual, release notes, installation instructions and
test
suite at the links below.

ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.6.1.tar.gz
ftp://ftp.gromacs.org/pub/manual/manual-4.6.1.pdf
http://www.gromacs.org/About_Gromacs/Release_Notes/Versions_4.6.1.x
http://www.gromacs.org/Documentation/Installation_Instructions
http://gromacs.googlecode.com/files/regressiontests-4.6.1.tar.gz

Happy simulating!

The GROMACS development team



