[gmx-users] RE: MPI runs on a local computer

2013-09-20 Thread Xu, Jianqing
Hi,

It looks like my questions may have been too detailed. I hope someone can offer 
some suggestions. If there is a more appropriate list where I should ask these 
questions, I would appreciate it if anyone could let me know.

Thanks again,

Jianqing



-Original Message-
From: gmx-users-boun...@gromacs.org [mailto:gmx-users-boun...@gromacs.org] On 
Behalf Of Xu, Jianqing
Sent: 19 September 2013 13:49
To: gmx-users@gromacs.org
Subject: [gmx-users] MPI runs on a local computer


Dear all,

I am learning about parallelization from the instructions on the Gromacs 
website, and I think I now have a rough understanding of MPI, thread-MPI, and 
OpenMP. But I hope to get some advice about the correct way to run jobs.

Say I have a local desktop with 16 cores. If I just want to run jobs on one 
computer or a single node (but multiple cores), I understand that I don't have 
to install and use OpenMPI, since Gromacs already includes its own thread-MPI, 
which should be good enough to run jobs on one machine. However, for various 
reasons, OpenMPI is already installed on my machine, and I compiled Gromacs 
against it using the flag "-DGMX_MPI=ON". My questions are:


1.   Can I still use this executable (mdrun_mpi, built with the OpenMPI 
library) to run multi-core jobs on my local desktop? Or is the default 
thread-MPI actually a better option for a single computer or single node (but 
multiple cores), for whatever reason?

2.   Assuming I can still use this executable, let's say I want to use half of 
the cores (8 cores) on my machine to run a job:

mpirun -np 8 mdrun_mpi -v -deffnm md

a). Since I am not using all the cores, do I still need to "lock" (pin) the 
physical cores for better performance, something like "-nt" for thread-MPI? Or 
is that not necessary?

b). For running jobs on a local desktop or a single node with, say, 16 cores, 
or even 64 cores, should I turn off the "separate PME nodes" (-npme 0)? Or is 
it better to leave that as is?

3.   If I want to run two different projects on my local desktop, say one 
project taking 8 cores and the other taking 4 cores (assuming I have enough 
memory), I just submit the jobs twice on my desktop:

nohup mpirun -np 8 mdrun_mpi -v -deffnm md1 >& log1&

nohup mpirun -np 4 mdrun_mpi -v -deffnm md2 >& log2 &

Will this be acceptable? Will the two jobs compete for resources and 
eventually hurt performance?

Sorry for so many detailed questions, but your help with this will be highly 
appreciated!

Thanks a lot,

Jianqing



Re: [gmx-users] MPI runs on a local computer

2013-09-20 Thread Carsten Kutzner
Hi Jianqing,

On Sep 19, 2013, at 2:48 PM, "Xu, Jianqing"  wrote:
> Say I have a local desktop with 16 cores. If I just want to run jobs on one 
> computer or a single node (but multiple cores), I understand that I don't 
> have to install and use OpenMPI, since Gromacs already includes its own 
> thread-MPI, which should be good enough to run jobs on one machine. However, 
> for various reasons, OpenMPI is already installed on my machine, and I 
> compiled Gromacs against it using the flag: "-DGMX_MPI=ON". My questions are:
> 
> 
> 1.   Can I still use this executable (mdrun_mpi, built with the OpenMPI 
> library) to run multi-core jobs on my local desktop? Or is the default 
> thread-MPI actually a better option for a single computer or single node 
> (but multiple cores), for whatever reason?
You can use either OpenMPI or Gromacs' built-in thread-MPI library. If you only 
want to run on a single machine, I would recommend recompiling with thread-MPI, 
because in many cases it is a bit faster.

> 2.   Assuming I can still use this executable, let's say I want to use half 
> of the cores (8 cores) on my machine to run a job:
> 
> mpirun -np 8 mdrun_mpi -v -deffnm md
> 
> a). Since I am not using all the cores, do I still need to "lock" (pin) the 
> physical cores for better performance, something like "-nt" for thread-MPI? 
> Or is that not necessary?
That depends on whether you get good scaling or not. Compared to a run on 1 
core, for large systems the 4- or 8-core parallel runs should be (nearly) 4 or 
8 times as fast. If that is the case, you do not need to worry about pinning.
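For example, a quick scaling check could look like this (an illustrative 
sketch; the file names and the -maxh time limit are placeholders):

mpirun -np 1 mdrun_mpi -s md.tpr -deffnm bench_np1 -maxh 0.1
mpirun -np 8 mdrun_mpi -s md.tpr -deffnm bench_np8 -maxh 0.1

Then compare the ns/day reported at the end of bench_np1.log and bench_np8.log.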

> 
> b). For running jobs on a local desktop or a single node with, say, 16 cores, 
> or even 64 cores, should I turn off the "separate PME nodes" (-npme 0)? Or is 
> it better to leave that as is?
You may want to check with g_tune_pme. Note that the optimum depends on your 
system, so you should determine it for each MD system.
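For example (a sketch; the 8 ranks and the md.tpr file name are assumptions 
here), g_tune_pme benchmarks different numbers of PME-only ranks and can then 
launch the best-performing setting directly:

g_tune_pme -np 8 -s md.tpr -launch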

> 
> 3.   If I want to run two different projects on my local desktop, say one 
> project taking 8 cores and the other taking 4 cores (assuming I have enough 
> memory), I just submit the jobs twice on my desktop:
> 
> nohup mpirun -np 8 mdrun_mpi -v -deffnm md1 >& log1&
> 
> nohup mpirun -np 4 mdrun_mpi -v -deffnm md2 >& log2 &
> 
> Will this be acceptable? Will the two jobs compete for resources and 
> eventually hurt performance?
Make some quick test runs (over a couple of minutes). Then you can compare the 
performance of your 8-core run with and without another simulation running.

Best,
  Carsten

> 
> Sorry for so many detailed questions, but your help on this will be highly 
> appreciated!
> 
> Thanks a lot,
> 
> Jianqing


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] Fatal Error: Residue 'DMP' not found in residue topology database

2013-09-20 Thread Justin Lemkul



On 9/20/13 2:28 AM, Santhosh Kumar Nagarajan wrote:

Hi guys,
  The error I'm getting is as follows
"All occupancies are one
Opening force field file
/usr/local/gromacs/share/gromacs/top/oplsaa.ff/atomtypes.atp
Atomtype 1
Reading residue database... (oplsaa)
Opening force field file
/usr/local/gromacs/share/gromacs/top/oplsaa.ff/aminoacids.rtp
Residue 56
Sorting it all out...
Opening force field file
/usr/local/gromacs/share/gromacs/top/oplsaa.ff/aminoacids.hdb
Opening force field file
/usr/local/gromacs/share/gromacs/top/oplsaa.ff/aminoacids.n.tdb
Opening force field file
/usr/local/gromacs/share/gromacs/top/oplsaa.ff/aminoacids.c.tdb
Processing chain 1 'A' (46 atoms, 1 residues)
There are 0 donors and 0 acceptors
There are 0 hydrogen bonds
Warning: Starting residue DMP1 in chain not identified as Protein/RNA/DNA.
Problem with chain definition, or missing terminal residues.
This chain does not appear to contain a recognized chain molecule.
If this is incorrect, you can edit residuetypes.dat to modify the behavior.
8 out of 8 lines of specbond.dat converted successfully

---
Program pdb2gmx, VERSION 4.5.3
Source code file: resall.c, line: 581

Fatal error:
Residue 'DMP' not found in residue topology database
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---"

And this is the command I used

pdb2gmx -f dmpc.pdb -o processed.gro -water spce -ignh

Force field : OPLS-AA/L all-atom force field (2001 aminoacid dihedrals)



http://www.gromacs.org/Documentation/Errors#Residue_'XXX'_not_found_in_residue_topology_database

-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


[gmx-users] Re: grompp for minimization: note & warning

2013-09-20 Thread shahab shariati
Dear Tsjerk

Thanks for your reply

Before correcting the gro file, I knew that the gro file has a fixed format, 
and I made the correction very carefully.

Part of the gro file before and after correction is as follows:

-
before:
-
   14DOPCN4  755   0.260   1.726   6.354
   14DOPCC5  756   0.263   1.741   6.204
   14DOPCC1  757   0.136   1.777   6.423
   14DOPCC2  758   0.279   1.580   6.384
   14DOPCC3  759   0.383   1.799   6.403
   14DOPCC6  760   0.386   1.685   6.132
   14DOPCP8  761   0.628   1.683   6.064
   14DOPC   OM9  762   0.640   1.548   6.123
   14DOPC  OM10  763   0.747   1.771   6.072
   14DOPC   OS7  764   0.511   1.755   6.145
   14DOPC  OS11  765   0.576   1.681   5.913
   14DOPC   C12  766   0.591   1.806   5.845
   14DOPC   C13  767   0.470   1.901   5.846
   14DOPC  OS14  768   0.364   1.830   5.782
   14DOPC   C15  769   0.247   1.869   5.833
   14DOPC   O16  770   0.238   1.946   5.927
   14DOPC   C17  771   0.123   1.815   5.762
   14DOPC   C34  772   0.490   2.037   5.777
   14DOPC  OS35  773   0.541   2.029   5.644
   14DOPC   C36  774   0.591   2.142   5.593
   14DOPC   O37  775   0.595   2.252   5.646
   14DOPC   C38  776   0.674   2.092   5.476
   14DOPC   C18  777  -0.004   1.897   5.786
   14DOPC   C19  778  -0.138   1.837   5.744
   14DOPC   C20  779  -0.147   1.817   5.593
   14DOPC   C21  780  -0.196   1.678   5.552
   14DOPC   C22  781  -0.181   1.637   5.406
   14DOPC   C23  782  -0.252   1.722   5.301
   14DOPC   C24  783  -0.241   1.664   5.163
   14DOPC   C25  784  -0.267   1.738   5.054
   14DOPC   C26  785  -0.312   1.881   5.044
   14DOPC   C27  786  -0.368   1.918   4.907
   14DOPC   C28  787  -0.266   1.941   4.795
   14DOPC   C29  788  -0.324   2.015   4.674
   14DOPC   C30  789  -0.377   1.920   4.567
   14DOPC   C31  790  -0.377   1.984   4.428
   14DOPC   C32  791  -0.439   1.894   4.321
   14DOPC   C33  792  -0.358   1.890   4.191
   14DOPC   C39  793   0.818   2.145   5.475
   14DOPC   C40  794   0.906   2.056   5.387
   14DOPC   C41  795   1.042   2.123   5.364
   14DOPC   C42  796   1.160   2.029   5.339
   14DOPC   C43  797   1.136   1.965   5.202
   14DOPC   C44  798   1.261   1.897   5.146
   14DOPC   C45  799   1.314   1.786   5.232
   14DOPC   C46  800   1.319   1.658   5.194
   14DOPC   C47  801   1.274   1.602   5.062
   14DOPC   C48  802   1.316   1.457   5.038
   14DOPC   C49  803   1.266   1.407   4.902
   14DOPC   C50  804   1.338   1.469   4.782
   14DOPC   C51  805   1.307   1.406   4.646
   14DOPC   C52  806   1.160   1.394   4.607
   14DOPC   C53  807   1.119   1.442   4.468
   14DOPC   C54  808   0.980   1.407   4.414
-
after:
-
   14DOPCC1  755   0.136   1.777   6.423
   14DOPCC2  756   0.279   1.580   6.384
   14DOPCC3  757   0.383   1.799   6.403
   14DOPCN4  758   0.260   1.726   6.354
   14DOPCC5  759   0.263   1.741   6.204
   14DOPCC6  760   0.386   1.685   6.132
   14DOPC   OS7  761   0.511   1.755   6.145
   14DOPCP8  762   0.628   1.683   6.064
   14DOPC   OM9  763   0.640   1.548   6.123
   14DOPC  OM10  764   0.747   1.771   6.072
   14DOPC  OS11  765   0.576   1.681   5.913
   14DOPC   C12  766   0.591   1.806   5.845
   14DOPC   C13  767   0.470   1.901   5.846
   14DOPC  OS14  768   0.364   1.830   5.782
   14DOPC   C15  769   0.247   1.869   5.833
   14DOPC   O16  770   0.238   1.946   5.927
   14DOPC   C17  771   0.123   1.815   5.762
   14DOPC   C18  772  -0.004   1.897   5.786
   14DOPC   C19  773  -0.138   1.837   5.744
   14DOPC   C20  774  -0.147   1.817   5.593
   14DOPC   C21  775  -0.196   1.678   5.552
   14DOPC   C22  776  -0.181   1.637   5.406
   14DOPC   C23  777  -0.252   1.722   5.301
   14DOPC   C24  778  -0.241   1.664   5.163
   14DOPC   C25  779  -0.267   1.738   5.054
   14DOPC   C26  780  -0.312   1.881   5.044
   14DOPC   C27  781  -0.368   1.918   4.907
   14DOPC   C28  782  -0.266   1.941   4.795
   14DOPC   C29  783  -0.324   2.015   4.674
   14DOPC   C30  784  -0.377   1.920   4.567
   14DOPC   C31  785  -0.377   1.984   4.428
   14DOPC   C32  786  -0.439   1.894   4.321
   14DOPC   C33  787  -0.358   1.890   4.191
   14DOPC   C34  788   0.490   2.037   5.777
   14DOPC  OS35  789   0.541   2.029   5.644
   14DOPC   C36  790   0.591   2.142   5.593
   14DOPC   O37  791   0.595   2.252   5.646
   14DOPC   C38  792   0.674   2.092   5.476
   14DOPC   C39  793   0.818   2.145   5.475
   14DOPC   C40  794   0.906   2.056   5.387
   14DOPC   C41  795   1.042   2.123   5.364
   14DOPC   C42  796   1.160   2.029   5.339
   14DOPC   C43  797   1.136   1.965   5.202
   14DOPC   C44  798   1.261   1.897   5.146
   14DOPC   C45  799   1.314   1.786   5.232
   14DOPC   C46  800   1.319   1.658   5.194
   14DOPC   C47  801   1.274   1.602   5.062
   14DOPC   C48  802   1.316   1.457   5.038

Re: [gmx-users] MPI runs on a local computer

2013-09-20 Thread Mark Abraham
On Thu, Sep 19, 2013 at 2:48 PM, Xu, Jianqing  wrote:
>
> Dear all,
>
> I am learning about parallelization from the instructions on the Gromacs 
> website, and I think I now have a rough understanding of MPI, thread-MPI, and 
> OpenMP. But I hope to get some advice about the correct way to run jobs.
>
> Say I have a local desktop with 16 cores. If I just want to run jobs on one 
> computer or a single node (but multiple cores), I understand that I don't 
> have to install and use OpenMPI, since Gromacs already includes its own 
> thread-MPI, which should be good enough to run jobs on one machine. However, 
> for various reasons, OpenMPI is already installed on my machine, and I 
> compiled Gromacs against it using the flag: "-DGMX_MPI=ON". My questions are:
>
>
> 1.   Can I still use this executable (mdrun_mpi, built with the OpenMPI 
> library) to run multi-core jobs on my local desktop?

Yes

> Or is the default thread-MPI actually a better option for a single computer 
> or single node (but multiple cores), for whatever reason?

Yes - lower overhead.

> 2.   Assuming I can still use this executable, let's say I want to use half 
> of the cores (8 cores) on my machine to run a job:
>
> mpirun -np 8 mdrun_mpi -v -deffnm md
>
> a). Since I am not using all the cores, do I still need to "lock" (pin) the 
> physical cores for better performance, something like "-nt" for thread-MPI? 
> Or is that not necessary?

You will see improved performance if you set the thread affinity.
There is no advantage in allowing the threads to move.
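For example, with the MPI build you can ask mdrun to pin its threads to cores 
(an illustrative sketch; the -pin option is available in the 4.6 series):

mpirun -np 8 mdrun_mpi -pin on -v -deffnm md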

> b). For running jobs on a local desktop or a single node with, say, 16 cores, 
> or even 64 cores, should I turn off the "separate PME nodes" (-npme 0)? Or is 
> it better to leave that as is?

It depends, but it is usually best to use separate PME nodes. Try g_tune_pme, 
as Carsten suggests.
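Once you know a good split, you can request it explicitly, for example (the 4 
PME-only ranks here are just a placeholder for whatever g_tune_pme reports as 
optimal):

mpirun -np 16 mdrun_mpi -npme 4 -v -deffnm md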

> 3.   If I want to run two different projects on my local desktop, say one 
> project taking 8 cores and the other taking 4 cores (assuming I have enough 
> memory), I just submit the jobs twice on my desktop:
>
> nohup mpirun -np 8 mdrun_mpi -v -deffnm md1 >& log1&
>
> nohup mpirun -np 4 mdrun_mpi -v -deffnm md2 >& log2 &
>
> Will this be acceptable? Will the two jobs compete for resources and 
> eventually hurt performance?

That depends on how many cores you have. If you want to share a node between 
mdruns, you should specify how many (real- or thread-) MPI ranks each run uses 
and how many OpenMP threads per rank, arrange for one thread per core, and use 
mdrun -pin and -pinoffset suitably. You should expect near-linear scaling of 
each job when you are doing it right - but learn the behaviour of running one 
job per node first!
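A minimal sketch with the thread-MPI build of mdrun, assuming a 16-core node 
and one thread per core (the rank counts and pin offsets are illustrative):

nohup mdrun -ntmpi 8 -ntomp 1 -pin on -pinoffset 0 -v -deffnm md1 >& log1 &
nohup mdrun -ntmpi 4 -ntomp 1 -pin on -pinoffset 8 -v -deffnm md2 >& log2 &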

Mark

> Sorry for so many detailed questions, but your help on this will be highly 
> appreciated!
>
> Thanks a lot,
>
> Jianqing
>


Re: [gmx-users] Re: grompp for minimization: note & warning

2013-09-20 Thread Mark Abraham
The UNIX tool diff is your friend for comparing files.
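For example, assuming the two versions were saved as before.gro and after.gro 
(hypothetical file names), a unified diff shows exactly which lines changed:

diff -u before.gro after.gro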


Re: [gmx-users] Re: Charmm 36 forcefield with verlet cut-off scheme

2013-09-20 Thread Mark Abraham
Note that the group scheme does not reproduce the (AFAIK unpublished)
CHARMM switching scheme, either.
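For reference, the traditional CHARMM-style settings under the group scheme 
that Justin refers to would look roughly like this in an .mdp file (an 
illustrative sketch; the cutoff values are common choices, not taken from this 
thread, and should be checked against your own protocol):

cutoff-scheme    = group
vdwtype          = switch
rvdw-switch      = 1.0
rvdw             = 1.2
rlist            = 1.2
rcoulomb         = 1.2
coulombtype      = PME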

Mark

On Fri, Sep 20, 2013 at 4:26 AM, Justin Lemkul  wrote:
>
>
> On 9/19/13 9:55 PM, akk5r wrote:
>>
>> Thanks Justin. I was told that the "vdwtype = switch" was an essential
>> component of running Charmm36. Is that not the case?
>>
>
> It is, but I suppose one can achieve a similar effect with the Verlet
> scheme. You can certainly use the traditional CHARMM settings if you use the
> group scheme, instead.  The vdw-modifier setting should give you a
> comparable result, but I have never tried it myself.
>
>
> -Justin


[gmx-users] g_covar average.pdb calculation

2013-09-20 Thread Deniz Aydin
Dear All, 

I would like to know how g_covar calculates the average structure file 
(average.pdb).

My aim was actually to compute a covariance matrix, so I started off by writing 
my own code using the MDAnalysis package. I give psf and trajectory files as 
input and generate the coordinates for each frame; if I have 3 frames, I take 
the average of each coordinate element over those 3 frames. So for the 1st CA 
atom I have x, y, z values for 3 frames: I add x1, x2, x3 and divide by the 
number of frames, which gives the average x coordinate of the 1st CA atom. I do 
the same for y and z, and then for all CA atoms. For example, if x1, x2, x3 for 
CA1 are 49.5, 49.0 and 49.4 over the 3 frames, the average x I get is 49.3. 
This is what I call the average structure.

After doing this, I wanted to compare my result with what g_covar gives me 
(average.pdb), but I found that the result of my own calculation and the result 
from g_covar are very different. The g_covar command I use is the following:

g_covar -f traj.xtc -s topol.tpr -ascii covar.dat -xpm covar.xpm -noref 

Here I use -noref because I already ran trjconv on my initial trajectory, using 
-pbc mol, -center and -fit rot+trans, to remove translation and rotation and to 
fit the structure. So I thought I could use -nofit in g_covar to avoid fitting 
to the reference structure again.

So, coming back to my question: why would g_covar give me a very different 
result than my simple code does? What does g_covar do to calculate this average 
structure? I thought maybe it does some fitting or other processing beyond what 
I do in my code, or changes the units, so that in the end it doesn't give me 
the same coordinates in average.pdb.

-
Deniz Aydin, BSc. 
Graduate Student

Chemical & Biological Engineering 
Graduate School of Sciences and Engineering 
Koç University, Istanbul, Turkey


Re: [gmx-users] g_covar average.pdb calculation

2013-09-20 Thread Tsjerk Wassenaar
Hi Deniz,

The option -ref/-noref is not what you think it is. You want to use -nofit.
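For instance, keeping your other options, a no-fit invocation would look 
something like this (a sketch; -av names the average-structure output, which is 
average.pdb by default):

g_covar -f traj.xtc -s topol.tpr -nofit -av average.pdb -ascii covar.dat -xpm covar.xpm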

Cheers,

Tsjerk




-- 
Tsjerk A. Wassenaar, Ph.D.


[gmx-users] Significant slowdown in 4.6? (4.6.3)

2013-09-20 Thread Jonathan Saboury
I have an Intel i7-2630QM CPU @ 2.00GHz in my laptop with 4.6.3 installed, and 
a desktop with an i3-3220 running 4.5.5.

I am running the same energy minimization on each of these machines. My desktop 
takes a few seconds; my laptop takes hours. This doesn't make much sense, 
because benchmarks indicate that my laptop should be faster.

The only conclusion I can come up with is that it is the difference in 
versions. Are there any other explanations?

Here are the files I am using: http://www.sendspace.com/file/a1cdch

The list of commands used is in commands.txt.

I use the last command "mdrun -v -deffnm em" on each machine, and the files
were built with 4.6.3 (my laptop).

If you need any more information please let me know.

Thank you!

-Jonathan


[gmx-users] Re: Significant slowdown in 4.6? (4.6.3)

2013-09-20 Thread Jonathan Saboury
Figured out the problem: for some reason one core is 90% occupied by the 
system. If I run with 6 threads, it runs fast. I never experienced this on 
Linux, though; very curious.
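A minimal sketch of what that looks like with the thread-MPI build (the thread 
count is just what worked in this case):

mdrun -nt 6 -v -deffnm em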

Sorry if I wasted your time.

-Jonathan Saboury




[gmx-users] Minimum distance periodic images, protein simulation

2013-09-20 Thread Arun Sharma
Hello,
I ran a 100-ns long simulation of a small protein (trp-cage) at an elevated 
temperature. I analysed the distance between periodic images using

g_mindist -f md-run-1-noPBC.xtc -s md-run-1.tpr -n index.ndx -od mindist.xvg 
-pi 

The output shows that there are situations where the closest distance between 
certain atoms is much less than 1 nm. Conventional wisdom says that if this 
happens the simulation results are questionable. Is this completely true? If it 
is, how would I ensure that this does not happen again? 

I have posted the output of g_mindist at http://postimg.org/image/bnc0ej3nb/

Any comments and clarifications are highly appreciated

Thanks,


[gmx-users] Broken lipid molecules

2013-09-20 Thread Rama
HI,

At the end of an MD run of a membrane protein system, the lipid molecules are 
broken. When I load the .gro and .trr files into VMD to watch the simulation, 
the lipids are broken across the periodic boundaries.

I tried to fix this with trjconv -pbc nojump, but the output contained only 2 
frames, whereas the original trajectory had 1500 frames.

How can I fix the whole MD trajectory?

Thanks
Rama



Re: [gmx-users] Broken lipid molecules

2013-09-20 Thread Justin Lemkul



On 9/20/13 5:21 PM, Rama wrote:

Hi,

At the end of an MD run of a membrane protein system, the lipid molecules are 
broken. When I load the .gro and .trr files into VMD to watch the simulation, 
the lipids are broken across the periodic boundaries.

I tried to fix this with trjconv -pbc nojump, but the output contained only 2 
frames, whereas the original trajectory had 1500 frames.



Either something is wrong with the trajectory or something is wrong with the 
command you gave.  Based on the information at hand, no one can say.



How can I fix the whole MD trajectory?



trjconv -pbc mol
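For example (the input and output file names here are placeholders):

trjconv -f md.trr -s md.tpr -o md_whole.xtc -pbc mol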

-Justin



Re: [gmx-users] Minimum distance periodic images, protein simulation

2013-09-20 Thread Justin Lemkul



On 9/20/13 4:11 PM, Arun Sharma wrote:

Hello,
I ran a 100-ns long simulation of a small protein (trp-cage) at an elevated 
temperature. I analysed the distance between periodic images using

g_mindist -f md-run-1-noPBC.xtc -s md-run-1.tpr -n index.ndx -od mindist.xvg -pi

The output shows that there are situations where the closest distance between 
certain atoms is much less than 1 nm. Conventional wisdom says that if this 
happens the simulation results are questionable. Is this completely true? If it 
is, how would I ensure that this does not happen again?



Using a sufficiently large box (minimum solute-box distance at least equal to 
the longest cutoff) is the general procedure.
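For example, when setting up the box (the file names and the 1.2 nm distance 
are illustrative; choose a distance at least as large as your longest cutoff):

editconf -f protein.gro -o boxed.gro -c -d 1.2 -bt dodecahedron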



I have posted the output of g_mindist at http://postimg.org/image/bnc0ej3nb/



Some configurations definitely come very close, indicating several frames with 
spurious forces throughout the duration of the trajectory.  I would be very 
suspicious of the results.


-Justin
