[gmx-users] GPU problem

2013-06-04 Thread Albert

Dear:

 I've got four GPUs in one workstation. I am trying to run two GPU jobs 
with the commands:


mdrun -s md.tpr -gpu_id 01
mdrun -s md.tpr -gpu_id 23

There are 32 CPU cores in this workstation. I found that each job tries to 
use all of the cores, so there are 64 threads when both GPU mdrun jobs are 
submitted. Moreover, one of the jobs stopped after running for a short while, 
probably because of this CPU oversubscription.


I am just wondering: how can we distribute the CPU cores when we run two GPU 
jobs on a single workstation?


thank you very much

best
Albert


[gmx-users] problem using g_lie

2013-06-04 Thread khushboo bafna
Hi,

I ran a protein-ligand simulation and a ligand-only simulation in water using
GROMACS 4.5.4. I want to find the binding energy of the ligand and used g_lie.

I tried to run g_lie on the ligand-in-water simulation and got the following
result:

Opened md_1.edr as single precision energy file
Using the following energy terms:
LJ:
Coul:

Back Off! I just backed up lie.xvg to ./#lie.xvg.3#
Last energy frame read 2500 time 5000.000
DGbind = -0.681 (-nan)

Can anyone tell me what the problem is and how to go about fixing it?


[gmx-users] About Physical Parameters

2013-06-04 Thread vidhya sankar
Dear Justin & Mark, thank you for your previous reply.

I am doing a simulation of a CNT wrapped by a cyclic peptide in water. For that 
I have used the following parameters for my production run:
ns_type        = grid       
nstlist        = 5           
rlist        = 1.8      
rcoulomb    = 1.2       
vdwtype = Shift
rvdw        = 1.2
coulombtype    =  Reaction-Field-zero    
pme_order    = 4     
fourierspacing    = 0.16
pcoupl        = Parrinello-Rahman    
pcoupltype    = semiisotropic    ; because my system is composed of a purely hydrophobic (CNT) part and a hydrophilic (protein) part immersed in water
tau_p        = 2.0                
ref_p        = 1.0    1.0          
compressibility = 4.5e-5    4.5e-5   

It runs well, but how can I check whether all these parameters are right or wrong?
Is there any rule of thumb for assigning parameters?

Because my system is composed of both a purely hydrophobic (CNT) part and a 
hydrophilic (protein) part immersed in water, I have selected semiisotropic coupling.

Which parameters should I concentrate on, and how do I choose their values 
carefully and rationally?


Re: [gmx-users] GPU problem

2013-06-04 Thread Chandan Choudhury
Hi Albert,

I think using -nt flag (-nt=16) with mdrun would solve your problem.
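For example, something along these lines might work with the 4.6 thread-MPI mdrun (the thread counts, pinning offsets and -deffnm names are only a sketch to adapt):

mdrun -s md.tpr -deffnm job1 -nt 16 -pin on -pinoffset 0  -gpu_id 01
mdrun -s md.tpr -deffnm job2 -nt 16 -pin on -pinoffset 16 -gpu_id 23

That way each job gets its own 16 cores and its own pair of GPUs, instead of the two jobs oversubscribing all 32 cores.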

Chandan


--
Chandan kumar Choudhury
NCL, Pune
INDIA




[gmx-users] REMD run showing temperature range more than the equilibrated.

2013-06-04 Thread suhani nagpal
Hi all !

Well, I'm working on REMD with 96 replicas, spanning a temperature range of
280 K to 425.04 K.

The NVT equilibration works well, and the plots show approximately the
required temperature after equilibration.

Then, after 3 ns of the REMD run, the .edr -> .xvg files show an initial
temperature at least 40-50 K too high, which then gradually decreases to the
target temperature.

For example, replica 0 has a target temperature of 280 K, but for the initial
40 ps it shows temperatures of up to 335 K, which then decline.

Also, around the 21st to 22nd replica, the exchange probability varies a lot,
from 20% to 60%.

So my queries are: how do I resolve this temperature issue, and why is the
exchange probability so erratic at replicas 21-22?


Thanks


Re: [gmx-users] About Physical Parameters

2013-06-04 Thread Justin Lemkul



On 6/4/13 5:06 AM, vidhya sankar wrote:

Dear Justin & Mark, thank you for your previous reply.

I am doing a simulation of a CNT wrapped by a cyclic peptide in water. For that 
I have used the following parameters for my production run:
ns_type= grid
nstlist= 5
rlist= 1.8
rcoulomb= 1.2
vdwtype = Shift
rvdw= 1.2
coulombtype=  Reaction-Field-zero
pme_order= 4
fourierspacing= 0.16
pcoupl= Parrinello-Rahman
pcoupltype= semiisotropic    ; because my system is composed of a purely hydrophobic (CNT) part and a hydrophilic (protein) part immersed in water
tau_p= 2.0
ref_p= 1.01.0
compressibility = 4.5e-54.5e-5

It runs well, but how can I check whether all these parameters are right or wrong?
Is there any rule of thumb for assigning parameters?



The cutoffs look bizarre.  They should be set based on the parent force field.


Because my system is composed of both a purely hydrophobic (CNT) part and a 
hydrophilic (protein) part immersed in water, I have selected semiisotropic coupling.



That does not make sense.  Semiisotropic coupling is for systems that deform in 
x-y and z independently, not due to the chemical nature of the molecules in the 
system.



Which parameters should I concentrate on, and how do I choose their values carefully 
and rationally?



Base your settings on a thorough understanding of your chosen force field and an 
examination of the literature for other algorithms to understand their benefits 
and limitations.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] problem using g_lie

2013-06-04 Thread Justin Lemkul



On 6/4/13 3:56 AM, khushboo bafna wrote:

Hi,

I ran a protein-ligand simulation and a ligand-only simulation in water using
GROMACS 4.5.4. I want to find the binding energy of the ligand and used g_lie.

I tried to run g_lie on the ligand-in-water simulation and got the following
result:

Opened md_1.edr as single precision energy file
Using the following energy terms:
LJ:
Coul:

Back Off! I just backed up lie.xvg to ./#lie.xvg.3#
Last energy frame read 2500 time 5000.000
DGbind = -0.681 (-nan)

Can anyone tell me what the problem is and how to go about fixing it?



To use g_lie, you need to be analyzing an .edr file that comes from the 
protein-ligand simulation, providing the values of the ligand-water interactions 
to -Elj and -Eqq, which are simply extracted from the ligand-water .edr using 
g_energy.
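
For example, roughly like this (file names, the ligand group name, and the average 
values are placeholders; the averages come from the g_energy output of the 
ligand-water run):

g_energy -f ligand_water.edr -o lig_water.xvg    # select the LJ and Coulomb ligand-solvent terms and note their averages
g_lie -f protein_ligand.edr -Elj -47.0 -Eqq -132.0 -ligand LIG -o lie.xvg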


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] High Initial generated Temp

2013-06-04 Thread tarak karmakar
Dear All,

Although I have set gen_temp = 300, the initial temperature generated at the
very beginning of the run is 445.7 K.

gen_vel = yes ; velocity generation
gen_temp= 300
gen_seed= 93873959697

Is it because of bad geometry? In the mailing list I came across this thread:
[ http://comments.gmane.org/gmane.science.biology.gromacs.user/41931]
Prior to this production run, I heated my system slowly from 0 K to 300 K over
a 300 ps time span.
It would be very helpful if someone could suggest a way to deal with this problem.

 Thanks,
Tarak


Re: [gmx-users] High Initial generated Temp

2013-06-04 Thread Justin Lemkul



On 6/4/13 8:07 AM, tarak karmakar wrote:

Dear All,

Although I have set gen_temp = 300, the initial temperature generated at the
very beginning of the run is 445.7 K.

gen_vel = yes ; velocity generation
gen_temp= 300
gen_seed= 93873959697

Is it because of bad geometry? In the mailing list I came across this thread:
[ http://comments.gmane.org/gmane.science.biology.gromacs.user/41931]
Prior to this production run, I heated my system slowly from 0 K to 300 K over
a 300 ps time span.
It would be very helpful if someone could suggest a way to deal with this problem.



A full .mdp file is always more useful than a small snippet.  I have seen this 
same behavior when the "continuation" parameter is incorrectly set - have you 
used "continuation = yes" in your .mdp file?  If not, the constraints, and in 
turn your initial velocities, can get messed up.


A larger point is this - why are you re-generating velocities after heating the 
system from 0 to 300 K?  Why not simply preserve the ensemble at 300 K by using 
a .cpt file?
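
For example (file names are placeholders):

grompp -f md.mdp -c heated.gro -t heated.cpt -p topol.top -o md.tpr

with gen_vel = no and continuation = yes in md.mdp, so the velocities are taken 
from the checkpoint of the heating run instead of being regenerated.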


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] About Physical parameters

2013-06-04 Thread vidhya sankar


Dear Justin, thank you for your previous reply.

I am using the GROMOS 53A6 force field. When I changed the cut-off parameter 
(rlist) to 1.2, I got the notes and warnings below. What is the meaning of 
notes 2 and 3?


NOTE 2 [file cntcycpepfull2.mdp]:
  The switch/shift interaction settings are just for compatibility; you
  will get better performance from applying potential modifiers to your
  interactions!

NOTE 3 [file cntcycpepfull2.mdp]:
  For energy conservation with switch/shift potentials, rlist should be 0.1
  to 0.3 nm larger than rcoulomb.
NOTE 4 [file cntcycpepfull2.mdp]:
  For energy conservation with switch/shift potentials, rlist should be 0.1
  to 0.3 nm larger than rvdw.

WARNING 1 [file cntcycpepfull2.mdp]:
  The sum of the two largest charge group radii (0.518710) is larger than
  rlist (1.20) - rvdw (1.20)

WARNING 2 [file cntcycpepfull2.mdp]:
  The sum of the two largest charge group radii (0.518710) is larger than
  rlist (1.20) - rcoulomb (1.20)

nstlist= 5 

rlist= 1.2
rcoulomb= 1.2 

vdwtype = Shift
rvdw= 1.2
coulombtype=  Reaction-Field-zero 

pme_order= 4
fourierspacing= 0.16
pcoupl= Parrinello-Rahman 

pcoupltype= isotropic   

Thus grompp terminated because of the warnings.

To avoid this I set rlist = 1.8 (in the values above), because the difference 
rlist - rcoulomb should be greater than 0.518710.

But you mailed me that the cut-offs look bizarre and should be based on the 
parent force field (mine is GROMOS 53A6).

I hope your kind suggestions will help me get a successful production mdrun.
Thanks in advance.


Re: [gmx-users] About Physical parameters

2013-06-04 Thread Justin Lemkul



On 6/4/13 8:22 AM, vidhya sankar wrote:



Dear Justin, thank you for your previous reply.

I am using the GROMOS 53A6 force field. When I changed the cut-off parameter 
(rlist) to 1.2, I got the notes and warnings below. What is the meaning of 
notes 2 and 3?


NOTE 2 [file cntcycpepfull2.mdp]:
   The switch/shift interaction settings are just for compatibility; you
   will get better performance from applying potential modifiers to your
   interactions!

NOTE 3 [file cntcycpepfull2.mdp]:
   For energy conservation with switch/shift potentials, rlist should be 0.1
   to 0.3 nm larger than rcoulomb.
NOTE 4 [file cntcycpepfull2.mdp]:
   For energy conservation with switch/shift potentials, rlist should be 0.1
   to 0.3 nm larger than rvdw.

WARNING 1 [file cntcycpepfull2.mdp]:
   The sum of the two largest charge group radii (0.518710) is larger than
   rlist (1.20) - rvdw (1.20)

WARNING 2 [file cntcycpepfull2.mdp]:
   The sum of the two largest charge group radii (0.518710) is larger than
   rlist (1.20) - rcoulomb (1.20)

nstlist= 5

rlist= 1.2
rcoulomb= 1.2

vdwtype = Shift
rvdw= 1.2
coulombtype=  Reaction-Field-zero

pme_order= 4
fourierspacing= 0.16
pcoupl= Parrinello-Rahman

pcoupltype= isotropic

Thus grompp terminated because of the warnings.

To avoid this I set rlist = 1.8 (in the values above), because the difference 
rlist - rcoulomb should be greater than 0.518710.

But you mailed me that the cut-offs look bizarre and should be based on the 
parent force field (mine is GROMOS 53A6).



If you're using Gromos96 53A6, nearly all of your settings are wrong.  Please 
refer to the primary literature for the force field and the many posts in the 
list archive regarding proper use of that force field.  Making ad hoc changes 
just to get grompp to stop complaining is a very error-prone way to do 
simulations; the resulting simulations end up being unstable or invalid.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] Can we connect two boxes together for the simulation?

2013-06-04 Thread Bao Kai
Hi, all,

I want to do NPT simulations with different compositions first. Then I want
to connect the two boxes and continue the NPT simulation.

I mean, after the first simulations, we have two boxes with different compositions.

Can we do that with GROMACS, and if so, how?

Thanks.

Best,
Kai


[gmx-users] Re: gmx 4.6 mpi installation through openmpi?

2013-06-04 Thread escajarro
I received this answer:

Mark Abraham Mon, 03 Jun 2013 08:02:56 -0700

That looks like there's a PGI compiler getting used at some point (perhaps
the internal FFTW build is picking up the CC environment var? I forget how
that gets its compiler!). If you do find * -name config.log then perhaps
you can see in that file what the FFTW build thinks about its compiler.

Mark

I found the config.log file and checked that the compiler being used to
compile the internal FFTW is the same version of gcc that is used to compile
the rest of Gromacs. In fact, I set the environment variable CC to gcc. I
also tried to clean and compile from scratch the FFTW, but no success.

I also tried to specify with the environment variable CMAKE_LIBRARY_PATH
where the FFTW library is (in my case in
gromacs-4.6/src/contrib/fftw/gmxfftw-prefix/lib), but I obtained the same
error. 

Any other ideas?

Thanks





[gmx-users] Running gmx-4.6.x over multiple homogeneous nodes with GPU acceleration

2013-06-04 Thread João Henriques
Dear all,

Since gmx-4.6 came out, I've been particularly interested in taking
advantage of the native GPU acceleration for my simulations. Luckily, I
have access to a cluster with the following specs PER NODE:

CPU
2 E5-2650 (2.0 Ghz, 8-core)

GPU
2 Nvidia K20

I've become quite familiar with the "heterogeneous parallelization" and
"multiple MPI ranks per GPU" schemes on a SINGLE NODE. Everything works
fine, no problems at all.

Currently, I'm working with a nasty system comprising 608159 TIP3P water
molecules, and it would really help to speed things up a bit.
Therefore, I would really like to try to parallelize my system over
multiple nodes and keep the GPU acceleration.

I've tried many different command combinations, but mdrun seems to be blind
towards the GPUs existing on other nodes. It always finds GPUs #0 and #1 on
the first node and tries to fit everything into these, completely
disregarding the existence of the other GPUs on the remaining requested
nodes.

Once again, note that all nodes have exactly the same specs.

The literature on the official gmx website is not, well... you know... in-depth,
and I would really appreciate it if someone could shed some light on this
subject.

Thank you,
Best regards,

-- 
João Henriques


[gmx-users] Performance of Gromacs-4.6.1 on BlueGene/Q

2013-06-04 Thread Jianguo Li
Dear All,


Has anyone benchmarked GROMACS on BlueGene/Q?
I recently installed gromacs-461 on BG/Q using the following commands:
cmake .. -DCMAKE_TOOLCHAIN_FILE=BlueGeneQ-static-XL-C \
  -DGMX_BUILD_OWN_FFTW=ON \
 -DBUILD_SHARED_LIBS=OFF \
 -DGMX_XML=OFF \
 -DCMAKE_INSTALL_PREFIX=/scratch/home/biilijg/package/gromacs-461
make
make install

After that, I did a benchmark simulation using a box of pure water containing 
140k atoms. 
The command I used for the above test is:
srun --ntasks-per-node=32 --overcommit 
/scratch/home/biilijg/package/gromacs-461/bin/mdrun -s box_md1.tpr -c 
box_md1.gro -x box_md1.xtc -g md1.log >& job_md1

And I got the following performance:
Num. cores   hour/ns
128   9.860
256  4.984
512  2.706
1024    1.544
2048    0.978
4092    0.677

The scaling seems OK, but the performance is far from what I expected. In terms 
of per-core performance, the BlueGene is 8 times slower than the other clusters. 
For comparison, I also ran the same simulation using 64 processors on an SGI 
cluster and got 2.8 hours/ns, which is roughly equivalent to using 512 cores 
on BlueGene/Q.

I am wondering whether the above benchmark results are reasonable, or am I 
doing something wrong in the compilation?
Any comments/suggestions are appreciated. Thank you very much!

Have a nice day!
Jianguo 



[gmx-users] how to add sodium acetate

2013-06-04 Thread maggin
Hi, when we add NaCl we can use -pname NA -nname cl in GROMACS.

So for sodium acetate, how do we add it?

Thank you very much!

maggin






Re: [gmx-users] Can we connect two boxes together for the simulation?

2013-06-04 Thread Dr. Vitaly Chaban
editconf is a nice tool to create vacuum in your box. You can then insert one
of your boxes into the other using cat box1.gro box2.gro; just remove the very
last line (the box vectors) of box1.gro.
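
Roughly like this (box sizes and the shift are placeholders, and the title and 
atom-count lines of the combined file still need to be fixed up by hand):

editconf -f box1.gro -o box1_big.gro -box 5 5 10          # make room along z
editconf -f box2.gro -o box2_shift.gro -translate 0 0 5   # move box2 into the empty half
cat box1_big.gro box2_shift.gro > combined.gro            # then edit out the extra header/box-vector lines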

Dr. Vitaly Chaban











[gmx-users] Can we connect two boxes together for the simulation?

2013-06-04 Thread Bao Kai
Hi,


I guess the renumbering of the atoms and molecules will be a problem,
especially when the two boxes contain the same types of molecules.

How can we handle that?


Thanks.


Best,

Kai






Re: [gmx-users] Performance of Gromacs-4.6.1 on BlueGene/Q

2013-06-04 Thread XAvier Periole

BG CPUs are generally much slower (clock-wise) but scale better.

You should try to run on 64 CPUs on the BlueGene too for a fair comparison. 
The number of CPUs per node is also an important factor: the more CPUs per 
node, the more communication needs to be done. I observed a significant 
slowdown when going from 16-CPU to 32-CPU nodes (recent Intel) while using 
the same total number of CPUs.



Re: [gmx-users] Performance of Gromacs-4.6.1 on BlueGene/Q

2013-06-04 Thread Mark Abraham
On Tue, Jun 4, 2013 at 4:20 PM, XAvier Periole  wrote:

>
> BG CPUs are generally much slower (clock-wise) but scale better.
>
> You should try to run on 64 CPUs on the BlueGene too for a fair comparison.
> The number of CPUs per node is also an important factor: the more CPUs
> per node, the more communication needs to be done. I observed a
> significant slowdown when going from 16-CPU to 32-CPU nodes (recent Intel)
> while using the same total number of CPUs.
>

Indeed. Moreover, there is not (yet) any instruction-level parallelism in
the GROMACS kernels used on BG/Q, unlike for the x86 family. So there is a
theoretical factor of four that is simply not being exploited. (And no, the
compiler is not good enough to do it automatically ;-))

Mark




Re: [gmx-users] Performance of Gromacs-4.6.1 on BlueGene/Q

2013-06-04 Thread Jianguo Li
Thank you, XAvier.
The thing is that the cluster manager set the minimum number of cores per job 
on BlueGene/Q to 128, so I cannot use 64 cores. But according to the performance, 
512 cores on the BlueGene are roughly equivalent to 64 cores on the other cluster. 
Since there are 16 cores on each compute card, the total number of cores I used 
on BlueGene/Q is num_cards times 16. So in my tests I actually ran simulations 
using different numbers of cards, from 8 to 256, but on each card I used 32 MPI 
tasks (since BlueGene accepts up to 4 tasks per core). The following is the 
script I submitted to the BlueGene:

#!/bin/sh
#SBATCH --nodes=128
# set Use 128 Compute Cards ( 1x Compute Card = 16 cores, 128x16 = 2048 cores )
#SBATCH --job-name="128x16x2"
# set Job name
#SBATCH --output="first-job-sample"
# set Output file
#SBATCH --partition="training"
# set Job queue ( default is normal )

srun --ntasks-per-node=32 --overcommit 
/scratch/home/biilijg/package/gromacs-461/bin/mdrun -s box_md1.tpr -c 
box_md1.gro -x box_md1.xtc -g md1.log >& job_md1

Cheers 
Jianguo 








Re: [gmx-users] Performance of Gromacs-4.6.1 on BlueGene/Q

2013-06-04 Thread Jianguo Li


Thank you, Mark and Xavier.

The thing is that the cluster manager set the minimum number of cores per job 
on BlueGene/Q to 128, so I cannot use 64 cores. But according to the performance, 
512 cores on the BlueGene are roughly equivalent to 64 cores on the other cluster. 
Since there are 16 cores on each compute card, the total number of cores I used 
on BlueGene/Q is num_cards times 16. So in my tests I actually ran simulations 
using different numbers of cards, from 8 to 256. 

The following is the script I submitted to the BlueGene using 128 compute 
cards:

#!/bin/sh
#SBATCH --nodes=128
# set Use 128 Compute Cards ( 1x Compute Card = 16 cores, 128x16 = 2048 cores )
#SBATCH --job-name="128x16x2"
# set Job name
#SBATCH --output="first-job-sample"
# set Output file
#SBATCH --partition="training"

srun --ntasks-per-node=32 --overcommit \
  /scratch/home/biilijg/package/gromacs-461/bin/mdrun -s box_md1.tpr -c box_md1.gro -x box_md1.xtc -g md1.log >& job_md1

Since BlueGene/Q accepts up to 4 tasks per core, I used 32 MPI tasks for each 
card (2 tasks per core). I tried --ntasks-per-node=64, but the simulations got 
much slower.
Is there an optimal number for --ntasks-per-node?

Cheers 
Jianguo 







Re: [gmx-users] Can we connect two boxes together for the simulation?

2013-06-04 Thread Justin Lemkul



On 6/4/13 8:46 AM, Bao Kai wrote:

Hi, all,

I want to do NPT simulations with different compositions first. Then I want
to connect the two boxes and continue the NPT simulation.

I mean, after the first simulations, we have two boxes with different compositions.

Can we do that with GROMACS, and if so, how?



One possible approach:

http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/biphasic/index.html

-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] how to add sodium acetate

2013-06-04 Thread Justin Lemkul



On 6/4/13 10:05 AM, maggin wrote:

Hi, when we add NaCl we can use -pname NA -nname cl in GROMACS.

So for sodium acetate, how do we add it?



Use genbox -ci -nmol to add the acetate molecules, then genion to add Na+ ions.
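
For example (file names and counts are placeholders; the acetate topology must 
already be defined, and the matching [ molecules ] entry added to topol.top):

genbox -cp solvated.gro -ci acetate.gro -nmol 10 -o withace.gro
grompp -f ions.mdp -c withace.gro -p topol.top -o ions.tpr
genion -s ions.tpr -p topol.top -pname NA -np 10 -o withions.gro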

-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Can we connect two boxes together for the simulation?

2013-06-04 Thread Justin Lemkul



On 6/4/13 10:12 AM, Bao Kai wrote:

Hi,


I guess the renumbering of the atoms and molecules will be a problem,
especially when the two boxes contain the same types of molecules.


How can we handle that?



genconf -renumber
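
For example (file names are placeholders): genconf -f combined.gro -o combined_renum.gro -renumber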

-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] About Coulomb & van der Waals cutoffs

2013-06-04 Thread vidhya sankar
Dear Justin, thank you for your previous reply.

I am using Gromos96 53A6, so I am using the following parameters:
ns_type = grid   
nstlist = 5   
rlist   = 0.9 
rcoulomb    = 0.9  
rvdw    = 1.4  

Are these values reasonable?
But when I ran grompp I got warnings as follows:
nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, setting
  nstcomm to nstcalcenergy

NOTE 2 [file CNTPEPnvt.mdp]:
  You are using a cut-off for VdW interactions with NVE, for good energy
  conservation use vdwtype = Shift (possibly with DispCorr)

NOTE 3 [file CNTPEPnvt.mdp]:
  You are using a cut-off for electrostatics with NVE, for good energy
  conservation use coulombtype = PME-Switch or Reaction-Field-zero

If I change to vdwtype = Shift with DispCorr = EnerPres,

I get the following error:
WARNING 2 [file CNTPEPnvt.mdp]:
  The sum of the two largest charge group radii (0.498722) is larger than
  rlist (0.90) - rvdw (1.40)
How can I avoid this error? (Because of the warnings, grompp_d terminated.)

Thanks in advance.


Re: [gmx-users] About Coulomb & van der Waals cutoffs

2013-06-04 Thread Justin Lemkul



On 6/4/13 11:25 AM, vidhya sankar wrote:

Dear Justin, thank you for your previous reply.

I am using Gromos96 53A6, so I am using the following parameters:
ns_type = grid
nstlist = 5
rlist   = 0.9
rcoulomb= 0.9
rvdw= 1.4

Are these values reasonable?


Yes.


But when I ran grompp I got warnings as follows:
nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, setting
   nstcomm to nstcalcenergy

NOTE 2 [file CNTPEPnvt.mdp]:
   You are using a cut-off for VdW interactions with NVE, for good energy
   conservation use vdwtype = Shift (possibly with DispCorr)

NOTE 3 [file CNTPEPnvt.mdp]:
   You are using a cut-off for electrostatics with NVE, for good energy
   conservation use coulombtype = PME-Switch or Reaction-Field-zero

If I change to vdwtype = Shift with DispCorr = EnerPres,

I get the following error:
WARNING 2 [file CNTPEPnvt.mdp]:
   The sum of the two largest charge group radii (0.498722) is larger than
   rlist (0.90) - rvdw (1.40)
How can I avoid this error? (Because of the warnings, grompp_d terminated.)



The notes and warnings arise from settings that are still incorrect.  Gromos96 
force fields do not use shifted potentials for vdW interactions; they are simply 
truncated at rvdw using a plain cutoff.  The plain cutoff for Coulombic 
interactions is also wrong.  The original Gromos96 development was done with a 
reaction field term, but more recent work has shown PME to be superior.
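
As a sketch of what that typically looks like in an .mdp for Gromos96 53A6 (verify 
these against the original force-field papers before relying on them):

rlist        = 0.9
coulombtype  = PME
rcoulomb     = 0.9
vdwtype      = Cut-off
rvdw         = 1.4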


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Performance of Gromacs-4.6.1 on BlueGene/Q

2013-06-04 Thread Mark Abraham

The threads-per-core thing will surely be useless for GROMACS. Even our
unoptimized kernels will saturate the available flops. There is simply
nothing to overlap, so you lose overall from the extra overhead. You should aim
at 16 threads per node, one for each A2 core. Each of those 16 need not be
an MPI process, however.

There's some general background info here
http://www.gromacs.org/Documentation/Acceleration_and_parallelization.
Relevant to BG/Q is that you will be using real MPI and should use OpenMP
and the Verlet kernels (see
http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Multi-level_parallelization.3a_MPI.2fthread-MPI_.2b_OpenMP).
Finding the right balance of OpenMP threads per MPI process is hardware-
and problem-dependent, so you will need to experiment there.
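
As one hypothetical starting point (the rank/thread split is exactly the thing to 
experiment with, not a recipe):

export OMP_NUM_THREADS=4
srun --ntasks-per-node=4 /scratch/home/biilijg/package/gromacs-461/bin/mdrun -ntomp 4 -s box_md1.tpr ...

i.e. 4 MPI ranks x 4 OpenMP threads = 16 threads per node, one per A2 core, with 
cutoff-scheme = Verlet in the .mdp so that the OpenMP-capable kernels are used.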

Mark


Re: [gmx-users] Re: gmx 4.6 mpi installation through openmpi?

2013-06-04 Thread Mark Abraham
Compile and install your own FFTW per the install guide? At least that
eliminates a variable.
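
For instance, something along these lines (paths and SIMD flags are placeholders; 
see the install guide for the recommended options):

./configure --enable-float --enable-sse2 --prefix=$HOME/fftw-3.3.3
make && make install
cmake .. -DGMX_BUILD_OWN_FFTW=OFF -DCMAKE_PREFIX_PATH=$HOME/fftw-3.3.3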

Mark




[gmx-users] pullx file content with pull_geometry = position

2013-06-04 Thread Bastien Loubet
Dear gromacs user,

I ran a simulation where I restrain two groups at a specific position with
respect to one another: one is part of a protein (index group name:
pull_group), the other is just one ion (index group name: r_60022).
I have the following parameters in my mdp file:

*
; Pull code
pull= umbrella
pull_geometry   = position   ;
pull_dim= Y Y Y
pull_start  = no
pull_ngroups= 1
pull_group0 = pull_group
pull_group1 = r_60022
pull_init1  = -.4110 1.0700 1.7760
pull_rate1  = 0.0
pull_vec1   = -0.264 0.315 0.912
pull_k1 = 1000  ; kJ mol^-1 nm^-2
pull_nstxout= 100  ; every 0.2 ps
pull_nstfout= 100  ; every 0.2 ps
**

So the ion should be restrained around the position pull_init1 with respect
to the center of mass of the protein.
Here is a sample of the obtained pullx.xvg file:

**
@title "Pull COM"
@xaxis  label "Time (ps)"
@yaxis  label "Position (nm)"
@TYPE xy
@ view 0.15, 0.15, 0.75, 0.85
@ legend on
@ legend box on
@ legend loctype view
@ legend 0.78, 0.8
@ legend length 2
@ s0 legend "0 X"
@ s1 legend "0 Y"
@ s2 legend "0 Z"
@ s3 legend "1 dX"
@ s4 legend "1 dY"
@ s5 legend "1 dZ"
0.  3.43102 5.2156  11.5301 -0.531018   1.0244  2.13988
0.2000  3.43096 5.21543 11.53   -0.544007   1.02157 2.14952
0.4000  3.43094 5.2146  11.5304 -0.551138   1.00177 2.12238
0.6000  3.43074 5.21354 11.5312 -0.559285   1.01417 2.13649
(...)
20. 3.45331 5.22215 11.51   -0.646908   0.9390662.11768
(...)
40. 3.45141 5.21115 11.5685 -0.689742   0.9789082.14589
(...)
*

As far as I understood the manual, and after searching the forum, I thought
that the first column is the time, the next three columns are the position of
the pulled group (r_60022) at that time, and the last three columns are the
distance in the x, y, and z directions between the reference group (pull_group,
a bad name choice, I know) and the pulled group (r_60022). If that were true, it
would suggest that the ion is pulled more than three angstroms away from the
center of the umbrella potential.
After checking the trajectory visually and by calculating the distance
between pull_group and r_60022 with g_dist on the trajectory, we can see this
is not the case. Here is a sample of the g_dist output:


@title "Distance"
@xaxis  label "Time (ps)"
@yaxis  label "Distance (nm)"
@TYPE xy
@ view 0.15, 0.15, 0.75, 0.85
@ legend on
@ legend box on
@ legend loctype view
@ legend 0.78, 0.8
@ legend length 2
@ s0 legend "|d|"
@ s1 legend "d\sx\N"
@ s2 legend "d\sy\N"
@ s3 legend "d\sz\N"
   0.0002.1085827   -0.37829451.13508751.7362576
  20.0002.0527425   -0.44645331.07192131.6927538
  40.0002.1132109   -0.34997841.12395621.7549639
(...)
***

These oscillate correctly around the pull_init1 vector as defined in the mdp
file.

More importantly, these results are different from the results in the
pullx.xvg file (at least from the results I expected to have).

Can somebody clarify what is actually in the pullx.xvg file?

Best,

Bastien Loubet





[gmx-users] ngmx not installed in gmx4.6.1

2013-06-04 Thread Chandan Choudhury
Dear gmx users,

I had installed gromacs 4.6.1 using cmake. All the binaries are installed,
but surprisingly I do not find ngmx executable. Can anyone guide me how do
I install ngmx using cmake.

Chandan
--
Chandan kumar Choudhury
NCL, Pune
INDIA


Re: [gmx-users] ngmx not installed in gmx4.6.1

2013-06-04 Thread David van der Spoel

On 2013-06-04 17:55, Chandan Choudhury wrote:

Dear gmx users,

I had installed gromacs 4.6.1 using cmake. All the binaries are installed,
but surprisingly I do not find ngmx executable. Can anyone guide me how do
I install ngmx using cmake.


cmake -DGMX_X11=ON
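
For completeness, a minimal out-of-source build along those lines might look like this (the install prefix is a placeholder, and the X11/Motif development headers have to be available for ngmx to be built):

  mkdir build && cd build
  cmake .. -DGMX_X11=ON -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-4.6.1
  make && make install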


Chandan
--
Chandan kumar Choudhury
NCL, Pune
INDIA




--
David van der Spoel, Ph.D., Professor of Biology
Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone:  +46184714205.
sp...@xray.bmc.uu.se    http://folding.bmc.uu.se


Re: [gmx-users] Running gmx-4.6.x over multiple homogeneous nodes with GPU acceleration

2013-06-04 Thread Mark Abraham
Yes, the documentation and output are not optimal. Resources are limited, sorry.
Some of these issues are discussed in
http://bugzilla.gromacs.org/issues/1135. The good news is that it sounds
like you are having a non-problem. The output tacitly assumes
homogeneity. If your performance results scale linearly over small numbers of
nodes, then you're doing fine.

Mark

On Tue, Jun 4, 2013 at 3:31 PM, João Henriques <
joao.henriques.32...@gmail.com> wrote:

> Dear all,
>
> Since gmx-4.6 came out, I've been particularly interested in taking
> advantage of the native GPU acceleration for my simulations. Luckily, I
> have access to a cluster with the following specs PER NODE:
>
> CPU
> 2 E5-2650 (2.0 Ghz, 8-core)
>
> GPU
> 2 Nvidia K20
>
> I've become quite familiar with the "heterogenous parallelization" and
> "multiple MPI ranks per GPU" schemes on a SINGLE NODE. Everything works
> fine, no problems at all.
>
> Currently, I'm working with a nasty system comprising 608159 tip3p water
> molecules and it would really help to accelerate things up a bit.
> Therefore, I would really like to try to parallelize my system over
> multiple nodes and keep the GPU acceleration.
>
> I've tried many different command combinations, but mdrun seems to be blind
> towards the GPUs existing on other nodes. It always finds GPUs #0 and #1 on
> the first node and tries to fit everything into these, completely
> disregarding the existence of the other GPUs on the remaining requested
> nodes.
>
> Once again, note that all nodes have exactly the same specs.
>
> Literature on the official gmx website is not, well... you know... in-depth
> and I would really appreciate if someone could shed some light into this
> subject.
>
> Thank you,
> Best regards,
>
> --
> João Henriques


[gmx-users] pdb2gmx

2013-06-04 Thread Valentina Erastova
Hello all, 

I am converting a .pdb into .gro and .top and assigning the ClayFF force field, which
is not included in GROMACS.

It's a strange FF, as it doesn't have bonds between all of the atoms, just O-H, but
it has an angle between a metal and OH. I therefore wrote out molecule.rtp (see
below) to include the angles, BUT when I convert to the .top I only get
bonds, but not angles (see below). Is there something I am missing, or does
gromacs not assign angles if no bonds are assigned?

Thank you!



molecule.rtp
[ B31 ] ;# ldh21 
  [ atoms ]
 ; atomname atomtype charge chargegroup
; charges were taken from paper by Kuang Wu and J R Schmidt; J Phys Chem C 
(2012)
; and edited to make it fit the LDH31 model and produce +1
   o1  ohs  -1.113   0
   h1  ho 0.464   0
   o2  ohs  -1.113   0
   h2  ho 0.464   0
   o3  ohs  -1.113   0
   h3  ho 0.464   0
   o4  ohs  -1.113   0
   h4  ho 0.464   0
   mg1 mgo   1.403   0
   al1 ao1.983   0
   o5  ohs  -1.113   0
   h5  ho 0.464   0
   o6  ohs  -1.113   0
   h6  ho 0.464   0
   o7  ohs  -1.113   0
   h7  ho 0.464   0
   o8  ohs   -1.113   0
   h8  ho 0.464   0
   mg2 mgo   1.403   0
   mg3 mgo   1.403   0
  [ bonds ]
 ; atom1 atom2   parameters   index
   o1 h1 ohs 
   o2 h2 ohs 
   o3 h3 ohs 
   o4 h4 ohs 
   o5 h5 ohs 
   o6 h6 ohs 
   o7 h7 ohs 
   o8 h8 ohs  
  [ angles ]
 ;  aiajak   gromos type 
mg1   o1   h1   moh
mg1   o8   h8   moh
mg1   o4   h4   moh
mg1   o6   h6   moh
mg1   o3   h3   moh
mg1   o5   h5   moh
al1   o3   h3   moh
al1   o5   h5   moh
al1   o2   h2   moh
al1   o7   h7   moh
al1   o4   h4   moh
al1   o6   h6   moh
mg2   o8   h8   moh
mg2   o2   h2   moh
mg2   o5   h5   moh
mg2   o4   h4   moh
mg2   o7   h7   moh
mg2   o1   h1   moh
mg3   o7   h7   moh
mg3   o1   h1   moh
mg3   o6   h6   moh
mg3   o3   h3   moh
mg3   o8   h8   moh
mg3   o2   h2   moh



ldh31.top
; Include forcefield parameters
#include "./ClayFF.ff/forcefield.itp"

[ moleculetype ]
; Name        nrexcl
Other   3

[ atoms ]
;   nr   type  resnr residue  atom   cgnr charge   mass  typeB
chargeB  massB
; residue   1 B31 rtp B31  q +1.0
 1ohs  1B31 o1  1 -1.113 16   ; qtot 
-1.113
 2 ho  1B31 h1  1  0.464  1.008   ; qtot 
-0.649
 3ohs  1B31 o2  1 -1.113 16   ; qtot 
-1.762
 4 ho  1B31 h2  1  0.464  1.008   ; qtot 
-1.298
 5ohs  1B31 o3  1 -1.113 16   ; qtot 
-2.411
 6 ho  1B31 h3  1  0.464  1.008   ; qtot 
-1.947
 7ohs  1B31 o4  1 -1.113 16   ; qtot 
-3.06
 8 ho  1B31 h4  1  0.464  1.008   ; qtot 
-2.596
 9mgo  1B31mg1  1  1.403  24.31   ; qtot 
-1.193
10 ao  1B31al1  1  1.983  26.98   ; qtot 
0.79
11ohs  1B31 o5  1 -1.113 16   ; qtot 
-0.323
12 ho  1B31 h5  1  0.464  1.008   ; qtot 
0.141
13ohs  1B31 o6  1 -1.113 16   ; qtot 
-0.972
14 ho  1B31 h6  1  0.464  1.008   ; qtot 
-0.508
15ohs  1B31 o7  1 -1.113 16   ; qtot 
-1.621
16 ho  1B31 h7  1  0.464  1.008   ; qtot 
-1.157
17ohs  1B31 o8  1 -1.113 16   ; qtot 
-2.27
18 ho  1B31 h8  1  0.464  1.008   ; qtot 
-1.806
19mgo  1B31mg2  1  1.403  24.31   ; qtot 
-0.403
20mgo  1B31mg3  1  1.403  24.31   ; qtot 1

[ bonds ]
;  ai    aj    funct    c0    c1    c2    c3
1 2 1ohs
3 4 1ohs
5 6 1ohs
7 8 1ohs
   1112 1ohs
   1314 1ohs
   1516 1ohs
   1718 1ohs

; Include Position restraint file
#ifdef POSRES
#include "posre.itp"
#endif

[ system ]
; Name
God Rules Over Mankind, Animals, Cosmos and Such

[ molecules ]
; Compound        #mols
Other   1


Re: [gmx-users] ngmx not installed in gmx4.6.1

2013-06-04 Thread Mark Abraham
This has been default behaviour for years. See pre-conditions here
http://www.gromacs.org/Documentation/Installation_Instructions#.c2.a7_3.5._Optional_build_components.
 David's email has one way to solve the problem.

Mark


On Tue, Jun 4, 2013 at 5:55 PM, Chandan Choudhury  wrote:

> Dear gmx users,
>
> I had installed gromacs 4.6.1 using cmake. All the binaries are installed,
> but surprisingly I do not find ngmx executable. Can anyone guide me how do
> I install ngmx using cmake.
>
> Chandan
> --
> Chandan kumar Choudhury
> NCL, Pune
> INDIA


Re: [gmx-users] High Initial generated Temp

2013-06-04 Thread tarak karmakar
Thanks Justin.
Sorry for not uploading the full .mdp. Here it is,

; 7.3.3 Run Control
integrator  = md
tinit   = 0
dt  = 0.001
nsteps  = 500
nstcomm = 1
comm_grps   = system
comm_mode   = linear
energygrps  = system

; 7.3.8 Output Control
nstxout = 5000
nstfout = 5000
nstlog  = 1000
nstenergy   = 1000
nstxtcout   = 1000
xtc_precision   = 1000
xtc_grps= System

; 7.3.9 Neighbor Searching
nstlist = 10
ns_type = grid
pbc = xyz
rlist   = 1.2
rlistlong   = 1.4

; 7.3.10 Electrostatics
coulombtype = PME
rcoulomb= 1.2
fourierspacing  = 0.12
pme_order   = 4
ewald_rtol  = 1e-5

; 7.3.11 VdW
vdwtype = switch
rvdw= 1.2
rvdw-switch = 1.0

DispCorr= Ener


; 7.3.14 Temperature Coupling
tcoupl  = nose-hoover
tc_grps = system
tau_t   = 1.0
ref_t   = 300

; 7.3.15 Pressure Coupling
pcoupl  = parrinello-rahman
pcoupltype  = isotropic
tau_p   = 1.0
compressibility = 4.5e-5
ref_p   = 1.0

gen_vel = yes
gen_temp= 300
gen_seed= 93873959697

; 7.3.18 Bonds
constraints = h-bonds
constraint_algorithm= LINCS
continuation= yes
lincs_order = 4
lincs_iter  = 1
lincs_warnangle = 30

Note: Using CHARMM27 force field

I didn't use the 'continuation' part here.
In the heating run I didn't put any constraints, but in the production MD I
do apply constraints to the covalent bonds involving hydrogens. I just want
to test the ligand movement inside the protein cavity with different sets of
initial velocities, to get a feeling for how the ligand interacts with
certain residues.
So, should I then use these different velocity-generating seeds during the
warm-up step?

Thanks,
Tarak







On Tue, Jun 4, 2013 at 5:46 PM, Justin Lemkul  wrote:

>
>
> On 6/4/13 8:07 AM, tarak karmakar wrote:
>
>> Dear All,
>>
>> Although I have set gen_temp = 300, it is showing the initial temperature
>> 445.7 K, generated at the very beginning  of the run.
>>
>> gen_vel = yes ; velocity generation
>> gen_temp= 300
>> gen_seed= 93873959697
>>
>> Is it because of a bad geometry? In mailing list I came across this thread
>> [ 
>> http://comments.gmane.org/gmane.science.biology.gromacs.user/41931
>> ]
>> Prior to this production run, I heated my system slowly from 0 K to 300 K
>> within 300 ps time span.
>> It would be very helpful if someone suggests me the way to deal with this
>> problem?
>>
>>
> A full .mdp file is always more useful than a small snippet.  I have seen
> this same behavior when the "continuation" parameter is incorrectly set -
> have you used "continuation = yes" in your .mdp file?  If not, the
> constraints get messed up and your initial velocities can get all messed up.
>
> A larger point is this - why are you re-generating velocities after
> heating the system from 0 to 300 K?  Why not simply preserve the ensemble
> at 300 K by using a .cpt file?
>
> -Justin
>
> --
> ==**==
>
> Justin A. Lemkul, Ph.D.
> Research Scientist
> Department of Biochemistry
> Virginia Tech
> Blacksburg, VA
> jalemkul[at]vt.edu | (540) 231-9080
> http://www.bevanlab.biochem.**vt.edu/Pages/Personal/justin
>
> ==**==


Re: [gmx-users] High Initial generated Temp

2013-06-04 Thread tarak karmakar
I'm extremely sorry for copying the other '.mdp' file here. This is the
modified one I just created after seeing your reply. In the previous case I
didn't use 'continuation'.


On Tue, Jun 4, 2013 at 9:47 PM, tarak karmakar  wrote:

> Thanks Justin.
> Sorry for not uploading the full .mdp. Here it is,
>
> ; 7.3.3 Run Control
> integrator  = md
> tinit   = 0
> dt  = 0.001
> nsteps  = 500
> nstcomm = 1
> comm_grps   = system
> comm_mode   = linear
> energygrps  = system
>
> ; 7.3.8 Output Control
> nstxout = 5000
> nstfout = 5000
> nstlog  = 1000
> nstenergy   = 1000
> nstxtcout   = 1000
> xtc_precision   = 1000
> xtc_grps= System
>
> ; 7.3.9 Neighbor Searching
> nstlist = 10
> ns_type = grid
> pbc = xyz
> rlist   = 1.2
> rlistlong   = 1.4
>
> ; 7.3.10 Electrostatics
> coulombtype = PME
> rcoulomb= 1.2
> fourierspacing  = 0.12
> pme_order   = 4
> ewald_rtol  = 1e-5
>
> ; 7.3.11 VdW
> vdwtype = switch
> rvdw= 1.2
> rvdw-switch = 1.0
>
> DispCorr= Ener
>
>
> ; 7.3.14 Temperature Coupling
> tcoupl  = nose-hoover
> tc_grps = system
> tau_t   = 1.0
> ref_t   = 300
>
> ; 7.3.15 Pressure Coupling
> pcoupl  = parrinello-rahman
> pcoupltype  = isotropic
> tau_p   = 1.0
> compressibility = 4.5e-5
> ref_p   = 1.0
>
> gen_vel = yes
> gen_temp= 300
> gen_seed= 93873959697
>
> ; 7.3.18 Bonds
> constraints = h-bonds
> constraint_algorithm= LINCS
> continuation= yes
> lincs_order = 4
> lincs_iter  = 1
> lincs_warnangle = 30
>
> Note: Using CHARMM27 force field
>
> I didn't use the 'continuation' part here.
> In the heating run I didn't put any constraints but in the production MD,
> I do apply constraints to the covalent bonds involving hydrogens. I just
> want to test the ligand movement inside the protein cavity in different set
> of initial velocities to get the feelings of how ligand is interacting with
> certain residues.
> So, then should I use these different velocity generating seeds during the
> warm up step?
>
> Thanks,
> Tarak
>
>
>
>
>
>
>
> On Tue, Jun 4, 2013 at 5:46 PM, Justin Lemkul  wrote:
>
>>
>>
>> On 6/4/13 8:07 AM, tarak karmakar wrote:
>>
>>> Dear All,
>>>
>>> Although I have set gen_temp = 300, it is showing the initial temperature
>>> 445.7 K, generated at the very beginning  of the run.
>>>
>>> gen_vel = yes ; velocity generation
>>> gen_temp= 300
>>> gen_seed= 93873959697
>>>
>>> Is it because of a bad geometry? In mailing list I came across this
>>> thread
>>> [ 
>>> http://comments.gmane.org/gmane.science.biology.gromacs.user/41931
>>> ]
>>> Prior to this production run, I heated my system slowly from 0 K to 300 K
>>> within 300 ps time span.
>>> It would be very helpful if someone suggests me the way to deal with this
>>> problem?
>>>
>>>
>> A full .mdp file is always more useful than a small snippet.  I have seen
>> this same behavior when the "continuation" parameter is incorrectly set -
>> have you used "continuation = yes" in your .mdp file?  If not, the
>> constraints get messed up and your initial velocities can get all messed up.
>>
>> A larger point is this - why are you re-generating velocities after
>> heating the system from 0 to 300 K?  Why not simply preserve the ensemble
>> at 300 K by using a .cpt file?
>>
>> -Justin
>>
>> --
>> ==**==
>>
>> Justin A. Lemkul, Ph.D.
>> Research Scientist
>> Department of Biochemistry
>> Virginia Tech
>> Blacksburg, VA
>> jalemkul[at]vt.edu | (540) 231-9080
>> http://www.bevanlab.biochem.**vt.edu/Pages/Personal/justin
>>
>> ==**==

Re: [gmx-users] Running gmx-4.6.x over multiple homogeneous nodes with GPU acceleration

2013-06-04 Thread Szilárd Páll
mdrun is not blind; it's just that the current design does not report the hardware
of all compute nodes used. Whatever CPU/GPU hardware mdrun reports in
the log/std output is *only* what rank 0, i.e. the first MPI process,
detects. If you have a heterogeneous hardware configuration, in most
cases you should be able to run just fine, but you'll still get only
the hardware the first rank sits on reported.

Hence, if you want to run on 5 of the nodes you mention, you just do:
mpirun -np 10 mdrun_mpi [-gpu_id 01]

You may want to try both -ntomp 8 and -ntomp 16 (using HyperThreading
does not always help).

Also note that if you use GPU sharing among ranks (in order to use <8
threads/rank), disabling dynamic load balancing may help for some technical
reasons - especially if you have a homogeneous simulation
system (and hardware setup).
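
Putting that together for, say, 5 of those nodes, a sketch of the full command
line could be (the .tpr name is a placeholder, and whether -ntomp 8 or 16 and
-dlb no actually help has to be benchmarked on your system):

  mpirun -np 10 mdrun_mpi -s topol.tpr -gpu_id 01 -ntomp 8 -dlb no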


Cheers,
--
Szilárd


On Tue, Jun 4, 2013 at 3:31 PM, João Henriques
 wrote:
> Dear all,
>
> Since gmx-4.6 came out, I've been particularly interested in taking
> advantage of the native GPU acceleration for my simulations. Luckily, I
> have access to a cluster with the following specs PER NODE:
>
> CPU
> 2 E5-2650 (2.0 Ghz, 8-core)
>
> GPU
> 2 Nvidia K20
>
> I've become quite familiar with the "heterogenous parallelization" and
> "multiple MPI ranks per GPU" schemes on a SINGLE NODE. Everything works
> fine, no problems at all.
>
> Currently, I'm working with a nasty system comprising 608159 tip3p water
> molecules and it would really help to accelerate things up a bit.
> Therefore, I would really like to try to parallelize my system over
> multiple nodes and keep the GPU acceleration.
>
> I've tried many different command combinations, but mdrun seems to be blind
> towards the GPUs existing on other nodes. It always finds GPUs #0 and #1 on
> the first node and tries to fit everything into these, completely
> disregarding the existence of the other GPUs on the remaining requested
> nodes.
>
> Once again, note that all nodes have exactly the same specs.
>
> Literature on the official gmx website is not, well... you know... in-depth
> and I would really appreciate if someone could shed some light into this
> subject.
>
> Thank you,
> Best regards,
>
> --
> João Henriques


Re: [gmx-users] High Initial generated Temp

2013-06-04 Thread Justin Lemkul



On 6/4/13 12:17 PM, tarak karmakar wrote:

Thanks Justin.
Sorry for not uploading the full .mdp. Here it is,

; 7.3.3 Run Control
integrator  = md
tinit   = 0
dt  = 0.001
nsteps  = 500
nstcomm = 1
comm_grps   = system
comm_mode   = linear
energygrps  = system

; 7.3.8 Output Control
nstxout = 5000
nstfout = 5000
nstlog  = 1000
nstenergy   = 1000
nstxtcout   = 1000
xtc_precision   = 1000
xtc_grps= System

; 7.3.9 Neighbor Searching
nstlist = 10
ns_type = grid
pbc = xyz
rlist   = 1.2
rlistlong   = 1.4

; 7.3.10 Electrostatics
coulombtype = PME
rcoulomb= 1.2
fourierspacing  = 0.12
pme_order   = 4
ewald_rtol  = 1e-5

; 7.3.11 VdW
vdwtype = switch
rvdw= 1.2
rvdw-switch = 1.0

DispCorr= Ener


; 7.3.14 Temperature Coupling
tcoupl  = nose-hoover
tc_grps = system
tau_t   = 1.0
ref_t   = 300

; 7.3.15 Pressure Coupling
pcoupl  = parrinello-rahman
pcoupltype  = isotropic
tau_p   = 1.0
compressibility = 4.5e-5
ref_p   = 1.0

gen_vel = yes
gen_temp= 300
gen_seed= 93873959697

; 7.3.18 Bonds
constraints = h-bonds
constraint_algorithm= LINCS
continuation= yes
lincs_order = 4
lincs_iter  = 1
lincs_warnangle = 30

Note: Using CHARMM27 force field

I didn't use the 'continuation' part here.
In the heating run I didn't put any constraints but in the production MD, I
do apply constraints to the covalent bonds involving hydrogens. I just want


The introduction of constraints explains the observed behavior.  You ran an 
unconstrained simulation, then at step 0 of the constrained simulation, the 
constraints have to be satisfied, which may introduce sudden movement in atomic 
positions, hence large velocities and a high temperature.  The rule of thumb I 
always use - if you're going to use constraints during production simulations, 
use constraints during equilibration.  I have seen several instances where 
unconstrained equilibration causes constrained simulations to later fail.
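
As a sketch of that rule of thumb (the values are only illustrative, not a
recommendation), the equilibration and production .mdp files would share the
same constraint settings and differ only in how the run is started:

  ; equilibration (first run, velocities generated)
  constraints          = h-bonds
  constraint_algorithm = LINCS
  gen_vel              = yes
  gen_temp             = 300
  continuation         = no

  ; production (continued from the equilibrated state, e.g. via a .cpt file)
  constraints          = h-bonds
  constraint_algorithm = LINCS
  gen_vel              = no
  continuation         = yes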



to test the ligand movement inside the protein cavity in different set of
initial velocities to get the feelings of how ligand is interacting with
certain residues.
So, then should I use these different velocity generating seeds during the
warm up step?



That's an interesting question.  If you're warming from 0 -> 300 K, I don't know 
how grompp will generate velocities at 0 K, but regenerating velocities after 
warming seems to defeat the purpose of doing the warming at all, in my mind, 
since you're just going to re-randomize the entire system by doing so.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] GPU problem

2013-06-04 Thread Albert

On 06/04/2013 11:22 AM, Chandan Choudhury wrote:

Hi Albert,

I think using -nt flag (-nt=16) with mdrun would solve your problem.

Chandan



thank you so much.

it works well now.

ALBERT


Re: [gmx-users] Can we connect two boxes together for the simulation?

2013-06-04 Thread Dr. Vitaly Chaban
Nohow.

Numbers in GRO files serve exclusively decorative function.


Dr. Vitaly Chaban
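
A minimal shell sketch of the concatenation approach mentioned further down in
this thread (file names are placeholders; note that the atom-count line of the
combined file still has to be edited to the true total, and the box vectors are
taken from the second file):

  head -n -1 box1.gro  > combined.gro   # box1 without its final box-vector line (GNU head)
  tail -n +3 box2.gro >> combined.gro   # box2 without its title and atom-count lines
  # then set line 2 of combined.gro to the total number of atoms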





On Tue, Jun 4, 2013 at 4:12 PM, Bao Kai  wrote:

> Hi,
>
>
> I guess the renumbering of the atoms and molecules will be a problem,
> especially when the two boxes contain the same type of the molecules.
>
>
> How can we handle that?
>
>
> Thanks.
>
>
> Best,
>
> Kai
>
>
>
>
> editconf is a nice tool to create vacuum in your box. You can then insert
> one of your box into another box using cat box1.gro box2.gro, just remove
> the very last line in box1.gro.
>
> Dr. Vitaly Chaban
>
>
>
>
>
>
>
>
>
> On Tue, Jun 4, 2013 at 2:46 PM, Bao Kai  > wrote:
>
> > Hi, all,
> >
> > I want to do NPT simulations with different compositions first. Then I want
> > to connect the two boxes to continue the NPT simulation.
> >
> > I mean, after simulations, we get two boxes with different compositions.
> >
> > Can we do that with gromacs? or how can we do that?
> >
> > Thanks.
> >
> > Best,
> > Kai


Re: [gmx-users] High Initial generated Temp

2013-06-04 Thread tarak karmakar
Yeah!
It is indeed a silly point to generate a velocity distribution at 0 K
(Maxwell-Boltzmann will be in trouble).
After the warm-up, now let's say my protein is at 300 K; can't I generate a
velocity distribution at 300 K (using the keywords gen_vel = yes, gen_temp =
300, gen_seed = 173529) during my production run?

Thanks,
Tarak


On Tue, Jun 4, 2013 at 10:10 PM, Justin Lemkul  wrote:

>
>
> On 6/4/13 12:17 PM, tarak karmakar wrote:
>
>> Thanks Justin.
>> Sorry for not uploading the full .mdp. Here it is,
>>
>> ; 7.3.3 Run Control
>> integrator  = md
>> tinit   = 0
>> dt  = 0.001
>> nsteps  = 500
>> nstcomm = 1
>> comm_grps   = system
>> comm_mode   = linear
>> energygrps  = system
>>
>> ; 7.3.8 Output Control
>> nstxout = 5000
>> nstfout = 5000
>> nstlog  = 1000
>> nstenergy   = 1000
>> nstxtcout   = 1000
>> xtc_precision   = 1000
>> xtc_grps= System
>>
>> ; 7.3.9 Neighbor Searching
>> nstlist = 10
>> ns_type = grid
>> pbc = xyz
>> rlist   = 1.2
>> rlistlong   = 1.4
>>
>> ; 7.3.10 Electrostatics
>> coulombtype = PME
>> rcoulomb= 1.2
>> fourierspacing  = 0.12
>> pme_order   = 4
>> ewald_rtol  = 1e-5
>>
>> ; 7.3.11 VdW
>> vdwtype = switch
>> rvdw= 1.2
>> rvdw-switch = 1.0
>>
>> DispCorr= Ener
>>
>>
>> ; 7.3.14 Temperature Coupling
>> tcoupl  = nose-hoover
>> tc_grps = system
>> tau_t   = 1.0
>> ref_t   = 300
>>
>> ; 7.3.15 Pressure Coupling
>> pcoupl  = parrinello-rahman
>> pcoupltype  = isotropic
>> tau_p   = 1.0
>> compressibility = 4.5e-5
>> ref_p   = 1.0
>>
>> gen_vel = yes
>> gen_temp= 300
>> gen_seed= 93873959697
>>
>> ; 7.3.18 Bonds
>> constraints = h-bonds
>> constraint_algorithm= LINCS
>> continuation= yes
>> lincs_order = 4
>> lincs_iter  = 1
>> lincs_warnangle = 30
>>
>> Note: Using CHARMM27 force field
>>
>> I didn't use the 'continuation' part here.
>> In the heating run I didn't put any constraints but in the production MD,
>> I
>> do apply constraints to the covalent bonds involving hydrogens. I just
>> want
>>
>
> The introduction of constraints explains the observed behavior.  You ran
> an unconstrained simulation, then at step 0 of the constrained simulation,
> the constraints have to be satisfied, which may introduce sudden movement
> in atomic positions, hence large velocities and a high temperature.  The
> rule of thumb I always use - if you're going to use constraints during
> production simulations, use constraints during equilibration.  I have seen
> several instances where unconstrained equilibration causes constrained
> simulations to later fail.
>
>
>  to test the ligand movement inside the protein cavity in different set of
>> initial velocities to get the feelings of how ligand is interacting with
>> certain residues.
>> So, then should I use these different velocity generating seeds during the
>> warm up step?
>>
>>
> That's an interesting question.  If you're warming from 0 -> 300 K, I
> don't know how grompp will generate velocities at 0 K, but regenerating
> velocities after warming seems to defeat the purpose of doing the warming
> at all, in my mind, since you're just going to re-randomize the entire
> system by doing so.
>
>
> -Justin
>
> --
> ==**==
>
> Justin A. Lemkul, Ph.D.
> Research Scientist
> Department of Biochemistry
> Virginia Tech
> Blacksburg, VA
> jalemkul[at]vt.edu | (540) 231-9080
> http://www.bevanlab.biochem.**vt.edu/Pages/Personal/justin
>
> ==**==

[gmx-users] problems with GROMACS 4.6.2

2013-06-04 Thread Mark Abraham
Hi GROMACS users,

It's come to our attention that some changes we made in 4.6.2 to the way we
detect low-level hardware attributes sometimes affects mdrun performance.
Specifically:
* using real MPI will likely lead to lower performance than 4.6.1, if
GROMACS managing the setting of thread affinities would have been a good
thing
* on Mac OS, mdrun might hang completely
If the simulation runs, then it's correct :-)

We've identified fixes and will release 4.6.3 in the next few days. In the
meantime, we suggest not bothering to install 4.6.2.

Apologies,

Mark


Re: [gmx-users] GPU problem

2013-06-04 Thread Szilárd Páll
"-nt" is mostly a backward compatibility option and sets the total
number of threads (per rank). Instead, you should set both "-ntmpi"
(or -np with MPI) and "-ntomp". However, note that unless a single
mdrun uses *all* cores/hardware threads on a node, it won't pin the
threads to cores. Failing to pin threads will lead to considerable
performance degradation; I just tried, and depending on how (un)lucky the
thread placement and migration is, I get 1.5-2x performance
degradation when running two mdrun-s on a single dual-socket node
without pinning threads.
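
As a concrete sketch for the original four-GPU, 32-core workstation (the exact
split of cores per job is an assumption; adjust to your hardware):

  mdrun -s md.tpr -gpu_id 01 -ntmpi 2 -ntomp 8 -pin on -pinoffset 0
  mdrun -s md.tpr -gpu_id 23 -ntmpi 2 -ntomp 8 -pin on -pinoffset 16

Each job then runs 2 thread-MPI ranks (one per GPU) with 8 OpenMP threads each,
and the two jobs are pinned to disjoint sets of 16 cores.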

My advice is (yet again) that you should check the
http://www.gromacs.org/Documentation/Acceleration_and_parallelization
wiki page, in particular the section on how to run simulations. If
things are not clear, please ask for clarification - input and
constructive criticism should help us improve the wiki.

We have been patiently pointing everyone to the wiki, so asking
without reading up first is neither productive nor really fair.

Cheers,
--
Szilárd


On Tue, Jun 4, 2013 at 11:22 AM, Chandan Choudhury  wrote:
> Hi Albert,
>
> I think using -nt flag (-nt=16) with mdrun would solve your problem.
>
> Chandan
>
>
> --
> Chandan kumar Choudhury
> NCL, Pune
> INDIA
>
>
> On Tue, Jun 4, 2013 at 12:56 PM, Albert  wrote:
>
>> Dear:
>>
>>  I've got four GPU in one workstation. I am trying to run two GPU job with
>> command:
>>
>> mdrun -s md.tpr -gpu_id 01
>> mdrun -s md.tpr -gpu_id 23
>>
>> there are 32 CPU in this workstation. I found that each job trying to use
>> the whole CPU, and there are 64 sub job when these two GPU mdrun submitted.
>>  Moreover, one of the job stopped after short of running, probably because
>> of the CPU issue.
>>
>> I am just wondering, how can we distribute CPU when we run two GPU job in
>> a single workstation?
>>
>> thank you very much
>>
>> best
>> Albert


Re: [gmx-users] problems with GROMACS 4.6.2

2013-06-04 Thread Szilárd Páll
Just a few minor details:

- You can set the affinities yourself through the job scheduler which
should give nearly identical results compared to the mdrun internal
affinity if you simply assign cores to mdrun threads in a sequential
order (or with an #physical cores stride if you want to use
HyperThreading);

- At low parallelization and without OpenMP (i.e. -ntomp 1) you may
not notice much regression even though threads won't be pinned.

--
Szilárd


On Tue, Jun 4, 2013 at 7:05 PM, Mark Abraham  wrote:
> Hi GROMACS users,
>
> It's come to our attention that some changes we made in 4.6.2 to the way we
> detect low-level hardware attributes sometimes affects mdrun performance.
> Specifically:
> * using real MPI will likely lead to lower performance than 4.6.1, if
> GROMACS managing the setting of thread affinities would have been a good
> thing
> * on Mac OS, mdrun might hang completely
> If the simulation runs, then it's correct :-)
>
> We've identified fixes and will release 4.6.3 in the next few days. In the
> meantime, we suggest not bothering to install 4.6.2.
>
> Apologies,
>
> Mark


[gmx-users] Re: oxidized lipid - Peroxidated lipid

2013-06-04 Thread dariush
Dear All,
I need to use oxidized lipids in my system.
Any suggestion for force field that I can use would be appreciated.

Thanks,
Dariush 





Re: [gmx-users] ngmx not installed in gmx4.6.1

2013-06-04 Thread Dr. Vitaly Chaban
I do not know about the newest versions, but in older ones ngmx was missed
when you did not have the lesstif library installed.


Dr. Vitaly Chaban







On Tue, Jun 4, 2013 at 5:55 PM, Chandan Choudhury  wrote:

> Dear gmx users,
>
> I had installed gromacs 4.6.1 using cmake. All the binaries are installed,
> but surprisingly I do not find ngmx executable. Can anyone guide me how do
> I install ngmx using cmake.
>
> Chandan
> --
> Chandan kumar Choudhury
> NCL, Pune
> INDIA


RE:[gmx-users] GPU problem

2013-06-04 Thread lloyd riggs
 

Dear All or anyone,

 

A stupid question.  Is there a script anyone knows of to convert a 53a6ff .top that redirects to the gromacs/top directory into something like a ligand .itp with explicit parameters?  This would be useful at the moment.  Example:

 

[bond]

    6 7 2    gb_5

 

to

 

[bonds]

; ai  aj  fu    c0, c1, ...

  6  7   2    0.139  1080.0    0.139  1080.0 ;   C  CH  

 

for everything (a protein/DNA complex), inclusive of angles and dihedrals?

 

I've been playing with some of the gromacs user supplied files, but nothing yet.

 

Stephan Watkins

Re: [gmx-users] Performance of Gromacs-4.6.1 on BlueGene/Q

2013-06-04 Thread Mark Abraham
On Tue, Jun 4, 2013 at 5:48 PM, Mark Abraham wrote:

>
>
>
> On Tue, Jun 4, 2013 at 4:50 PM, Jianguo Li  wrote:
>
>>
>>
>> Thank you, Mark and Xavier.
>>
>> The thing is that the cluster manager set the
>> minimum number of cores of each jobs in Bluegene/Q is 128, so I can not
>> use 64 cores. But according to the performance, 512 cores in Bluegene
>> roughly equivalent to 64 cores in another cluster. Since there are 16
>> cores in each computational cards, the total number of cores I used in
>> Bluegene//Q is num_cards times 16. So in my test, I acutally run
>> simulations using different number of cards, from 8 to 256.
>>
>> The following is the script I submitted to bluegene using 128
>> computational cards:
>>
>> #!/bin/sh
>> #SBATCH --nodes=128
>> # set Use 128 Compute Cards ( 1x Compute Card = 16 cores, 128x16 = 2048
>> cores )
>> #SBATCH --job-name="128x16x2"
>> # set Job name
>> #SBATCH -output="first-job-sample"
>> # set
>> Output file
>> #SBATCH --partition="training"
>>
>>
>> srun
>> --ntasks-per-node=32 --overcommit
>> /scratch/home/biilijg/package/gromacs-461/bin/mdrun -s box_md1.tpr -c
>> box_md1.gro -x box_md1.xtc -g md1.log >& job_md1
>>
>> Since bluegene/q accepts up to 4 tasks each
>> core, I used 32 mpi tasks for each card (2 task per core). I tried
>> --ntasks-per-node=64, but the simulations get much slower.
>> Is there a optimized number for --ntasks-per-node?
>>
>
> The threads per core thing will surely be useless for GROMACS. Even our
> unoptimized kernels will saturate the available flops. There simply is
> nothing to overlap, so you net lose from the extra overhead. You should aim
> at 16 threads per node, one for each A2 core. Each of those 16 need not be
> an MPI process, however.
>
> There's some general background info here
> http://www.gromacs.org/Documentation/Acceleration_and_parallelization.
> Relevant to BG/Q is that you will be using real MPI and should use OpenMP
> and the Verlet kernels (see
> http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Multi-level_parallelization.3a_MPI.2fthread-MPI_.2b_OpenMP).
> Finding the right balance of OpenMP threads per MPI process is hardware-
> and problem-dependent, so you will need to experiment there.
>
>
Thought I'd clarify further. A BG/Q node has 16 A2 cores. Some mix of MPI
and OpenMP threads across those will be right for GROMACS. Each core is
capable of running up to four "hardware threads." The processor in each
core can only issue two instructions per cycle, one flop and one non-flop,
but only to two different hardware threads. There is a theoretical speedup
from using more than one hardware thread, since you get to take advantage
of more instruction-issue opportunities. But doing so with more MPI
processes will incur other overhead (e.g. from PME global communication, as
well as pure-MPI overhead). Even if you can map the extra hardware threads
to OpenMP threads, you will only be able to get some fraction of the
speedup depending on available registers and bandwidth from cache (and you
still pay some extra overhead for the OpenMP). How big these effects are
depend whether you are running PME, and which of the kernels you are
actually executing. So it might be worth investigating 2 hardware threads
per core using OpenMP, but don't expect to want to write home about the
results! :-)
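
As one possible starting point (a sketch only; the right MPI/OpenMP split has to
be benchmarked, as discussed above), the 16 A2 cores per node could be mapped to
4 MPI ranks with 4 OpenMP threads each:

  export OMP_NUM_THREADS=4
  srun --ntasks-per-node=4 mdrun_mpi -ntomp 4 -s box_md1.tpr -c box_md1.gro -x box_md1.xtc -g md1.log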

Cheers,

Mark


Re: [gmx-users] High Initial generated Temp

2013-06-04 Thread Justin Lemkul



On 6/4/13 12:51 PM, tarak karmakar wrote:

Yeah!
It is indeed a silly point to generate a velocity distribution at 0 K. (
Maxwell-Boltzmann will be in trouble)
After the warm up, now let say my protein is in 300 K, can't I generate a
velocity distribution at 300 k (using the keyword gen_vel = yes, gen_temp =
300 K gen_seed = 173529 ) during my production run?



You can generate velocities whenever you like, but you'll have to allow for some 
time to equilibrate, so what you're calling "production" isn't entirely 
production because it's no longer equilibrated in any real sense.  The heating 
phase may help to optimize the initial geometry, but regenerating velocities may 
screw everything up if something becomes unstable.  Assuming restraints are off 
during "production," then you can screw up the geometry of your system if 
something gets an unpleasant kick from the new velocities.


-Justin


On Tue, Jun 4, 2013 at 10:10 PM, Justin Lemkul  wrote:




On 6/4/13 12:17 PM, tarak karmakar wrote:


Thanks Justin.
Sorry for not uploading the full .mdp. Here it is,

; 7.3.3 Run Control
integrator  = md
tinit   = 0
dt  = 0.001
nsteps  = 500
nstcomm = 1
comm_grps   = system
comm_mode   = linear
energygrps  = system

; 7.3.8 Output Control
nstxout = 5000
nstfout = 5000
nstlog  = 1000
nstenergy   = 1000
nstxtcout   = 1000
xtc_precision   = 1000
xtc_grps= System

; 7.3.9 Neighbor Searching
nstlist = 10
ns_type = grid
pbc = xyz
rlist   = 1.2
rlistlong   = 1.4

; 7.3.10 Electrostatics
coulombtype = PME
rcoulomb= 1.2
fourierspacing  = 0.12
pme_order   = 4
ewald_rtol  = 1e-5

; 7.3.11 VdW
vdwtype = switch
rvdw= 1.2
rvdw-switch = 1.0

DispCorr= Ener


; 7.3.14 Temperature Coupling
tcoupl  = nose-hoover
tc_grps = system
tau_t   = 1.0
ref_t   = 300

; 7.3.15 Pressure Coupling
pcoupl  = parrinello-rahman
pcoupltype  = isotropic
tau_p   = 1.0
compressibility = 4.5e-5
ref_p   = 1.0

gen_vel = yes
gen_temp= 300
gen_seed= 93873959697

; 7.3.18 Bonds
constraints = h-bonds
constraint_algorithm= LINCS
continuation= yes
lincs_order = 4
lincs_iter  = 1
lincs_warnangle = 30

Note: Using CHARMM27 force field

I didn't use the 'continuation' part here.
In the heating run I didn't put any constraints but in the production MD,
I
do apply constraints to the covalent bonds involving hydrogens. I just
want



The introduction of constraints explains the observed behavior.  You ran
an unconstrained simulation, then at step 0 of the constrained simulation,
the constraints have to be satisfied, which may introduce sudden movement
in atomic positions, hence large velocities and a high temperature.  The
rule of thumb I always use - if you're going to use constraints during
production simulations, use constraints during equilibration.  I have seen
several instances where unconstrained equilibration causes constrained
simulations to later fail.


  to test the ligand movement inside the protein cavity in different set of

initial velocities to get the feelings of how ligand is interacting with
certain residues.
So, then should I use these different velocity generating seeds during the
warm up step?



That's an interesting question.  If you're warming from 0 -> 300 K, I
don't know how grompp will generate velocities at 0 K, but regenerating
velocities after warming seems to defeat the purpose of doing the warming
at all, in my mind, since you're just going to re-randomize the entire
system by doing so.


-Justin

--
==**==

Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.**vt.edu/Pages/Personal/justin

==**==
--

Re: [gmx-users] Re: oxidized lipid - Peroxidated lipid

2013-06-04 Thread Justin Lemkul


On 6/4/13 3:10 PM, dariush wrote:

Dear All,
I need to use oxidized lipids in my system.
Any suggestion for force field that I can use would be appreciated.



Googling turns up useful stuff like 
http://www.sciencedirect.com/science/article/pii/S0006349507716752.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] GPU problem

2013-06-04 Thread Justin Lemkul



On 6/4/13 3:52 PM, lloyd riggs wrote:

Dear All or anyone,
A stupid question.  Is there an script anyone knows of to convert a 53a6ff from
.top redirects to the gromacs/top directory to something like a ligand .itp?
This is usefull at the moment.  Example:
[bond]
 6 7 2gb_5
to
[bonds]
; ai  aj  fuc0, c1, ...
   6  7   20.139  1080.00.139  1080.0 ;   C  CH
for everything (a protein/DNA complex) inclusive of angles, dihedrials?
Ive been playing with some of the gromacs user supplied files, but nothing yet.


Sounds like something grompp -pp should take care of.
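
For reference, a minimal sketch of such a call (file names are placeholders) would be

  grompp -f md.mdp -c conf.gro -p topol.top -pp processed.top -o topol.tpr

where processed.top should contain the fully expanded topology, with #includes resolved and macros such as gb_5 replaced by their numerical parameters.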

-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Aw: Re: [gmx-users] GPU problem

2013-06-04 Thread lloyd riggs
 

Thanks, thats exact what I was looking for.

 

Stephan


Gesendet: Dienstag, 04. Juni 2013 um 22:28 Uhr
Von: "Justin Lemkul" 
An: "Discussion list for GROMACS users" 
Betreff: Re: [gmx-users] GPU problem



On 6/4/13 3:52 PM, lloyd riggs wrote:
> Dear All or anyone,
> A stupid question. Is there an script anyone knows of to convert a 53a6ff from
> .top redirects to the gromacs/top directory to something like a ligand .itp?
> This is usefull at the moment. Example:
> [bond]
> 6 7 2 gb_5
> to
> [bonds]
> ; ai aj fu c0, c1, ...
> 6 7 2 0.139 1080.0 0.139 1080.0 ; C CH
> for everything (a protein/DNA complex) inclusive of angles, dihedrials?
> Ive been playing with some of the gromacs user supplied files, but nothing yet.

Sounds like something grompp -pp should take care of.

-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin



[gmx-users] Re: pdb2gmx

2013-06-04 Thread cyberjhon
Hi Valentina

The first lines of your rtp files should be something like this 
where you specify the type (function) of the interaction

[ bondedtypes ] 
; bonds  angles  dihedrals  impropers all_dihedrals nrexcl HH14 RemoveDih 
 1   5  921   3  1 0 

John





Re: [gmx-users] pdb2gmx

2013-06-04 Thread Justin Lemkul



On 6/4/13 12:05 PM, Valentina Erastova wrote:

Hello all,

I am converting a .pdb into .gro and .top and assigning ClayFF forcefield, that 
is not included in the gromacs.

It's a strange FF as doesn't have ones between all of the atoms, just O-H but 
it has an angle between a metal and OH, therefore I wore out molecule.rtp (see 
below) that included the angles, BUT wen I convert to the .top I only get 
bonds, but not angles (see below). Is there something I am missing or does 
gromacs not assign angles if no bonds are assigned?



That's probably correct, given that normally pdb2gmx will build angles based on 
bonds that are present.  Most "normal" usage will never have angles without 
bonds, so for now, you will have to write the angles in the topology yourself. 
I will file a bug report with a (much simpler) test case so we don't lose track 
of this issue.  Given that it affects so few simulations, it may be a while 
before it is resolved, but it may well be worth doing simply for flexibility 
reasons.


-Justin


Thank you!



molecule.rtp
[ B31 ] ;# ldh21
   [ atoms ]
  ; atomname atomtype charge chargegroup
; charges were taken from paper by Kuang Wu and J R Schmidt; J Phys Chem C 
(2012)
; and edited to make it fir LDH31 model and produce +1
o1  ohs  -1.113   0
h1  ho 0.464   0
o2  ohs  -1.113   0
h2  ho 0.464   0
o3  ohs  -1.113   0
h3  ho 0.464   0
o4  ohs  -1.113   0
h4  ho 0.464   0
mg1 mgo   1.403   0
al1 ao1.983   0
o5  ohs  -1.113   0
h5  ho 0.464   0
o6  ohs  -1.113   0
h6  ho 0.464   0
o7  ohs  -1.113   0
h7  ho 0.464   0
o8  ohs   -1.113   0
h8  ho 0.464   0
mg2 mgo   1.403   0
mg3 mgo   1.403   0
   [ bonds ]
  ; atom1 atom2   parametersindex
o1 h1 ohs
o2 h2 ohs
o3 h3 ohs
o4 h4 ohs
o5 h5 ohs
o6 h6 ohs
o7 h7 ohs
o8 h8 ohs
   [ angles ]
  ;  aiajak   gromos type
mg1   o1   h1   moh
mg1   o8   h8   moh
mg1   o4   h4   moh
mg1   o6   h6   moh
mg1   o3   h3   moh
mg1   o5   h5   moh
al1   o3   h3   moh
al1   o5   h5   moh
al1   o2   h2   moh
al1   o7   h7   moh
al1   o4   h4   moh
al1   o6   h6   moh
mg2   o8   h8   moh
mg2   o2   h2   moh
mg2   o5   h5   moh
mg2   o4   h4   moh
mg2   o7   h7   moh
mg2   o1   h1   moh
mg3   o7   h7   moh
mg3   o1   h1   moh
mg3   o6   h6   moh
mg3   o3   h3   moh
mg3   o8   h8   moh
mg3   o2   h2   moh



ldh31.top
; Include forcefield parameters
#include "./ClayFF.ff/forcefield.itp"

[ moleculetype ]
; Namenrexcl
Other   3

[ atoms ]
;   nr   type  resnr residue  atom   cgnr charge   mass  typeB
chargeB  massB
; residue   1 B31 rtp B31  q +1.0
  1ohs  1B31 o1  1 -1.113 16   ; qtot 
-1.113
  2 ho  1B31 h1  1  0.464  1.008   ; qtot 
-0.649
  3ohs  1B31 o2  1 -1.113 16   ; qtot 
-1.762
  4 ho  1B31 h2  1  0.464  1.008   ; qtot 
-1.298
  5ohs  1B31 o3  1 -1.113 16   ; qtot 
-2.411
  6 ho  1B31 h3  1  0.464  1.008   ; qtot 
-1.947
  7ohs  1B31 o4  1 -1.113 16   ; qtot 
-3.06
  8 ho  1B31 h4  1  0.464  1.008   ; qtot 
-2.596
  9mgo  1B31mg1  1  1.403  24.31   ; qtot 
-1.193
 10 ao  1B31al1  1  1.983  26.98   ; qtot 
0.79
 11ohs  1B31 o5  1 -1.113 16   ; qtot 
-0.323
 12 ho  1B31 h5  1  0.464  1.008   ; qtot 
0.141
 13ohs  1B31 o6  1 -1.113 16   ; qtot 
-0.972
 14 ho  1B31 h6  1  0.464  1.008   ; qtot 
-0.508
 15ohs  1B31 o7  1 -1.113 16   ; qtot 
-1.621
 16 ho  1B31 h7  1  0.464  1.008   ; qtot 
-1.157
 17ohs  1B31 o8  1 -1.113 16   ; qtot 
-2.27
 18 ho  1B31 h8  1  0.464  1.008   ; qtot 
-1.806
 19mgo  1B31mg2  1  1.403  24.31   ; qtot 
-0.403
 20mgo  1B31mg3  1  1.403  24.31   ; qtot 1

[ bonds ]
;  aiaj functc0c1c2c3
 1 2 1ohs
 3 4 1ohs
 5 6 1ohs
 7 8 1ohs
1112 1ohs
1314 1ohs
1516 1ohs
1718 1ohs

; Include Position restraint file
#ifdef POSRES
#include "posre.itp"
#endif

[ system ]
; Name
God Rules Over Mankind, Animals, Cosmos and Such

[ molecules ]
; Compound#mols
Other   1

[gmx-users] About coulmb & Vanderrwalls cutoff

2013-06-04 Thread vidhya sankar
Dear Justin, thank you for your previous reply.
As you mailed me, the default values for van der Waals and electrostatics in 
GROMOS96 53A6 (with the PME option) are:
ns_type     = grid
nstlist     = 5
rlist       = 0.9
rcoulomb    = 0.9
rvdw        = 1.4
But in your umbrella sampling, where you are using the GROMOS96 53A6 FF, you 
quoted different parameters in your em.mdp file:


nstlist = 1 ; Frequency to update the neighbor list and long range 
forces
ns_type = grid  ; Method to determine neighbor list (simple, grid)
rlist   = 1.4   ; Cut-off for making neighbor list (short range forces)
coulombtype = PME   ; Treatment of long range electrostatic interactions
rcoulomb= 1.4   ; Short-range electrostatic cut-off
rvdw= 1.4   ; Short-range Van der Waals cut-off
pbc = xyz 

Can I take 1.4 instead of the 0.9 value?
Otherwise, is there any specification according to the system and box size?
Thanks in advance


Re: [gmx-users] About coulmb & Vanderrwalls cutoff

2013-06-04 Thread Justin Lemkul



On 6/4/13 9:48 PM, vidhya sankar wrote:

Dear Justin, thank you for your previous reply.

As you mailed me, the default values for van der Waals and electrostatics in
GROMOS96 53A6 (with the PME option) are:
ns_type = grid
  nstlist = 5
  rlist   = 0.9
  rcoulomb= 0.9
  rvdw= 1.4
But in your umbrella sampling, where you are using the GROMOS96 53A6 FF, you
quoted different parameters in your em.mdp file:


nstlist = 1 ; Frequency to update the neighbor list and long range 
forces
ns_type = grid  ; Method to determine neighbor list (simple, grid)
rlist   = 1.4   ; Cut-off for making neighbor list (short range forces)
coulombtype = PME   ; Treatment of long range electrostatic interactions
rcoulomb= 1.4   ; Short-range electrostatic cut-off
rvdw= 1.4   ; Short-range Van der Waals cut-off
pbc = xyz

Can I take 1.4 instead of the 0.9 value?


The value of rcoulomb is a bit more flexible when using PME, but unless you have 
a basis for changing it (and having demonstrated that the change is harmless), 
you shouldn't deviate from the parameters given for the force field.
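
If one did want to demonstrate that, the usual approach is two otherwise-identical 
runs that differ only in the cut-offs under test, followed by a comparison of the 
energies and observables of interest. A sketch only, with hypothetical file names, 
where the two .mdp files differ solely in rlist/rcoulomb (0.9 vs 1.4):

grompp -f md_rc09.mdp -c conf.gro -p topol.top -o md_rc09.tpr
grompp -f md_rc14.mdp -c conf.gro -p topol.top -o md_rc14.tpr
mdrun -deffnm md_rc09
mdrun -deffnm md_rc14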



Otherwise, is there any specification according to the system and box size?


The contents of the system and size of the box do not dictate the parameters 
that are set; the force field does.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] About coulmb & Vanderrwalls cutoff

2013-06-04 Thread vidhya sankar
Dear Justin, thank you for your previous reply.
How can I check that using the value 1.4 is harmless to my system?
Can I check it (graphically) through the g_energy output (potential.xvg)?
Thanks in advance
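
A minimal comparison along those lines, sketched with the GROMACS 4.x tools and 
hypothetical file names (choose "Potential" at the interactive prompt):

g_energy -f md_rc09.edr -o potential_rc09.xvg
g_energy -f md_rc14.edr -o potential_rc14.xvg

The two curves (and ideally other observables as well) can then be plotted together, 
e.g. with xmgrace, to see whether the change in cut-off has any visible effect.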


RE: [gmx-users] Difference between the electrostatic treatments PME/Cut-offs and Reaction Field

2013-06-04 Thread Dallas Warren
"Problem" solved.

Just as well I held off reporting the "issue" in full until I had explored 
everything; I would have ended up looking a bit stupid ;-)  But asking the initial 
question helped direct my thinking towards the reason.

The issue was the difference in the van der Waals cut-offs between the PME/Cut-off 
and Reaction Field methods I was using, 0.9/0.9/0.9 vs 0.8/1.4/1.4.  The 
difference was hidden in the results until dispersion correction was turned off, 
which was what was confusing me and finally led to realizing what was 
going on.  My forehead hurt from slapping it after coming to the realization ...

It should also be noted (and is obvious now that I have actually looked into it) that 
using dispersion correction results in both the latent heat of vapourisation 
and the density of the alkanes being overestimated (for both Cut-off and Reaction 
Field, and by the same amount).
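
For anyone reproducing this comparison, the quickest check is to make the van der 
Waals settings identical in both .mdp files and leave the long-range dispersion 
correction off, so that any remaining cut-off difference is actually visible. A 
sketch only:

; identical in the PME/Cut-off and the Reaction-Field run
rlist     = 0.9
rcoulomb  = 0.9
rvdw      = 0.9
DispCorr  = no    ; dispersion correction had masked the rvdw mismatch

Mark's mdrun -rerun suggestion (quoted below) is also a convenient way to evaluate 
both setups on exactly the same configurations, e.g. (hypothetical file names):

mdrun -s rf.tpr -rerun pme_traj.trr -e rf_rerun.edr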

Catch ya,

Dr. Dallas Warren
Drug Discovery Biology
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
+61 3 9903 9304
-
When the only tool you own is a hammer, every problem begins to resemble a 
nail. 


> -Original Message-
> From: gmx-users-boun...@gromacs.org [mailto:gmx-users-
> boun...@gromacs.org] On Behalf Of Mark Abraham
> Sent: Thursday, 30 May 2013 9:32 PM
> To: Vitaly Chaban; Discussion list for GROMACS users
> Subject: Re: [gmx-users] Difference between the electrostatic
> treatments PME/Cut-offs and Reaction Field
> 
> Things should be identical - any quantity computed from a zero charge
> has
> to be zero :-).
> 
> Mark
> 
> On Thu, May 30, 2013 at 1:26 PM, Dr. Vitaly Chaban
> wrote:
> 
> > Hmmm...
> >
> > And what does the observed difference look like, numerically?
> >
> >
> >
> >
> >
> > On Thu, May 30, 2013 at 10:14 AM, Mark Abraham
>  > >wrote:
> >
> > > No charges = no problem. You can trivially test this yourself with
> mdrun
> > > -rerun ;-)  Manual 4.1.4 talks about what RF is doing.
> > >
> > > Mark
> > >
> > >
> > > On Thu, May 30, 2013 at 6:38 AM, Dallas Warren
>  > > >wrote:
> > >
> > > > In a system that has no charges, should we observe a difference
> between
> > > > simulations using PME/Cut-offs or Reaction Field?
> > > >
> > > > >From my understanding there should not be, since there are no
> charges
> > > > which treatment you use shouldn't' make a difference.
> > > >
> > > > However, it does and I am trying to work out why.
> > > >
> > > > Any suggestions on the reason?
> > > >
> > > > What is it that Reaction Field is doing, does it influence
> anything
> > other
> > > > than long range charge interactions?
> > > >
> > > > Catch ya,
> > > >
> > > > Dr. Dallas Warren
> > > > Drug Discovery Biology
> > > > Monash Institute of Pharmaceutical Sciences, Monash University
> > > > 381 Royal Parade, Parkville VIC 3052
> > > > dallas.war...@monash.edu
> > > > +61 3 9903 9304
> > > > -
> > > > When the only tool you own is a hammer, every problem begins to
> > resemble
> > > a
> > > > nail.
> > > > --
> > > > gmx-users mailing listgmx-users@gromacs.org
> > > > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > > > * Please search the archive at
> > > > http://www.gromacs.org/Support/Mailing_Lists/Search before
> posting!
> > > > * Please don't post (un)subscribe requests to the list. Use the
> > > > www interface or send it to gmx-users-requ...@gromacs.org.
> > > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > > >
> > > --
> > > gmx-users mailing listgmx-users@gromacs.org
> > > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > > * Please search the archive at
> > > http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > > * Please don't post (un)subscribe requests to the list. Use the
> > > www interface or send it to gmx-users-requ...@gromacs.org.
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >
> > --
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > * Please don't post (un)subscribe requests to the list. Use the
> > www interface or send it to gmx-users-requ...@gromacs.org.
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

[gmx-users] restraints on water oxygen atoms

2013-06-04 Thread Shima Arasteh
Dear gmx users,

I have a POPC/peptide/water/ions system. I ran NVT and then NPT on my system. 
I'd prefer to run the equilibration steps with position restraints on the water 
oxygen atoms, because the water molecules penetrate the lipid bilayer during 
equilibration and I don't want that to happen.
I tried the NVT with position restraints on water by adding -DPOSRES_WATER to 
the define line in nvt.mdp and editing the top file to change the force constants to 10.

#ifdef POSRES_WATER
; Position restraint for each water oxygen
[ position_restraints ]
;  i funct   fcx    fcy    fcz
   1    1   10   10   10
#endif

This change gave a better result.
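
A related variant, sketched here for reference rather than taken from the thread: 
restraining only the z component keeps the water out of the bilayer while still 
allowing lateral motion (atom 1 is assumed to be the water oxygen in the solvent 
topology):

#ifdef POSRES_WATER
; restrain each water oxygen along z only
[ position_restraints ]
;  i funct   fcx    fcy    fcz
   1    1      0      0   1000
#endif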

Now I have tried to apply the same restraint in the NPT step, but GROMACS stops with 
a fatal error:
A charge group moved too far between two domain decomposition steps.


npt.mdp file:
;NPT equlibration Dimer-POPC-Water - CHARMM36
define        = -DPOSRES_LIPID -DPOSRES -DPOSRES_WATER    ; P headgroups of 
POPC and Protein is position restrained (uses the posres.itp file information)
integrator  = md    ; leap-frog integrator
nsteps  =100 ; 1 * 100 = 1000 ps
dt  = 0.001 ; 1 fs
; Output control
nstxout = 2000   ; save coordinates every 0.2 ps
nstvout = 1000   ; save velocities every 0.2 ps
nstenergy   = 1000   ; save energies every 0.2 ps
nstlog  = 1000   ; update log file every 0.2 ps

continuation    = yes    ; first dynamics run
constraint_algorithm = lincs    ; holonomic constraints
constraints = h-bonds ; all bonds (even heavy atom-H bonds) constrained
lincs_iter  = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
; Neighborsearching
ns_type = grid  ; search neighboring grid cells
nstlist = 5 ; 10 fs
rlist   = 1.2   ; short-range neighborlist cutoff (in nm)
rlistlong   = 1.4
rcoulomb    = 1.2   ; short-range electrostatic cutoff (in nm)
rvdw    = 1.2   ; short-range van der Waals cutoff (in nm)
vdwtype = switch
rvdw_switch = 1.0
; Electrostatics
coulombtype = PME   ; Particle Mesh Ewald for long-range 
electrostatics
pme_order   = 4 ; cubic interpolation
fourierspacing  = 0.16  ; grid spacing for FFT
; Temperature coupling is on
tcoupl  = Nose-Hoover ; modified Berendsen thermostat
tc-grps = Protein_POPC Water_Ces_CL        ; two coupling groups - more 
accurate
tau_t   = 0.5   0.5       ; time constant, in ps
ref_t   = 310   310    ; reference temperature, one for each group, in K
pcoupl  = Berendsen    ; no pressure coupling in NVT
pcoupltype  = semiisotropic
tau_p   = 4
ref_p   = 1.01325 1.01325
compressibility = 4.5e-5 4.5e-5

; Periodic boundary conditions
pbc = xyz   ; 3-D PBC
; Velocity generation
gen_vel = no   ; assign velocities from Maxwell distribution
;gen_temp    = 310   ; temperature for Maxwell distribution
;gen_seed    = -1    ; generate a random seed
nstcomm = 1
comm_mode   = Linear
comm_grps   = Protein_POPC Water_Ces_CL    
refcoord_scaling    = com



I am wondering how it is possible to prevent the water molecules from penetrating 
the bilayer during equilibration, and how I can apply the restraint in the NPT step 
as well as the NVT step. Would you please help me with this issue?




Sincerely,
Shima 


Re: [gmx-users] ngmx not installed in gmx4.6.1

2013-06-04 Thread Chandan Choudhury
Thanks David. It worked.

Chandan


--
Chandan kumar Choudhury
NCL, Pune
INDIA
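
A fix of roughly this shape should work with the lesstif tip quoted below; the 
package name and the GMX_X11 CMake option are assumptions and may differ with the 
distribution and GROMACS version:

# install a Motif/lesstif development package, e.g. on Debian/Ubuntu
sudo apt-get install lesstif2-dev libxt-dev
# reconfigure and rebuild so that ngmx is built
cmake .. -DGMX_X11=ON
make && make install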


On Wed, Jun 5, 2013 at 1:02 AM, Dr. Vitaly Chaban wrote:

> I do not know about the newest versions, but in older ones ngmx was missed
> when you did not have the lesstif library installed.
>
>
> Dr. Vitaly Chaban
>
>
>
>
>
>
>
> On Tue, Jun 4, 2013 at 5:55 PM, Chandan Choudhury 
> wrote:
>
> > Dear gmx users,
> >
> > I had installed gromacs 4.6.1 using cmake. All the binaries are
> installed,
> > but surprisingly I do not find ngmx executable. Can anyone guide me how
> do
> > I install ngmx using cmake.
> >
> > Chandan
> > --
> > Chandan kumar Choudhury
> > NCL, Pune
> > INDIA
> > --
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > * Please don't post (un)subscribe requests to the list. Use the
> > www interface or send it to gmx-users-requ...@gromacs.org.
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>