[gmx-users] grompp for minimization: note & warning

2013-09-16 Thread shahab shariati
Dear Justin

Many thanks for your reply.

You said "I suspect your Gromacs version is somewhat outdated, as recent
versions account for periodicity when
making this check". I used 4.5.5 version of gromacs. What version of
gromacs is more appropriate for my case.

Based on your suggestion, I used the -maxwarn option for grompp. Then I used
the -nt 1 option for mdrun, but this step takes a long time and ends with the
following output:


Steepest Descents:
   Tolerance (Fmax)   =  1.0e+03
   Number of steps=5
Warning: 1-4 interaction between 434 and 407 at distance 3.023 which is
larger than the 1-4 table size 2.200 nm
These are ignored for the rest of the simulation
   This usually means your system is exploding,

if not, you should increase table-extension in your mdp file
or with user tables increase the table size

step 23: Water molecule starting at atom 10613 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.
Wrote pdb files with previous and current coordinates

Stepsize too small, or no change in energy.
Converged to machine precision,
but not to the requested precision Fmax < 1000

Double precision normally gives you higher accuracy.
You might need to increase your constraint accuracy, or turn
off constraints alltogether (set constraints = none in mdp file)

writing lowest energy coordinates.

Steepest Descents converged to machine precision in 2122 steps,
but did not reach the requested Fmax < 1000.
Potential Energy  =  1.4310875e+05
Maximum force =  2.7179752e+04 on atom 5271
Norm of force =  4.0253470e+02
--
my em.mdp file is as follows:

integrator   = steep    ; Algorithm (steep = steepest descent minimization)
emtol        = 1000.0   ; Stop minimization when the maximum force < 1000.0 kJ/mol/nm
emstep       = 0.01     ; Energy step size
nsteps       = 5        ; Maximum number of (minimization) steps to perform

; Parameters describing how to find the neighbors of each atom
nstlist      = 1        ; Frequency to update the neighbor list and long range forces
ns_type      = grid     ; Method to determine neighbor list (simple, grid)
rlist        = 1.2      ; Cut-off for making neighbor list (short range forces)
coulombtype  = PME      ; Treatment of long range electrostatic interactions
rcoulomb     = 1.2      ; Short-range electrostatic cut-off
rvdw         = 1.2      ; Short-range Van der Waals cut-off
pbc          = xyz      ; Periodic Boundary Conditions
--
gro, edr, trr and log files were created.

I increased emstep from 0.01 to 0.1 and used constraints = none in the mdp
file, but the results are the same.

Is this minimization correct?
Can I use the gro file produced by this minimization for the next step
(equilibration)?

I am a beginner in Gromacs; please help me resolve this problem.

Best wishes


[gmx-users] question about installation parameters

2013-09-16 Thread mjyang
Dear GMX users,


 I have a question about the combination of the installation parameters. I
compiled the FFTW library with --enable-sse2 and configured GROMACS with
"cmake .. -DGMX_CPU_ACCELERATION=SSE4.1". Is it OK to use such a combination?

Many thanks.

Mingjun


Re: [gmx-users] question about installation parameters

2013-09-16 Thread Carsten Kutzner
Hi,

On Sep 16, 2013, at 11:23 AM, mjyang  wrote:

> Dear GMX users,
> 
> 
> I have a question about the combination of the installation parameters. I 
> compiled the fftw lib with --enable-sse2 and configured the gromacs with 
> "cmake .. -DGMX_CPU_ACCELERATION=SSE4.1". I'd like to know if it is ok to use 
> such a
> combination?
Yes, for Gromacs the FFTW should always be compiled with SSE2. You can
combine that with any -DGMX_CPU_ACCELERATION setting you want, typically the
best one supported on your platform.
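
For example, such a combination could look roughly like this (just a sketch;
--enable-float is assumed here because a single-precision Gromacs build needs
a single-precision FFTW, and the install paths are hypothetical):

# FFTW 3.3.x with SSE2 kernels, single precision
./configure --enable-sse2 --enable-float --prefix=$HOME/fftw
make && make install

# Gromacs with the best acceleration the host CPU supports
cmake .. -DGMX_CPU_ACCELERATION=SSE4.1 -DCMAKE_PREFIX_PATH=$HOME/fftw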

Best,
  Carsten


> 
> Many thanks.
> 
> Mingjun



[gmx-users] grompp for minimization: note & warning

2013-09-16 Thread shahab shariati
Dear Justin

About the following warning from grompp:

WARNING 1 [file em.mdp]:
The sum of the two largest charge group radii (6.940482) is larger than
rlist (1.20)

You said "You probably have molecules split across PBC in the input
coordinate file. here's nothing wrong in that case"

Note that I used an input coordinate file from the following website:

http://cmb.bio.uni-goettingen.de/cholmembranes.html. Structures on this
website were equilibrated for 195 ns.

Nonetheless, does my input coordinate file have a problem?


[gmx-users] Standard errors

2013-09-16 Thread afsaneh maleki
Dear all



I would like to calculate the standard deviation (as the error bar) for a
dV/dlambda .xvg file. I used the g_analyze command as follows:



g_analyze -f free_bi_0.9.xvg -av average_0.9

I got:

set    average        standard deviation    std. dev. / sqrt(n-1)  …

SS1    6.053822e+01   3.062230e+01          1.936724e-02  …

Is the value in the third column (standard deviation) or the fourth column
(std. dev. / sqrt(n-1)) better to use as the standard error?

I want to plot dG/dlambda versus lambda and show error bars for the free energy.



Thanks in advance

Afsaneh


Re: [gmx-users] grompp for minimization: note & warning

2013-09-16 Thread Justin Lemkul



On 9/16/13 7:37 AM, shahab shariati wrote:

Dear Justin

About following warning in grompp using

WARNING 1 [file em.mdp]:
The sum of the two largest charge group radii (6.940482) is larger than
rlist (1.20)

You said "You probably have molecules split across PBC in the input
coordinate file. here's nothing wrong in that case"

I mention that I used input coordinate file from folloowing web site"

http://cmb.bio.uni-goettingen.de/cholmembranes.html. Structures in this
website were equilibrated 195 ns.

Nonetheless, has my input coordinate file problem?



I have no idea.  What file are you using?  Visual inspection will make it very 
obvious whether or not you have broken molecules.  My original guess was exactly 
that - a guess.  Based on the supposed size of the group in question, it seemed 
that it was probably a common membrane issue.  Maybe this is or is not the case.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] grompp for minimization: note & warning

2013-09-16 Thread Justin Lemkul



On 9/16/13 3:06 AM, shahab shariati wrote:

Dear Justin

Many thanks for your reply.

You said "I suspect your Gromacs version is somewhat outdated, as recent
versions account for periodicity when
making this check". I used 4.5.5 version of gromacs. What version of
gromacs is more appropriate for my case.



When in doubt, always upgrade to the newest version, which is currently 4.6.3. 
I can't remember when the issue was fixed.



Based on your suggestion, I used the -maxwarn option for grompp. Then I used
the -nt 1 option for mdrun, but this step takes a long time and ends with the
following output:


Steepest Descents:
Tolerance (Fmax)   =  1.0e+03
Number of steps=5
Warning: 1-4 interaction between 434 and 407 at distance 3.023 which is
larger than the 1-4 table size 2.200 nm
These are ignored for the rest of the simulation
   This usually means your system is exploding,

if not, you should increase table-extension in your mdp file
or with user tables increase the table size

step 23: Water molecule starting at atom 10613 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.
Wrote pdb files with previous and current coordinates

Stepsize too small, or no change in energy.
Converged to machine precision,
but not to the requested precision Fmax < 1000

Double precision normally gives you higher accuracy.
You might need to increase your constraint accuracy, or turn
off constraints alltogether (set constraints = none in mdp file)

writing lowest energy coordinates.

Steepest Descents converged to machine precision in 2122 steps,
but did not reach the requested Fmax < 1000.
Potential Energy  =  1.4310875e+05
Maximum force =  2.7179752e+04 on atom 5271
Norm of force =  4.0253470e+02


In this case, it is pretty clear that there is actually something wrong with the 
input coordinates.



--
my em.mdp file is as follows:

integrator   = steep    ; Algorithm (steep = steepest descent minimization)
emtol        = 1000.0   ; Stop minimization when the maximum force < 1000.0 kJ/mol/nm
emstep       = 0.01     ; Energy step size
nsteps       = 5        ; Maximum number of (minimization) steps to perform

; Parameters describing how to find the neighbors of each atom
nstlist      = 1        ; Frequency to update the neighbor list and long range forces
ns_type      = grid     ; Method to determine neighbor list (simple, grid)
rlist        = 1.2      ; Cut-off for making neighbor list (short range forces)
coulombtype  = PME      ; Treatment of long range electrostatic interactions
rcoulomb     = 1.2      ; Short-range electrostatic cut-off
rvdw         = 1.2      ; Short-range Van der Waals cut-off
pbc          = xyz      ; Periodic Boundary Conditions
--
gro, edr, trr and log files were created.

I increased emstep from 0.01 to 0.1 and used constraints = none in the mdp
file, but the results are the same.



Increasing emstep does not make sense.  If anything, you should be decreasing it 
to try to take smaller steps and resolve clashes.  In any case, the output tells 
you where the maximum force is.  Fire up your favorite visualization software, 
look at that atom and the things around it, and figure out what is going on.
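
For example (a hypothetical sketch; the selections use VMD syntax, the atom
number is the one reported above, and em.gro is assumed to be the minimized
structure):

vmd em.gro
# in VMD's selection field, show the problem atom and its surroundings:
#   serial 5271
#   within 5 of serial 5271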



Is this minimization correct?
Can I use the gro file produced by this minimization for the next step
(equilibration)?



No, the forces are far too high to be useful.  Anything you do will simply 
crash.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


[gmx-users] Re: grompp for minimization: note & warning

2013-09-16 Thread Justin Lemkul



On 9/16/13 9:47 AM, shahab shariati wrote:

Dear Justin

My input coordinate file is a gro file (sys.gro). When I visualize it in VMD,
everything looks fine.

After minimization, I obtained a gro file (em.gro). When I visualize that in
VMD, about 5 DOPC molecules have left the bilayer structure, and 1 DOPC molecule
has not only left the bilayer but also has some broken bonds (there is a broken
molecule). As I mentioned before, my input coordinate file is obtained from a
reliable website. How can I resolve this issue?

I attached the 2 gro files (sys.gro and em.gro). Resolving this issue is very
critical and important for me. Please guide me.

Excuse me for sending this e-mail to your personal address; I had to send the 2
gro files to you.


The proper protocol is to upload the files somewhere publicly accessible and 
post a download link.  I generally do not like large, unrequested attachments.


The sys.gro file is positioned within a box that is too large, a fact that is 
easily observable in VMD.  I suspect that the void space results in instability. 
 As for the charge group error from grompp, I still can see no reason for it.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


[gmx-users] Re: grompp for minimization: note & warning

2013-09-16 Thread shahab shariati
Dear Justin

Many thanks for your quick reply.


> The sys.gro file is positioned within a box that is too large, a fact that is
> easily observable in VMD.  I suspect that the void space results in 
> instability.

If I position the system in a smaller box, will my problem (instability) be solved?

> As for the charge group error from grompp, I still can see no reason for it.

In the first e-mail, I listed the charge groups; as you have seen, they were OK.

So what is the reason for the charge group error from grompp?

Please guide me to resolve these issues.


Best wishes


Re: [gmx-users] Re: grompp for minimization: note & warning

2013-09-16 Thread Justin Lemkul



On 9/16/13 10:52 AM, shahab shariati wrote:

Dear Justin

Many thanks for your quick reply.


The sys.gro file is positioned within a box that is too large, a fact that is
easily observable in VMD.  I suspect that the void space results in instability.


If I position the system in a smaller box, will my problem (instability) be solved?



I have no idea.  I don't know how the box was set or why it was set in that way.
It's certainly wrong, but fixing it alone may not solve the underlying issue.
Visualize the area around the drug - there is severe atomic overlap between the
lipids and the drug molecule.  Clearly it was not built in a manner that will be stable.



As for the charge group error from grompp, I still can see no reason for it.


In the first e-mail, I listed the charge groups; as you have seen, they were OK.

So what is the reason for the charge group error from grompp?



As I said, I don't know.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==




[gmx-users] Cross compiling GROMACS 4.6.3 for native Xeon Phi, thread-mpi problem

2013-09-16 Thread PaulC
Hi,


I'm attempting to build GROMACS 4.6.3 to run entirely within a single Xeon
Phi (i.e. native) with either/both Intel MPI/OpenMP for parallelisation
within the single Xeon Phi.

I followed these instructions from Intel for cross compiling for Xeon Phi
with cmake:

http://software.intel.com/en-us/articles/cross-compilation-for-intel-xeon-phi-coprocessor-with-cmake

which includes setting:

export CC=icc
export CXX=icpc
export FC=ifort
export CFLAGS="-mmic"
export CXXFLAGS=$CFLAGS
export FFLAGS=$CFLAGS
export MPI_C=mpiicc
export MPI_CXX=mpiicpc

I then run cmake with:

cmake .. -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=ON -DGMX_THREAD_MPI=OFF
-DGMX_FFT_LIBRARY=mkl -DGMX_CPU_ACCELERATION=None
-DCMAKE_INSTALL_PREFIX=~/gromacs



Note -DGMX_THREAD_MPI=OFF. That seems to work fine (see attached
cmake_output.txt), particularly, it finds the MIC Intel MPI:

-- Found MPI_C:
/opt/intel/impi/4.1.1.036/mic/lib/libmpigf.so;/opt/intel/impi/4.1.1.036/mic/lib/libmpi.so;/opt/i
ntel/impi/4.1.1.036/mic/lib/libmpigi.a;/usr/lib64/libdl.so;/usr/lib64/librt.so;/usr/lib64/libpthread.so
-- Checking for MPI_IN_PLACE
-- Performing Test MPI_IN_PLACE_COMPILE_OK
-- Performing Test MPI_IN_PLACE_COMPILE_OK - Success
-- Checking for MPI_IN_PLACE - yes


When I run make everything trundles along fine until:

[ 20%] Building C object
src/gmxlib/CMakeFiles/gmx.dir/thread_mpi/errhandler.c.o
[ 20%] Building C object
src/gmxlib/CMakeFiles/gmx.dir/thread_mpi/tmpi_malloc.c.o
[ 22%] Building C object src/gmxlib/CMakeFiles/gmx.dir/thread_mpi/atomic.c.o
[ 22%] Building C object
src/gmxlib/CMakeFiles/gmx.dir/thread_mpi/pthreads.c.o
/tmp/iccQqtl2Vas_.s: Assembler messages:
/tmp/iccQqtl2Vas_.s:1773: Error: `sfence' is not supported on `k1om'
make[2]: *** [src/gmxlib/CMakeFiles/gmx.dir/thread_mpi/pthreads.c.o] Error 1
make[1]: *** [src/gmxlib/CMakeFiles/gmx.dir/all] Error 2
make: *** [all] Error 2


Why is it still building thread_mpi given the -DGMX_THREAD_MPI=OFF at the
cmake invocation above?

Any suggestions how best to work around this?


Thanks,

Paul.

cmake_output.txt
  



[gmx-users] Help g_energy

2013-09-16 Thread Marcelo Vanean
Hello. I was calculating the viscosity of hexane with the Gromacs command
g_energy. Three files are generated: visco.xvg, evisco.xvg and eviscoi.xvg. The
file visco.xvg presents the shear and bulk viscosity, but the value does not
match the experimental one. I used an 8 ns simulation at equilibrium. However,
the file evisco.xvg has a value very close to the experimental one but covers
only 2 ns (version 4.0.7). I would like to know what is contained in the file
eviscoi.xvg. Thank you.

http://help-gromacs.blogspot.com.br/2013/09/viscosidade-gromacs-407.html


Re: [gmx-users] TPI Results differ in v4.5.7 and v4.6.1

2013-09-16 Thread João M. Damas
I am sorry for the late follow-up on this subject.

My results using PME electrostatics do not show any differences between
versions 4.5.4 and 4.6.3 (I also checked with 4.6.1 and it gave the same
results):

https://www.dropbox.com/s/o2kswrw5eq8fcsp/plot_batch-1_pme.png

I hope that you, Niels, have found a solution to your problem, which
probably has a different origin from what I initially thought...

Best,
João


On Mon, Jul 1, 2013 at 1:29 PM, João M. Damas  wrote:

> I have run TPI using three versions (4.0.4, 4.5.4 and 4.6.1) and three
> different insertions particles: CH4 (uncharged), Cl- (negative) and Na+
> (positive). All TPI were run on the same simulations of SPC water and the
> three particles taken from GROMOS force-field. Reaction-field was used for
> electrostatics.
>
> https://www.dropbox.com/s/in66zx8t2mrprgt/plot_batch-1.png
>
> As you can see, for an uncharged particle all the versions provide the
> same result. The same does not happen when charged particles are inserted,
> which hint on problems with the electrostatics between 4.0.4 and 4.[56].X.
> Like when Niels reported for the Cut-off scheme, the reaction-field
> provides the same results for 4.5.4 and 4.6.1. Is it indeed a problem with
> PME?
>
> I still have not tested PME, but I think Niels' test foretells the results
> I'm going to get. Niels, can you confirm this issue is only happening with
> charged particles? And are you going to file an issue like Szilárd
> suggested?
>
> Best,
> João
>
>
>
>
>
>
> On Sat, Jun 29, 2013 at 5:21 PM, João M. Damas wrote:
>
>> Niels,
>>
>> Which force-field did you use? I guess an uncharged CH4 shouldn't be
>> giving different results for TPI when changing coulomb... Actually, coulomb
>> is turned off if there's no charge in the particles to insert, if I
>> remember the code correctly.
>>
>> João
>>
>>
>> On Mon, Jun 24, 2013 at 3:40 PM, Niels Müller  wrote:
>>
>>> Hi João,
>>>
>>> Indeed your instinct seems to be good! When switching the Coulomb-Type
>>> to Cut-Off, there doesn't seem to be a difference between 4.6 and 4.5.
>>> Apparently its an issue with the PME sum. We will investigate further.
>>>
>>>
>>> Am 24.06.2013 um 14:42 schrieb João M. Damas :
>>>
>>> > Niels,
>>> >
>>> > This is very interesting. At our group, a colleague of mine and I have
>>> also
>>> > identified differences in the TPI integrator between 4.0.X and 4.5.X,
>>> but
>>> > we still haven't had the time to report it properly, since we are
>>> using a
>>> > slightly modified version of the TPI algorithm.
>>> >
>>> > Instinctively, we were attributing it to some different behaviours in
>>> the
>>> > RF that are observed between those versions. We also know that the TPI
>>> > algorithm began allowing PME treatment from 4.5.X onwards, so maybe
>>> there
>>> > are some differences going on the electrostatics level? But, IIRC, no
>>> > modifications to the TPI code were on the release notes from 4.5.X to
>>> > 4.6.X...
>>> >
>>> > We'll try to find some time to report our findings as soon as possible.
>>> > Maybe they are related.
>>> >
>>> > Best,
>>> > João
>>> >
>>> >
>>> > On Mon, Jun 24, 2013 at 10:19 AM, Niels Müller  wrote:
>>> >
>>> >> Hi GMX Users,
>>> >>
>>> >> We are computing the chemical potential of different gas molecules in
>>> a
>>> >> polymer melt with the tpi integrator.
>>> >> The computations are done for CO2 and CH4.
>>> >> The previous computations were done with v4.5.5 or 4.5.7 and gave
>>> equal
>>> >> results.
>>> >>
>>> >> I recently switched to gromacs version 4.6.1, and the chemical
>>> potential
>>> >> computed by this version is shifted by a nearly constant factor,
>>> which is
>>> >> different for the two gas molecules.
>>> >> We are perplexed what causes this shift. Was there any change in the
>>> new
>>> >> version that affects the tpi integration? I will provide the mdp file
>>> we
>>> >> used below.
>>> >>
>>> >> The tpi integration is run on basis of the last 10 ns of a 30 ns NVT
>>> >> simulation with 'mdrun -rerun'.
>>> >>
>>> >> Best regards,
>>> >> Niels.
>>> >>
>>> >> #
>>> >> The mdp file:
>>> >> #
>>> >>
>>> >> ; VARIOUS PREPROCESSING OPTIONS
>>> >> cpp  = cpp
>>> >> include=
>>> >> define  =
>>> >>
>>> >> ; RUN CONTROL PARAMETERS
>>> >> integrator   = tpi
>>> >> ; Start time and timestep in ps
>>> >> tinit= 0
>>> >> dt   = 0.001
>>> >> nsteps   = 100
>>> >> ; For exact run continuation or redoing part of a run
>>> >> init_step= 0
>>> >> ; mode for center of mass motion removal
>>> >> comm-mode= Linear
>>> >>
>>> >> ; number of steps for center of mass motion removal
>>> >> nstcomm  = 1
>>> >> ; group(s) for center of mass motion removal
>>> >> comm-grps=
>>> >>
>>> >> ; LANGEVIN DYNAMICS OPTIONS
>>> >> ; Temperature, friction coefficient (amu/ps

Re: [gmx-users] Regarding g_sgangle index file

2013-09-16 Thread Teemu Murtola
Hello,

On Sun, Sep 15, 2013 at 5:05 PM, Venkat Reddy  wrote:

> I found g_sgangle is the suitable tool
> to calculate the angle between two cholesterol rings. But the problem is, I
> want to do this analysis for my whole system, which has 40 cholesterol
> molecules. Whereas I can input either two or three atoms in the g_sgangle
> index file. A quick surf through the gmx-archive yielded a suggestion like:
>
> "An index group should contain all (1,2) pairs such that the overall group
> size is a multiple of two.  The index group has to be in a particular
> order, like 1 2 1 2 etc.,"
>
> 
>
> I got the output index file in the same order, i.e., R5 R0 R5 R0 etc.
> But when I execute g_sgangle, it is saying something wrong with the index
> file. How to solve this error?
> How to organize the index file in a multiple of 2?
>

It is only possible to calculate a single angle in one invocation of
g_sgangle. You could script your calculation to run g_sgangle once for each
of your molecules, but that gets somewhat tedious. With the development
version (from the git master branch), you can use a much more powerful 'gmx
gangle' tool, which can calculate multiple angles in one go. As an added
bonus, you don't need to invoke g_select separately, but can simply provide
the selection to the tool.
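
The scripting route could look roughly like this (only a sketch; the group
names, the index file, and the count of 40 molecules are assumptions based on
your description, with the two groups fed to the interactive prompts via stdin):

for i in $(seq 1 40); do
  printf "ring1_$i\nring2_$i\n" | \
    g_sgangle -f traj.xtc -s topol.tpr -n rings.ndx -oa angle_$i.xvg
done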

Best regards,
Teemu


Re: [gmx-users] Cross compiling GROMACS 4.6.3 for native Xeon Phi, thread-mpi problem

2013-09-16 Thread Szilárd Páll
On Mon, Sep 16, 2013 at 7:04 PM, PaulC  wrote:
> Hi,
>
>
> I'm attempting to build GROMACS 4.6.3 to run entirely within a single Xeon
> Phi (i.e. native) with either/both Intel MPI/OpenMP for parallelisation
> within the single Xeon Phi.
>
> I followed these instructions from Intel for cross compiling for Xeon Phi
> with cmake:
>
> http://software.intel.com/en-us/articles/cross-compilation-for-intel-xeon-phi-coprocessor-with-cmake
>
> which includes setting:
>
> export CC=icc
> export CXX=icpc
> export FC=ifort
> export CFLAGS="-mmic"
> export CXXFLAGS=$CFLAGS
> export FFLAGS=$CFLAGS
> export MPI_C=mpiicc
> export MPI_CXX=mpiicpc
>
> I then run cmake with:
>
> cmake .. -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=ON -DGMX_THREAD_MPI=OFF
> -DGMX_FFT_LIBRARY=mkl -DGMX_CPU_ACCELERATION=None
> -DCMAKE_INSTALL_PREFIX=~/gromacs
>
>
>
> Note -DGMX_THREAD_MPI=OFF. That seems to work fine (see attached
> cmake_output.txt), particularly, it finds the MIC Intel MPI:
>
> -- Found MPI_C:
> /opt/intel/impi/4.1.1.036/mic/lib/libmpigf.so;/opt/intel/impi/4.1.1.036/mic/lib/libmpi.so;/opt/i
> ntel/impi/4.1.1.036/mic/lib/libmpigi.a;/usr/lib64/libdl.so;/usr/lib64/librt.so;/usr/lib64/libpthread.so
> -- Checking for MPI_IN_PLACE
> -- Performing Test MPI_IN_PLACE_COMPILE_OK
> -- Performing Test MPI_IN_PLACE_COMPILE_OK - Success
> -- Checking for MPI_IN_PLACE - yes
>
>
> When I run make everything trundles along fine until:
>
> [ 20%] Building C object
> src/gmxlib/CMakeFiles/gmx.dir/thread_mpi/errhandler.c.o
> [ 20%] Building C object
> src/gmxlib/CMakeFiles/gmx.dir/thread_mpi/tmpi_malloc.c.o
> [ 22%] Building C object src/gmxlib/CMakeFiles/gmx.dir/thread_mpi/atomic.c.o
> [ 22%] Building C object
> src/gmxlib/CMakeFiles/gmx.dir/thread_mpi/pthreads.c.o
> /tmp/iccQqtl2Vas_.s: Assembler messages:
> /tmp/iccQqtl2Vas_.s:1773: Error: `sfence' is not supported on `k1om'
> make[2]: *** [src/gmxlib/CMakeFiles/gmx.dir/thread_mpi/pthreads.c.o] Error 1
> make[1]: *** [src/gmxlib/CMakeFiles/gmx.dir/all] Error 2
> make: *** [all] Error 2
>
>
> Why is it still building thread_mpi given the -DGMX_THREAD_MPI=OFF at the
> cmake invocation above?

Because these days thread-MPI not only provides a threading-based MPI
implementation for GROMACS, but also some functionality independent
from this very feature, namely efficient atomic operations and thread
affinity settings.

>
> Any suggestions how best to work around this?

[ FTFY: "Any suggestions how to *fix* this?" ]

What seems to be causing the trouble here is the atomics support.
While x86 normally supports the atomic memory fence operation, Xeon
Phi seems to be not so "normal" and apparently it does not. Now, if
you look at src/gmxlib/thread_mpi/pthreads.c:633 you'll see a
tMPI_Atomic_memory_barrier() which, for x86, is defined in
include/thread_mpi/atomic/gcc_x86.h:105 as
#define tMPI_Atomic_memory_barrier() __asm__ __volatile__("sfence;" : : : "memory")
along some other atomic operations for icc among other compilers.
What's strange is that the build system checks whether it can compile
a dummy C file with the atomics stuff included (see
cmake/ThreadMPI.cmake). At first sight it seems that this should fail
already at cmake time and should disable the atomics, but apparently it
does not.

You have two options:
- Fix the problem by adding an #elif MACRO_TO_CHECK_FOR_MIC_COMPILATION
branch and implement an atomic barrier using the appropriate MIC ASM
instruction (a rough sketch follows below).
- Fix the atomics check such that the lack of atomics support in
thread-MPI on MIC is correctly reflected (see cmake/ThreadMPI.cmake:45,
which compiles cmake/TestAtomics.c). More concretely, the cmake test
should fail for a MIC build, which should result in the disabling of
atomics support (and hopefully no compile-time error).
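
Purely as an illustration of what the first option might look like (untested;
it assumes icc defines __MIC__ for native MIC builds and that a lock-prefixed
read-modify-write acts as a full memory barrier on k1om):

#elif defined(__MIC__)
/* Hypothetical sketch: Xeon Phi (k1om) has no sfence/mfence, so fall back
 * to a locked no-op on the stack, which serializes memory operations. */
#define tMPI_Atomic_memory_barrier() \
    __asm__ __volatile__("lock; addl $0,(%%rsp)" ::: "memory")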

I suspect that even the proper fix (first option) may be as simple as
a couple of lines worth of changes. Regardless of which option you
pick, I would really appreciate if you could upload your fix to
gerrit.gromacs.org. You could open an issue on redmine.gromacs.org if
you want this issue to be track-able.

Cheers,
--
Szilárd

PS: I hope you know that we have neither SIMD intrinsics support nor
any reasonable accelerator-aware parallelization for MIC (yet), so
don't expect high performance.

>
> Thanks,
>
> Paul.
>
> cmake_output.txt
> 
>

[gmx-users] Re: Seeking solution for the error "Atom OXT in residue TRP 323 was not found in rtp entry TRP with 24 atoms while sorting atoms".

2013-09-16 Thread Santhosh Kumar Nagarajan

This is the command I used
pdb2gmx -f protein.pdb -o processed.gro -water spce -ignh

And I used the OPLS-AA/L all-atom force field.





Re: [gmx-users] Re: Seeking solution for the error "Atom OXT in residue TRP 323 was not found in rtp entry TRP with 24 atoms while sorting atoms".

2013-09-16 Thread Tsjerk Wassenaar
Hi Santhosh,

Try renaming the atom (mind the space):

sed 's/OXT/O2 /' pdbfile > fixed.pdb

And then run pdb2gmx on that.
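
That is, something along these lines (simply reusing the options from your
original command):

sed 's/OXT/O2 /' protein.pdb > fixed.pdb
pdb2gmx -f fixed.pdb -o processed.gro -water spce -ignh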

Cheers,

Tsjerk


On Tue, Sep 17, 2013 at 6:05 AM, Santhosh Kumar Nagarajan <
santhoshraja...@gmail.com> wrote:

>
> This is the command I used
> pdb2gmx -f protein.pdb -o processed.gro -water spce -ignh
>
> And I used OPLS-AA/L all-atom force field..
>
>
>
>



-- 
Tsjerk A. Wassenaar, Ph.D.


Re: [gmx-users] Regarding g_sgangle index file

2013-09-16 Thread Venkat Reddy
Thank you sir for the nice tip. I will try it out and let you know if I
have any problem.


On Tue, Sep 17, 2013 at 12:38 AM, Teemu Murtola wrote:

> Hello,
>
> On Sun, Sep 15, 2013 at 5:05 PM, Venkat Reddy  wrote:
>
> > I found g_sgangle is the suitable tool
> > to calculate the angle between two cholesterol rings. But the problem
> is, I
> > want to do this analysis for my whole system, which has 40 cholesterol
> > molecules. Whereas I can input either two or three atoms in the g_sgangle
> > index file. A quick surf through the gmx-archive yielded a suggestion
> like:
> >
> > "An index group should contain all (1,2) pairs such that the overall
> group
> > size is a multiple of two.  The index group has to be in a particular
> > order, like 1 2 1 2 etc.,"
> >
> > 
> >
> > I got the output index file in the same order, i.e., R5 R0 R5 R0 etc.
> > But when I execute g_sgangle, it is saying something wrong with the index
> > file. How to solve this error?
> > How to organize the index file in a multiple of 2?
> >
>
> It is only possible to calculate a single angle in one invocation of
> g_sgangle. You could script your calculation to run g_sgangle once for each
> of your molecules, but that gets somewhat tedious. With the development
> version (from the git master branch), you can use a much more powerful 'gmx
> gangle' tool, which can calculate multiple angles in one go. As an added
> bonus, you don't need to invoke g_select separately, but can simply provide
> the selection to the tool.
>
> Best regards,
> Teemu
>



-- 
With Best Wishes
Venkat Reddy Chirasani
PhD student
Laboratory of Computational Biophysics
Department of Biotechnology
IIT Madras
Chennai
INDIA-600036


[gmx-users] How to restart the crashed run

2013-09-16 Thread Mahboobeh Eslami
Hi my friends,

Please help me.

I did a 20 ns simulation with Gromacs 4.5.5, but the power was shut down near
the end of the simulation. How can I restart the crashed run?

On gromacs.org the following command has been proposed:

mdrun -s topol.tpr -cpi state.cpt

but I don't have state.cpt in my folder.

I need urgent help.
Thank you very much



[gmx-users] Difficulties with MPI in gromacs 4.6.3

2013-09-16 Thread Kate Stafford
Hi all,

I'm trying to install and test gromacs 4.6.3 on our new cluster, and am
having difficulty with MPI. Gromacs has been compiled against openMPI
1.6.5. The symptom is, running a very simple MPI process for any of the
DHFR test systems:

orterun -np 2 mdrun_mpi -s topol.tpr

produces this openMPI warning:

--
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process.  Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption.  The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.

The process that invoked fork was:

  Local host:  hb0c1n1.hpc (PID 58374)
  MPI_COMM_WORLD rank: 1

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--

...which is immediately followed by program termination by the cluster
queue due to exceeding the allotted memory for the job. This behavior
persists no matter how much memory I use, up to 16GB per thread, which is
surely excessive for any of the DHFR benchmarks. Turning the warning off,
of course, simply suppresses the output, but doesn't affect the memory
usage.

The openMPI install works fine with other MPI-enabled programs, including
gromacs 4.5.5, so the problem is specific to 4.6.3. The thread-MPI version
of 4.6.3 is also fine.

The 4.6.3 MPI executable was compiled with:

cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/nfs/apps/cuda/5.5.22
-DGMX_MPI=ON -DBUILD_SHARED_LIBS=OFF -DGMX_PREFER_STATIC_LIBS=ON

But the presence of the GPU or static libs related flags seems not to
affect the behavior. The gcc version (4.4 or 4.8) doesn't matter either.

Any insight as to what I'm doing wrong here?

Thanks!

-Kate


Re: [gmx-users] How to restart the crashed run

2013-09-16 Thread Mark Abraham
http://www.gromacs.org/Documentation/How-tos/Doing_Restarts suggests
the 3.x-era restart strategy when checkpoint files are unavailable.
But if you simply have no output files, then you have no ability to
restart.
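
The 3.x-era approach is roughly the following (a sketch from memory; check
tpbconv -h for the exact options in your version):

# build a new tpr that continues from the last full frame of the old run
tpbconv -s topol.tpr -f traj.trr -e ener.edr -o restart.tpr
mdrun -s restart.tpr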

Mark

On Tue, Sep 17, 2013 at 1:59 AM, Mahboobeh Eslami
 wrote:
> hi my friends
>
> please help me
>
> i did 20 ns simulation by gromacs 4.5.5 but the power was shut down near the 
> end of the simulation
> How to restart the crashed run?
>
>  in the gromacs.org following comment has been proposed
>
> mdrun -s topol.tpr -cpi state.cpt
>
> but i don't have state.cpt in my folder.
>
> I need urgent help
> Thank you very much
>


Re: [gmx-users] Difficulties with MPI in gromacs 4.6.3

2013-09-16 Thread Mark Abraham
On Tue, Sep 17, 2013 at 2:04 AM, Kate Stafford  wrote:
> Hi all,
>
> I'm trying to install and test gromacs 4.6.3 on our new cluster, and am
> having difficulty with MPI. Gromacs has been compiled against openMPI
> 1.6.5. The symptom is, running a very simple MPI process for any of the
> DHFR test systems:
>
> orterun -np 2 mdrun_mpi -s topol.tpr
>
> produces this openMPI warning:
>
> --
> An MPI process has executed an operation involving a call to the
> "fork()" system call to create a child process.  Open MPI is currently
> operating in a condition that could result in memory corruption or
> other system errors; your MPI job may hang, crash, or produce silent
> data corruption.  The use of fork() (or system() or other calls that
> create child processes) is strongly discouraged.
>
> The process that invoked fork was:
>
>   Local host:  hb0c1n1.hpc (PID 58374)
>   MPI_COMM_WORLD rank: 1
>
> If you are *absolutely sure* that your application will successfully
> and correctly survive a call to fork(), you may disable this warning
> by setting the mpi_warn_on_fork MCA parameter to 0.
> --

Hmm. That warning is a known issue in some cases:
http://www.open-mpi.org/faq/?category=openfabrics#ofa-fork but should
not be an issue for the above mdrun command, since it should call none
of popen/fork/system. You might like to try some of the diagnostics on
that page.

> ...which is immediately followed by program termination by the cluster
> queue due to exceeding the allotted memory for the job. This behavior
> persists no matter how much memory I use, up to 16GB per thread, which is
> surely excessive for any of the DHFR benchmarks. Turning the warning off,
> of course, simply suppresses the output, but doesn't affect the memory
> usage.

I can think of no reason for or past experience of this behaviour. Is
it possible for you to run mdrun_mpi in a debugger and get a call
stack trace to help us diagnose?
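
One way to do that with Open MPI (a hypothetical command line; it assumes an X
display is reachable from the compute node, otherwise attach gdb to the running
PIDs instead):

orterun -np 2 xterm -e gdb --args mdrun_mpi -s topol.tpr
# then "run" in each gdb, and "bt" once it stops or is interrupted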

> The openMPI install works fine with other MPI-enabled programs, including
> gromacs 4.5.5, so the problem is specific to 4.6.3. The thread-MPI version
> of 4.6.3 is also fine.

OK, thanks, good diagnosis. Some low-level stuff did get refactored
after 4.6.1. I don't think that will be the issue here, but you could
see if it produces the same symptoms / magically works.

> The 4.6.3 MPI executable was compiled with:
>
> cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/nfs/apps/cuda/5.5.22
> -DGMX_MPI=ON -DBUILD_SHARED_LIBS=OFF -DGMX_PREFER_STATIC_LIBS=ON
>
> But the presence of the GPU or static libs related flags seems not to
> affect the behavior. The gcc version (4.4 or 4.8) doesn't matter either.
>
> Any insight as to what I'm doing wrong here?

So far I'd say the problem is not of your making :-(

Mark


[gmx-users] Re: Seeking solution for the error "Atom OXT in residue TRP 323 was not found in rtp entry TRP with 24 atoms while sorting atoms".

2013-09-16 Thread Santhosh Kumar Nagarajan
I have tried it, Tsjerk, but the same error is shown again.

-
Santhosh Kumar Nagarajan
MTech Bioinformatics
SRM University
Chennai
India


Re: [gmx-users] Standard errors

2013-09-16 Thread Mark Abraham
Standard error and standard deviation measure different things. Please
consult a general work on reporting scientific results.
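
For what it is worth, the fourth g_analyze column (std. dev. / sqrt(n-1)) is
essentially the naive standard error of the mean, which assumes uncorrelated
samples; MD time series are usually correlated, so the block-averaged error
estimate is generally the safer number to quote, e.g. (a sketch using the same
input file):

g_analyze -f free_bi_0.9.xvg -ee errest.xvg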

Mark

On Mon, Sep 16, 2013 at 7:40 AM, afsaneh maleki
 wrote:
> Dear all
>
>
>
> I would like to calculate the standard deviation (as the error bar) for a
> dV/dlambda .xvg file. I used the g_analyze command as follows:
>
>
>
> g_analyze -f free_bi_0.9.xvg -av average_0.9
>
> I got:
>
> set    average        standard deviation    std. dev. / sqrt(n-1)  …
>
> SS1    6.053822e+01   3.062230e+01          1.936724e-02  …
>
> Is the value in the third column (standard deviation) or the fourth column
> (std. dev. / sqrt(n-1)) better to use as the standard error?
>
> I want to plot dG/dlambda versus lambda and show error bars for the free energy.
>
>
>
> Thanks in advance
>
> Afsaneh


Re: [gmx-users] Re: Seeking solution for the error "Atom OXT in residue TRP 323 was not found in rtp entry TRP with 24 atoms while sorting atoms".

2013-09-16 Thread Mark Abraham
Please answer all of Justin's questions. What is in the PDB file -
what should the C terminus be?

Mark

On Tue, Sep 17, 2013 at 2:27 AM, Santhosh Kumar Nagarajan
 wrote:
> I have tried it, Tsjerk, but the same error is shown again.
>
> -
> Santhosh Kumar Nagarajan
> MTech Bioinformatics
> SRM University
> Chennai
> India


Re: [gmx-users] How to restart the crashed run

2013-09-16 Thread Mark Abraham
Hi,

Please keep discussion on the mailing list.

If you have a .cpt file that is not called state.cpt, then you must
have asked for the checkpoint file to be named md.cpt in your original
mdrun command (e.g. with mdrun -cpo md). state.cpt is simply the
default filename (and mostly there is no reason to change that).
Simply use md.cpt, now that you have it :-)
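
For example (a minimal sketch; add back whatever other options, such as
-deffnm, you used in the original run):

mdrun -s topol.tpr -cpi md.cpt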

Mark


On Tue, Sep 17, 2013 at 2:49 AM, Mahboobeh Eslami
 wrote:
> I have md.cpt but I don't know how to restart my run.
> What is the purpose of the state.cpt file?
> thank you so much
>
> From: Mark Abraham 
> To: Mahboobeh Eslami ; Discussion list for
> GROMACS users 
> Sent: Tuesday, September 17, 2013 9:36 AM
> Subject: Re: [gmx-users] How to restart the crashed run
>
> http://www.gromacs.org/Documentation/How-tos/Doing_Restarts suggests
> the 3.x-era restart strategy when checkpoint files are unavailable.
> But if you simply have no output files, then you have no ability to
> restart.
>
> Mark
>
> On Tue, Sep 17, 2013 at 1:59 AM, Mahboobeh Eslami
>  wrote:
>> hi my friends
>>
>> please help me
>>
>> i did 20 ns simulation by gromacs 4.5.5 but the power was shut down near
>> the end of the simulation
>> How to restart the crashed run?
>>
>>  in the gromacs.org following comment has been proposed
>>
>> mdrun -s topol.tpr -cpi state.cpt
>>
>> but i don't have state.cpt in my folder.
>>
>> I need urgent help
>> Thank you very much
>>