Dear users,
Although this topic has been discussed extensively
on the list before, I am still unclear about the solution
to the problem.
While running a ligand-in-water simulation (EM) with RF-0,
I get the following message:
--
Dear all,
I'm trying to continue a REMD simulation using GROMACS 4.5.5 under the NPT
ensemble, and I got the following errors when I tried to use 2 cores per
replica:
"[node-ib-4.local:mpi_rank_25][error_sighandler] Caught error: Segmentation
fault (signal 11)
[node-ib-13.local:mpi_rank_63][error_sigh
Hi,
I was trying to generate a topology for a p-phenylene vinylene polymer for the OPLS
force field using acpype. The .itp file I got has the atom type opls_x with
mass 0.00. Is there any way to rectify this?
After reading through how acpype works, I found out this was one of the
possible errors, but there wa
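A sketch of the kind of manual fix this usually needs, assuming the unmapped
atom is an aromatic ring carbon; the residue and atom names below are
hypothetical, while opls_145 with its mass and charge is the standard OPLS-AA
aromatic CA type:
; in the acpype-generated .itp, [ atoms ] section: replace the opls_x
; placeholder and its 0.00 mass with a real OPLS-AA type, e.g.
;  nr  type      resnr  residu  atom  cgnr  charge   mass
    1  opls_145  1      PPV     C1    1     -0.115   12.01100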
Dear All,
The settings that I mentioned above are from Klauda et al., for a POPE
membrane system. They can be found in charmm_npt.mdp in lipidbook (link
below)
http://lipidbook.bioch.ox.ac.uk/package/show/id/48.html
Is there any reason not to use their .mdp parameters for a membrane-protein
system?
Really? And what about gcc+mpi? Should I expect any improvement?
On Thu, Nov 7, 2013 at 6:51 PM, Mark Abraham wrote:
> You will do much better with gcc+openmp than icc-openmp!
>
> Mark
>
>
> On Thu, Nov 7, 2013 at 9:17 PM, Jones de Andrade wrote:
>
> > Did it a few days ago. Not so much of a problem here.
I'd at least use RF! Use a cut-off consistent with the force field
parameterization. And hope the LIE correlates with reality!
Mark
On Nov 7, 2013 10:39 PM, "Williams Ernesto Miranda Delgado" <
wmira...@fbio.uh.cu> wrote:
> Thank you Mark
> What do you think about making a rerun on the trajectories
Dear Users,
I am using openSUSE 12.3 and trying to use make_ndx and g_angle. When I try
the following command, I get an error message:
> ./make_ndx -f data.pdb
./make_ndx: error while loading shared libraries: libcudart.so.4: cannot open
shared object file: No such file or directory
Do
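That message usually just means the CUDA runtime library is not on the loader
path for this GPU-enabled build. A sketch of the usual check and workaround,
assuming a standard /usr/local/cuda install location:
# locate the CUDA runtime (install path is an assumption)
find /usr/local/cuda* -name 'libcudart.so*' 2>/dev/null
# point the dynamic loader at the directory found above
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
./make_ndx -f data.pdb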
Thank you Mark
What do you think about making a rerun on the trajectories generated
previously with PME but this time using coulombtype: cut-off? Could you
suggest a cut off value?
Thanks again
Williams
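For reference, the mechanics of such a rerun are straightforward even though
the physics is the open question here; a sketch, with hypothetical file names
and a cut-off that must be chosen to match the force field parameterization:
# rerun.mdp: identical to the production .mdp except, e.g.
#   coulombtype = cut-off
#   rcoulomb    = 1.4
grompp -f rerun.mdp -c conf.gro -p topol.top -o rerun.tpr
mdrun -s rerun.tpr -rerun traj.xtc -deffnm rerun
g_energy -f rerun.edr   # Coul-SR now holds all the electrostatics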
--
On Thu, Nov 7, 2013 at 6:34 AM, James Starlight wrote:
> I've come to the conclusion that simulations with 1 or 2 GPUs simultaneously give
> me the same performance:
> mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v -deffnm md_CaM_test,
>
> mdrun -ntmpi 2 -ntomp 6 -gpu_id 0 -v -deffnm md_CaM_test,
>
> Does it b
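A sketch of how to make that comparison explicit (the -deffnm names are
hypothetical; the "Performance:" line is the summary mdrun prints at the end
of each log):
mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v -deffnm md_2gpu   # two ranks, GPUs 0 and 1
mdrun -ntmpi 1 -ntomp 12 -gpu_id 0 -v -deffnm md_1gpu   # one rank, GPU 0 only
grep -h "Performance:" md_1gpu.log md_2gpu.log          # compare ns/day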
Let's not hijack James' thread as your hardware is different from his.
On Tue, Nov 5, 2013 at 11:00 PM, Dwey Kauffman wrote:
> Hi Szilard,
>
> Thanks for your suggestions. I am indeed aware of this page. In an 8-core
> AMD machine with 1 GPU, I am very happy about its performance. See below. My Actual
You will do much better with gcc+openmp than icc-openmp!
Mark
On Thu, Nov 7, 2013 at 9:17 PM, Jones de Andrade wrote:
> Did it a few days ago. Not so much of a problem here.
>
> But I compiled everything, including fftw, with it. The only error I got
> was that I should turn off the separable compilation, and that the user
> must be in the group video.
Hi Mark!
I think that this is the paper that you are referring to:
dx.doi.org/10.1021/ct900549r
Also for your reference, these are the settings that Justin recommended
using with CHARMM in GROMACS:
vdwtype = switch
rlist = 1.2
rlistlong = 1.4
rvdw = 1.2
rvdw-switch = 1.0
rcoulomb = 1.2
As y
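Assembled into a single .mdp fragment for convenience (the vdW and cut-off
lines are Justin's numbers above; coulombtype = PME is an assumption that
follows Mark's comments elsewhere in this thread, not part of the quoted
recommendation):
vdwtype     = switch
rvdw-switch = 1.0
rvdw        = 1.2
rlist       = 1.2
rlistlong   = 1.4
coulombtype = PME   ; assumption - see the PME caveats discussed in this thread
rcoulomb    = 1.2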
Did it a few days ago. Not so much of a problem here.
But I compiled everything, including fftw, with it. The only error I got
was that I should turn off the separable compilation, and that the user
must be in the group video.
My settings are (yes, I know it should go better with openmp, but open
If the long-range component of your electrostatics model is not
decomposable by group (which it isn't), then you can't use that with LIE.
See the hundreds of past threads on this topic :-)
Mark
On Thu, Nov 7, 2013 at 8:34 PM, Williams Ernesto Miranda Delgado <
wmira...@fbio.uh.cu> wrote:
> Hello
Reasonable, but CPU-only is not 100% conforming either; IIRC the CHARMM
switch differs from the GROMACS switch (Justin linked a paper here with the
CHARMM switch description a month or so back, but I don't have that link to
hand).
Mark
On Thu, Nov 7, 2013 at 8:45 PM, rajat desikan wrote:
> Thank you, Mark.
Thank you, Mark. I think that running it on CPUs is a safer choice at
present.
On Thu, Nov 7, 2013 at 9:41 PM, Mark Abraham wrote:
> Hi,
>
> It's not easy to be explicit. CHARMM wasn't parameterized with PME, so the
> original paper's coulomb settings can be taken with a grain of salt for use
> with PME - others' success in practice should be a guideline here.
Hello
I performed MD simulations of several protein-ligand complexes and
solvated ligands using PME for long-range electrostatics. I want to
calculate the binding free energy using the LIE method, but when using
g_energy I only get Coul-SR. How can I deal with the ligand-environment
long-range electrostatics?
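A sketch of the usual decomposition workflow (group names LIG and SOL are
hypothetical and must exist in the index/topology; note this only yields
short-range terms, which is exactly the limitation being discussed):
; in the .mdp, ask mdrun to decompose energies by group:
energygrps = LIG SOL
# after the (re)run, pull the pairwise terms out of the energy file:
g_energy -f ener.edr -o lie.xvg
# at the prompt, select Coul-SR:LIG-SOL and LJ-SR:LIG-SOL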
On 11/7/13 12:14 PM, pratibha wrote:
> My protein contains metal ions which are parameterized only in the GROMOS
> force field. Since I am a newbie to MD simulations, it would be difficult
> for me to parameterize them myself.
> Can you please guide me, as per my previous mail, on which of the two
> simulat
My protein contains metal ions which are parameterized only in the GROMOS force
field. Since I am a newbie to MD simulations, it would be difficult for me
to parameterize them myself.
Can you please guide me, as per my previous mail, on which of the two
simulations I should consider more reliable-43a
Sounds like a non-GROMACS problem. I think you should explore configuring
OpenMPI correctly, and show you can run an MPI test program successfully.
Mark
On Thu, Nov 7, 2013 at 5:51 PM, niloofar niknam
wrote:
> Dear gromacs users
> I have installed gromacs 4.6.1 with cmake 2.8.12, fftw3.3.3 and openmpi-1.6.4
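A minimal sanity test of the kind Mark suggests (binary names are assumptions;
any MPI-launched program that prints its host will do):
# does OpenMPI itself launch processes at all?
mpirun -np 8 hostname
# if that works, launch the MPI-enabled mdrun the same way
mpirun -np 8 mdrun_mpi -deffnm test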
icc plus CUDA is pretty painful. I'd suggest getting the latest gcc.
Mark
On Thu, Nov 7, 2013 at 2:42 PM, wrote:
> Hi,
>
> I'm having trouble compiling v 4.6.3 with GPU support using CUDA 5.5.22.
>
> The configuration runs okay and I have made sure that I have set paths
> correctly.
>
> I'm getting
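Following that advice, a sketch of a clean reconfigure against gcc (the build
directory, versions, and paths are assumptions):
cd build && rm -rf CMakeCache.txt CMakeFiles/
CC=gcc CXX=g++ cmake .. -DGMX_GPU=ON \
  -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-5.5
make -j 8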
Dear gromacs users
I have installed gromacs 4.6.1 with cmake 2.8.12, fftw3.3.3 and openmpi-1.6.4
on a single machine with 8 cores (Red Hat Enterprise Linux 6.1). During the
OpenMPI installation (I used "make -jN") and also during the GROMACS
installation (I used "make -j N"), everything seemed ok
Hi,
It's not easy to be explicit. CHARMM wasn't parameterized with PME, so the
original paper's coulomb settings can be taken with a grain of salt for use
with PME - others' success in practice should be a guideline here. The good
news is that the default GROMACS PME settings are pretty good for a
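For reference, those defaults amount to roughly this .mdp fragment (values as
I recall the 4.6 defaults; grompp's mdout.mdp shows the authoritative ones for
your build):
coulombtype    = PME
rcoulomb       = 1.0      ; nm
fourierspacing = 0.12     ; nm
pme_order      = 4        ; cubic interpolation
ewald_rtol     = 1e-5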
Hi,
I'm having trouble compiling v 4.6.3 with GPU support using CUDA 5.5.22.
The configuration runs okay and I have made sure that I have set paths
correctly.
I'm getting errors:
$ make
[ 0%] Building NVCC (Device) object
src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generate
On 11/7/13 8:24 AM, Anirban wrote:
> Hi ALL,
> Is there any way to get the percentage of each secondary structure type in
> a protein using do_dssp if I supply a single PDB to it?
The scount.xvg output has the percentages, but it's also trivial to do it for
one snapshot. The contents of s
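For a single snapshot, the arithmetic Justin calls trivial is a one-liner over
scount.xvg; a sketch, assuming the usual xvg layout of comment lines starting
with # or @ and per-structure counts after the time column:
grep -v '^[#@]' scount.xvg | tail -n 1 | \
  awk '{n=0; for(i=2;i<=NF;i++) n+=$i;
        for(i=2;i<=NF;i++) printf "%.1f%% ", 100*$i/n; print ""}'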
Hi ALL,
Is there any way to get the percentage of each secondary structure type in
a protein using do_dssp if I supply a single PDB to it?
And how do I plot the data of the -sc output from do_dssp?
Any suggestion is welcome.
Regards,
Anirban
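A sketch of the invocation and plot (file names are hypothetical; do_dssp
needs the dssp binary reachable through the DSSP environment variable):
export DSSP=/usr/local/bin/dssp
do_dssp -f protein.pdb -s protein.pdb -o ss.xpm -sc scount.xvg
xmgrace -nxy scount.xvg   # one curve per secondary-structure class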
Dear All,
Any suggestions?
Thank you.
On 11/7/13 6:27 AM, Arunima Shilpi wrote:
> Dear Sir,
> Presently I am working with the example file given in the umbrella
> sampling tutorial.
> While running the following command:
> grompp -f npt_umbrella.mdp -c conf0.gro -p topol.top -n index.ndx -o npt0.tpr
> I got the following error. How can I debug this error?
Dear Sir,
Presently I am working with the example file given in the umbrella
sampling tutorial.
While running the following command:
grompp -f npt_umbrella.mdp -c conf0.gro -p topol.top -n index.ndx -o npt0.tpr
I got the following error. How can I debug this error?
Ignoring obsolete mdp entry 't
On Wed, Nov 6, 2013 at 4:07 PM, fantasticqhl wrote:
> Dear Justin,
>
> I am sorry for the late reply. I still can't figure it out.
>
It isn't rocket science - your two .mdp files describe totally different
model physics. To compare things, change as few things as necessary to
generate the compar
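The mechanical part of that comparison is quick (file names hypothetical):
diff -u first.mdp second.mdp            # see exactly which settings differ
gmxcheck -s1 first.tpr -s2 second.tpr   # compare the processed run inputs too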
I think either is correct for practical purposes.
Mark
On Thu, Nov 7, 2013 at 8:41 AM, Gianluca Interlandi <
gianl...@u.washington.edu> wrote:
> Does it make more sense to use nose-hoover or v-rescale when running in
> implicit solvent GBSA? I understand that this might be a matter of opinion.
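Either thermostat is a one-line choice in the .mdp; a sketch with v-rescale
(the numbers are common values, not recommendations from this thread):
implicit_solvent = GBSA
tcoupl           = v-rescale   ; or nose-hoover
tc-grps          = System
tau_t            = 0.1         ; ps
ref_t            = 300         ; K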
First, there is no value in ascribing problems to the hardware if the
simulation setup is not yet balanced, or not large enough to provide enough
atoms and long enough rlist to saturate the GPUs, etc. Look at the log
files and see what complaints mdrun makes about things like PME load
balance, and
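A sketch of pulling those complaints out of the log (log name hypothetical):
grep -i "imbalance" md.log   # e.g. "Average load imbalance: ... %"
grep "NOTE" md.log           # mdrun's own warnings about the setup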