Quoting "Jagan Mohan" <[EMAIL PROTECTED]>:
Hello,
Just wanted to know whether GROMACS 3.3.3 supports multithreading, that is,
whether it can run four instances of mdrun on the same machine, which has 4 cores...
You can run GROMACS in parallel with MPI, which is much more
efficient. Any decent distribution ha
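For reference, a minimal sketch of how a 4-way MPI run is usually set up with
the 3.3.x series; the file names below are placeholders, and the MPI-enabled
binary may be installed as mdrun_mpi or simply mdrun, depending on how it was
compiled:

    # file names are placeholders for your own inputs
    # preprocess for 4 processors (required in GROMACS 3.3.x)
    grompp -np 4 -f md.mdp -c conf.gro -p topol.top -o run.tpr
    # start 4 MPI processes
    mpirun -np 4 mdrun_mpi -np 4 -s run.tpr -o run.trr -g run.log -e run.edr

Running four independent single-core mdrun jobs on the same box also works,
but they will not cooperate on one simulation the way an MPI run does.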
Hi Tiago,
if you switch off PME and suddenly your system scales, then the
problems are likely to result from bad MPI_Alltoall performance. Maybe
this is worth a check. If this is the case, there's a lot more information
about this in the paper "Speeding up parallel GROMACS on high-latency
networks".
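A quick way to run that check, as a sketch only (the cut-off radii below are
placeholders and must be chosen to suit the system), is to regenerate the .tpr
with plain cut-off electrostatics and compare how the two versions scale:

    ; scaling test only: replace PME by plain cut-off electrostatics
    ; coulombtype = PME
    coulombtype  = Cut-off
    rlist        = 1.4    ; placeholder cut-off radii
    rcoulomb     = 1.4
    rvdw         = 1.4

If the cut-off run scales and the PME run does not, the all-to-all
communication in the PME part is the likely bottleneck, which is exactly the
case discussed in that paper.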
Hello,
Just wanted to know whether GROMACS 3.3.3 supports multithreading, that is,
whether it can run four instances of mdrun on the same machine, which has 4 cores...
Thanks in advance.
Hello Tsjerk,
while running pdb2gmx I was getting an error like
"h1 is not found while assigning improper dihedral",
so I modified the force field file (ffG43a1.rtp): by default it was
considering ADE as well, instead of only the DADE part, so I also removed the
ADE, CYT and GUA parts (is it right to do so?)
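For reference, a hedged sketch of the shape of an .rtp entry (the residue and
atom names below are placeholders, not the real ffG43a1 entries): every atom
named in the [ impropers ] block must exist in that residue's [ atoms ] block,
or be reachable in the previous/next residue via a - or + prefix, otherwise
pdb2gmx aborts with exactly this kind of "atom not found" message:

    [ XXX ]                ; placeholder residue name as matched by pdb2gmx
     [ atoms ]
    ;  name  type  charge  charge group   (values are placeholders)
        N1    NR   -0.200   0
        H1    H     0.200   0
        C2    C     0.100   1
        C6    C     0.100   1
     [ impropers ]
    ;  all four names must exist above (or in a neighbour via -/+ prefix)
        N1    H1    C2    C6

Deleting whole entries is usually not needed if the atom names in the
structure agree with those expected by the entry pdb2gmx actually selects
(DADE here).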
Morteza Khabiri wrote:
Dear gmx users,
I have a box which contains a protein and solvent. Before running I
minimized it very well, and the system is running. After 2 ns, when I viewed
the trajectory in VMD, I saw that there are some lines, like bonds, in the
system, e.g. between the oxygen group of one water m
Dear gmx users,
I have a box which contains a protein and solvent. Before running I
minimized it very well, and the system is running. After 2 ns, when I viewed
the trajectory in VMD, I saw that there are some lines, like bonds, in the
system, e.g. between the oxygen group of one water molecule in one part of the box
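If those lines connect atoms sitting on opposite sides of the box, they are
most likely not real bonds but molecules split (or imaged) across the periodic
boundaries, which VMD then draws as long stretched bonds. A hedged sketch of
the usual check, with placeholder file names:

    # file names are placeholders; make all molecules whole across the
    # periodic boundaries before visualizing
    trjconv -s topol.tpr -f traj.xtc -o traj_whole.xtc -pbc whole
    # then load traj_whole.xtc into VMD instead of the raw trajectory

If the lines disappear in the reprocessed trajectory, nothing was wrong with
the simulation itself, only with the way the periodic coordinates were drawn.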
Julio Benegas wrote:
Dear David,
We are sorry to bother you with perhaps a simple question, but one that is elusive to us.
We want to run an MD simulation of three units of chitosan.
We are using the GROMACS ffG53A6 force field and we cannot find in the
corresponding library the parameters corresponding to the NH2
Take a look at the CHARMM file top_all22_prot.inp, specifically RESI HEME.
Piotr Adam Pieniazek wrote:
Hi,
I'm trying to put a Urey-Bradley type term into my force field, but
the format is not specified in the manual:
atom_1 atom_2 atom_3 5 angle angle_force bond bond_force
I'm particularly confused about the last two terms. I've tried both
combinations and th
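For what it is worth, angle function type 5 in GROMACS is the Urey-Bradley
potential: the first two numbers are the equilibrium angle theta0 (degrees)
and its force constant (kJ mol^-1 rad^-2), and the last two are the 1-3
distance r13 (nm) and its force constant kUB (kJ mol^-1 nm^-2). A sketch of
such a line, with placeholder atom indices and parameter values:

    [ angles ]
    ;  ai   aj   ak  funct  theta0   ktheta     r13      kUB   (placeholders)
        1    2    3      5   120.0    400.0    0.240   2000.0

When taking values from CHARMM, note that GROMACS and CHARMM use different
units and force-constant conventions, so the numbers cannot be copied
verbatim.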
Hi,
I recently installed GROMACS in single precision on an Altix 4700 using
either Intel 10.1.0008 or gcc 4.2.3, and in both cases the installation
passes all of the gmxtests except for angles125. The differences in the
energies appear to be rather significant (see below). Can anyone offer an
explan
Hi,
I am trying to simulate HEME in a water box. I generated the HEME topology
based on the parameters in the ffG43a2.rtp file. When I run a short
burst of unrestrained dynamics, it does not remain planar. I understand
that there might be some buckling, but not to the extent that I am
observing.
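In GROMOS-type topologies, ring planarity is normally maintained by improper
dihedrals (dihedral function type 2), so one thing worth checking is whether
the hand-built topology actually carries impropers for the porphyrin ring
atoms. A hedged sketch of the kind of entry meant; the atom indices, the
reference angle and the force constant below are placeholders, not the real
ffG43a2 values:

    [ dihedrals ]
    ;  improper dihedrals (funct 2) keeping a group of ring atoms planar
    ;  ai   aj   ak   al  funct   xi0     kxi   (placeholder values)
        5    6    7    8      2    0.0   167.4

Missing or misdefined impropers are a common reason for a ring system that
buckles far more than expected.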
vivek sharma wrote:
Sorry for the incomplete mail... I sent it by mistake.
What I want to add is that I am not able to run it with any of the
options. Any help and suggestions will be highly appreciated.
FYI, the size of my system is around 45,000 atoms.
The performance will depend upon a lot of fact
Sorry for the incomplete mail... I sent it by mistake.
What I want to add is that I am not able to run it with any of the options. Any
help and suggestions will be highly appreciated.
FYI, the size of my system is around 45,000 atoms.
Thanks in advance,
Vivek
2008/9/25 vivek sharma <[EMAIL PROTECTED]>
> Hi
Hi friends,
I am also facing a similar problem when trying to scale GROMACS to more
processors.
I have tried one job using GROMACS on EKA; in an attempt to scale it to more
processors, I got a reduction in simulation time up to 20 processors, beyond
that it is taking more time
We currently have no funds available to migrate to InfiniBand, but we will in
the future.
I thought about doing interface bonding, but I really don't think that is
the problem here; there must be something I'm missing, since most
applications scale well to 32 cores on GbE. I can't scale any applica
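One way to narrow this down, sketched here with assumed file names, core
counts and binary names, is to rerun the identical system at several core
counts and compare the timings printed at the end of each log, and separately
to measure the raw all-to-all performance of the GbE network (for example with
the Intel MPI Benchmarks, if they are installed):

    # placeholder file and binary names; GROMACS 3.3.x scaling test:
    # the same system at increasing core counts
    for n in 4 8 16 32; do
        grompp -np $n -f md.mdp -c conf.gro -p topol.top -o bench_$n.tpr
        mpirun -np $n mdrun_mpi -np $n -s bench_$n.tpr -o bench_$n.trr -g bench_$n.log
        grep "Performance" bench_$n.log
    done
    # raw MPI_Alltoall latency/bandwidth of the interconnect
    mpirun -np 32 IMB-MPI1 Alltoall

If the per-step cost stops improving around 16-32 cores only when PME is
switched on, that points back to the MPI_Alltoall issue mentioned earlier in
the thread.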
Hmmm...
I don't have any .pdb file, what can I look for then?
Best regards,
Tiago Marques
On Tue, Sep 23, 2008 at 5:48 PM, Jochen Hub <[EMAIL PROTECTED]> wrote:
> Tiago Marques wrote:
> > I don't know how large the system is. I'm the cluster's system
> > administrator
>
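Regarding finding the system size without a .pdb: the atom count can be read
directly from the run input or structure files. A small sketch, assuming the
default file names are in use:

    # file names are placeholders; total number of atoms in the run input file
    gmxdump -s topol.tpr 2>/dev/null | grep natoms
    # or: the second line of a .gro file holds the atom count
    head -n 2 conf.gro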