On Oct 17, 2013, pratibha kapoor wrote:
Dear gromacs users,
I would like to run my simulations on all nodes (8) with full utilisation of
all cores (2 each). I have compiled GROMACS version 4.6.3 with both
thread-MPI and OpenMPI. I am using the following command:
mpirun -np 8 mdrun_mpi -v -nt 2 -s *.tpr -c *.gro
But I am getting the following …
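[The error text is cut off above, but one likely culprit: mdrun's -nt flag
sets the thread count only for thread-MPI builds, so an OpenMPI-compiled
mdrun_mpi will refuse it. A sketch of using all 8 nodes x 2 cores with the
MPI build; topol.tpr and out.gro are placeholder names:

  # one MPI rank per core:
  mpirun -np 16 mdrun_mpi -v -s topol.tpr -c out.gro

  # or 8 ranks with 2 OpenMP threads each (GROMACS 4.6 supports -ntomp):
  mpirun -np 8 mdrun_mpi -ntomp 2 -v -s topol.tpr -c out.gro ]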
On Mar 16, 2013, Sonia Aguilera wrote:
Hi!
I have been running MD simulations on a 6-processor machine. I just got an
account on a cluster. An NVT equilibration takes about 8 hours on my
6-processor machine, but it takes about 12 hours on the cluster using 16
processors. It is my understanding that the idea of running in parallel is
to speed up the simulation …
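[When 16 cluster cores run slower than 6 workstation cores, the usual
suspects are the interconnect and domain-decomposition overhead. One quick
check, as a sketch (topol.tpr and the bench_* names are placeholders), is to
benchmark short runs at several rank counts and compare the ns/day reported
on the Performance line at the end of each log:

  for n in 4 8 16; do
      mpirun -np $n mdrun_mpi -s topol.tpr -deffnm bench_$n -maxh 0.1
      grep Performance bench_$n.log
  done ]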
Hi,
Here's a bit more explanation, hopefully more practical, to give you and
others a better view of what's going on under mdrun's hood.
thread-MPI, in other contexts referred to as "thread_mpi" or abbreviated as
"tMPI", is functionally equivalent to the standard MPI you'd use on a
cluster, except that it runs within a single node, with threads taking the
place of MPI processes …
On Jan 21, 2013, Brad Van Oosten wrote:
I have been lost in the sea of terminology for installing GROMACS with
multi-processor support. The plan is to upgrade from 4.5.5 to 4.6, and I
want the optimal install for my system. There is a nice explanation at
http://www.gromacs.org/Documentation/Acceleration_and_parallelization
but the nu…
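[The snippet is cut off, but for 4.6 the choice mostly comes down to two
CMake configurations; a sketch, with paths and the -j count as placeholders:

  # single machine: the default build already includes thread-MPI and OpenMP
  cmake ..
  make -j 4 && make install

  # multiple nodes: link against the system MPI library instead
  cmake .. -DGMX_MPI=ON
  make -j 4 && make install ]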
On Thu, 20 Nov 2008, Berk wrote (Subject: RE: [gmx-users] parallelization
error? gromacs-4.0.2):
Hi,
Do you have anisotropic pressure coupling turned on?
Could you send me the tpr file?
Berk
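[For context, a generic mdp fragment, not taken from this thread:
anisotropic pressure coupling lets all six box components fluctuate, which
matters for a monoclinic cell. It is enabled like this and then needs six
values each for compressibility and ref_p (xx yy zz xy/yx xz/zx yz/zy):

  pcoupl          = berendsen
  pcoupltype      = anisotropic
  tau_p           = 1.0
  compressibility = 4.5e-5 4.5e-5 4.5e-5 4.5e-5 4.5e-5 4.5e-5
  ref_p           = 1.0 1.0 1.0 0.0 0.0 0.0 ]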
On Thu, 20 Nov 2008, the original poster wrote (Subject: [gmx-users]
parallelization error? gromacs-4.0.2):
Hello,
I tried from the beginning to test gromacs-4.0.2 with a monoclinic system on
8 processors (one machine with two quad-core CPUs). The skew errors seem to
be gone, yet other errors appeared.
Now, after a successful MD run, when I take the output and try to do
annealing I get the following error:
Fatal er…
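[The error text is truncated here, but for reference, simulated annealing is
configured per temperature-coupling group in the mdp file; a generic
fragment (not from this thread) cooling from 300 K to 250 K over 400 ps:

  tc_grps           = System
  tau_t             = 0.1
  ref_t             = 300
  annealing         = single
  annealing_npoints = 3
  annealing_time    = 0 200 400
  annealing_temp    = 300 280 250 ]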