Re: [gmx-users] parallelization

2013-10-17 Thread Carsten Kutzner
Hi, On Oct 17, 2013, at 2:25 PM, pratibha kapoor wrote: > Dear gromacs users > > I would like to run my simulations on all nodes (8) with full utilisation of > all cores (2 each). I have compiled GROMACS version 4.6.3 with both thread-MPI > and OpenMPI. I am using the following command: > mpirun -n

[gmx-users] parallelization

2013-10-17 Thread pratibha kapoor
Dear GROMACS users, I would like to run my simulations on all nodes (8) with full utilisation of all cores (2 each). I have compiled GROMACS version 4.6.3 with both thread-MPI and OpenMPI. I am using the following command: mpirun -np 8 mdrun_mpi -v -s -nt 2 -s *.tpr -c *.gro But I am getting the following
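For reference: with an mdrun built against a real MPI library, the number of ranks is set by mpirun, and -nt (a thread-MPI option) is typically rejected, so the -nt flag and the duplicated -s above may be what mdrun is complaining about. A minimal sketch of a run over 8 nodes with 2 cores each (16 ranks), with placeholder file names and the _mpi suffix assumed from the command above:

  mpirun -np 16 mdrun_mpi -v -s topol.tpr -c confout.gro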

Re: [gmx-users] Parallelization performance

2013-03-16 Thread Mark Abraham
On Sat, Mar 16, 2013 at 1:50 AM, Sonia Aguilera <sm.aguiler...@uniandes.edu.co> wrote: > Hi! > > I have been running MD simulations on a 6-processor machine. I just got an > account on a cluster. An NVT stabilization takes about 8 hours on my 6-processor > machine, but it takes about 12 hours on

[gmx-users] Parallelization performance

2013-03-15 Thread Sonia Aguilera
Hi! I have been running MD simulations on a 6-processor machine. I just got an account on a cluster. An NVT stabilization takes about 8 hours on my 6-processor machine, but it takes about 12 hours on the cluster using 16 processors. It is my understanding that the idea of running in parallel is t
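For reference, a quick way to compare the two machines is the time accounting at the end of each md.log; the g_tune_pme tool from the 4.x series can also look for a better PME/PP rank split on the cluster. A sketch, assuming a 16-rank run and placeholder file names:

  grep Performance md.log          # ns/day reported at the end of the run log
  g_tune_pme -np 16 -s topol.tpr   # scans PME node counts for the best throughput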

Re: [gmx-users] Parallelization scheme and terminology help

2013-01-23 Thread Szilárd Páll
Hi, Here's a bit more explanation, hopefully a bit more practical, to give you and others a better view of what's going on under mdrun's hood. thread-MPI, in other contexts referred to as "thread_mpi" or abbreviated "tMPI", is functionally equivalent to the standard MPI you'd use on
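In practice the distinction shows up in how the run is started. A sketch for a 4.6 binary, with thread counts chosen only for illustration (note that -ntomp only takes full effect with the Verlet cut-off scheme):

  # thread-MPI build: mdrun starts its own ranks, no mpirun needed (single node only)
  mdrun -ntmpi 8 -ntomp 2 -v -s topol.tpr
  # real MPI build: ranks come from the MPI launcher, works across nodes
  mpirun -np 8 mdrun_mpi -ntomp 2 -v -s topol.tpr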

Re: [gmx-users] Parallelization scheme and terminology help

2013-01-21 Thread Mark Abraham
On Mon, Jan 21, 2013 at 11:50 PM, Brad Van Oosten wrote: > I have been lost in the sea of terminology for installing GROMACS with > multi-processor support. The plan is to upgrade from 4.5.5 to 4.6 and I want > the optimal install for my system. There is a nice explanation at > http://www.gromac

[gmx-users] Parallelization scheme and terminology help

2013-01-21 Thread Brad Van Oosten
I have been lost in the sea of terminology for installing GROMACS with multi-processor support. The plan is to upgrade from 4.5.5 to 4.6 and I want the optimal install for my system. There is a nice explanation at http://www.gromacs.org/Documentation/Acceleration_and_parallelization but the nu
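The choice between the two parallelization modes is made at configure time. A minimal CMake sketch for 4.6, with option names as in the 4.6 build system:

  # built-in thread-MPI (the default) -- parallel runs within one machine
  cmake .. -DGMX_THREAD_MPI=ON
  # external MPI (OpenMPI, MPICH, ...) -- needed for runs spanning several machines
  cmake .. -DGMX_MPI=ON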

RE: [gmx-users] parallelization error? gromacs-4.0.2

2008-11-20 Thread Berk Hess
[EMAIL PROTECTED] To: gmx-users@gromacs.org Subject: RE: [gmx-users] parallelization error? gromacs-4.0.2 Date: Thu, 20 Nov 2008 22:05:52 +0100 Hi, Do you have anisotropic pressure coupling turned on? Could you send me the tpr file? Berk Date: Thu, 20 Nov 2008 14:47:53 + From: [EMAIL PROTECTED] To: gmx-

RE: [gmx-users] parallelization error? gromacs-4.0.2

2008-11-20 Thread Berk Hess
Hi, Do you have anisotropic pressure coupling turned on? Could you send me the tpr file? Berk Date: Thu, 20 Nov 2008 14:47:53 + From: [EMAIL PROTECTED] To: gmx-users@gromacs.org Subject: [gmx-users] parallelization error? gromacs-4.0.2 Hello, I tried from the beginning to test gromacs
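For context, anisotropic coupling is the case where ref_p and compressibility each take six components (xx yy zz xy xz yz) in the mdp file. A sketch with purely illustrative values; the zero off-diagonal compressibilities shown here keep the box angles fixed:

  pcoupl           = Berendsen
  pcoupltype       = anisotropic
  tau_p            = 1.0
  ref_p            = 1.0 1.0 1.0 0.0 0.0 0.0
  compressibility  = 4.5e-5 4.5e-5 4.5e-5 0 0 0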

[gmx-users] parallelization error? gromacs-4.0.2

2008-11-20 Thread Claus Valka
Hello, I tried from the beginning to test gromacs-4.0.2 with a monoclinic system on 8 processors (one machine with two quad-core CPUs). The skew errors seem to be gone, yet other errors appeared. Now after a successful MD run, taking the output and trying to do annealing I get the following error: Fatal er
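For context, a monoclinic cell is simply a triclinic GROMACS box with one non-90-degree angle, which can be set up with editconf. A sketch with illustrative box dimensions and angles:

  editconf -f conf.gro -o boxed.gro -bt triclinic -box 6.0 6.0 6.0 -angles 90 105 90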