Re: [gmx-users] Multi-level parallelization: MPI + OpenMP

2013-07-22 Thread Éric Germaneau
Dear Szilárd, I'm running some tests using 2 ranks/node, which is what I was trying to do. It seems to be working now, thank you. Éric. On 07/19/2013 08:56 PM, Szilárd Páll wrote: Depending on the level of parallelization (number of nodes and number of particles/core) you may want to try: - 2 ranks/

Re: [gmx-users] Multi-level parallelization: MPI + OpenMP

2013-07-19 Thread Szilárd Páll
Depending on the level of parallelization (number of nodes and number of particles/core) you may want to try:
- 2 ranks/node: 8 cores + 1 GPU, no separate PME (default):
    mpirun -np 2*Nnodes mdrun_mpi [-gpu_id 01 -npme 0]
- 4 ranks per node: 4 cores + 1 GPU (shared between two ranks), no separat
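
A minimal sketch of the two launch variants above, assuming 16-core nodes with two GPUs (ids 0 and 1); the node count Nnodes, the machine file name, and the run name topol are placeholders:

    # 2 ranks/node, 8 OpenMP threads per rank, one GPU per rank
    export OMP_NUM_THREADS=8
    mpirun -np $((2*Nnodes)) -machinefile nodefile mdrun_mpi -ntomp 8 -gpu_id 01 -deffnm topol

    # 4 ranks/node, 4 OpenMP threads per rank, two ranks share each GPU
    export OMP_NUM_THREADS=4
    mpirun -np $((4*Nnodes)) -machinefile nodefile mdrun_mpi -ntomp 4 -gpu_id 0011 -deffnm topol

Whether OMP_NUM_THREADS propagates to the ranks depends on the MPI launcher, which is why -ntomp is given explicitly as well.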

Re: [gmx-users] Multi-level parallelization: MPI + OpenMP

2013-07-19 Thread Mark Abraham
What's the simplest case you can make work? Mark On Fri, Jul 19, 2013 at 8:38 AM, Éric Germaneau wrote: > I actually submitted using two MPI processes per node, but log files do not > get updated; it's like the calculation gets stuck. > > Here is how I proceed: > >mpirun -np $NM -machinefile n

Re: [gmx-users] Multi-level parallelization: MPI + OpenMP

2013-07-18 Thread Éric Germaneau
I actually submitted using two MPI processes per node, but the log files do not get updated; it's like the calculation gets stuck. Here is how I proceed:
    mpirun -np $NM -machinefile nodegpu mdrun_mpi -nb gpu -v -deffnm test184000atoms_verlet.tpr >& mdrun_mpi.log
with the content of /nodegpu/:
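
One detail worth noting in that line: -deffnm is normally given the run name without the .tpr extension, since mdrun adds the per-file extensions itself. A sketch of the same launch with that change, keeping the $NM rank count and nodegpu machine file from above:

    mpirun -np $NM -machinefile nodegpu mdrun_mpi -nb gpu -v -deffnm test184000atoms_verlet >& mdrun_mpi.log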

[gmx-users] Multi-level parallelization: MPI + OpenMP

2013-07-18 Thread Éric Germaneau
Dear all, I'm not a gromacs user myself; I've installed gromacs 4.6.3 on our cluster and am running some tests. Each node of our machine has 16 cores and 2 GPUs. I'm trying to figure out how to submit efficient multi-node LSF jobs that use the maximum of the resources. After reading the documentation
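
As an illustration only, a minimal LSF script for this node layout (16 cores + 2 GPUs per node), assuming 2 nodes, 2 MPI ranks per node with 8 OpenMP threads each, and placeholder job and input names; how mpirun obtains the host list from the LSF allocation depends on the site's MPI/LSF integration:

    #!/bin/bash
    #BSUB -J gmx_test              # job name (placeholder)
    #BSUB -n 32                    # total cores: 2 nodes x 16 cores
    #BSUB -R "span[ptile=16]"      # place 16 cores on each node
    #BSUB -o gmx_test.%J.out
    #BSUB -e gmx_test.%J.err

    # 2 MPI ranks per node, 8 OpenMP threads per rank, one GPU per rank
    export OMP_NUM_THREADS=8
    mpirun -np 4 mdrun_mpi -ntomp 8 -gpu_id 01 -deffnm test184000atoms_verlet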