Re: [gmx-users] Difficulties with MPI in gromacs 4.6.3

2013-09-18 Thread Kate Stafford
Thanks for the response. On further investigation, the problem only seems to occur in jobs running via MPI on our GPU-enabled nodes, even if the simulation in question doesn't use GPUs. Re-compiling gromacs 4.6.3 without CUDA support eliminates the memory-hogging behavior. However, I'd like to actually...
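(For reference, a CUDA-free MPI build of this vintage is usually configured with the standard GROMACS 4.6 CMake switches; the sketch below is only illustrative, and the install prefix and compiler-wrapper name are assumptions, not details taken from the thread.)

    # Configure GROMACS 4.6.3 with MPI support but without the CUDA kernels.
    # GMX_MPI=ON builds an mdrun with the _mpi suffix; GMX_GPU=OFF omits GPU support.
    cd gromacs-4.6.3
    mkdir -p build-mpi-nogpu && cd build-mpi-nogpu
    CC=mpicc cmake .. -DGMX_MPI=ON -DGMX_GPU=OFF \
        -DCMAKE_INSTALL_PREFIX=$HOME/sw/gromacs-4.6.3-nogpu
    make -j 8 && make install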

Re: [gmx-users] Difficulties with MPI in gromacs 4.6.3

2013-09-16 Thread Mark Abraham
On Tue, Sep 17, 2013 at 2:04 AM, Kate Stafford wrote:
> Hi all,
>
> I'm trying to install and test gromacs 4.6.3 on our new cluster, and am
> having difficulty with MPI. Gromacs has been compiled against openMPI
> 1.6.5. The symptom is that running a very simple MPI process for any of
> the DHFR test...
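(When chasing a symptom like this, it is worth first confirming what the mdrun_mpi binary was actually built and linked against; the commands below are a generic sketch, not something prescribed in the thread.)

    # The version header printed by GROMACS 4.6 records the build configuration,
    # including the MPI library and whether GPU support was compiled in.
    mdrun_mpi -version | head -n 30

    # Check which MPI shared libraries the binary is linked against
    # (Open MPI 1.6.5 is expected here).
    ldd $(which mdrun_mpi) | grep -i mpi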

[gmx-users] Difficulties with MPI in gromacs 4.6.3

2013-09-16 Thread Kate Stafford
Hi all,

I'm trying to install and test gromacs 4.6.3 on our new cluster, and am having difficulty with MPI. Gromacs has been compiled against openMPI 1.6.5. The symptom is that running a very simple MPI process for any of the DHFR test systems:

    orterun -np 2 mdrun_mpi -s topol.tpr

produces this open...
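(For completeness, a minimal sketch of how such a DHFR test run is typically prepared and launched with the GROMACS 4.6 tools; the input file names are grompp's defaults and are assumptions here, and orterun is simply Open MPI's alternate name for mpirun.)

    # Build the run input topol.tpr from the benchmark files (names assumed; these are grompp's defaults).
    grompp -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr

    # Launch two MPI ranks of the MPI-enabled mdrun, exactly as in the report above.
    orterun -np 2 mdrun_mpi -s topol.tpr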