Thanks for the response. On further investigation, the problem only seems
to occur in jobs running via MPI on our GPU-enabled nodes, even if the
simulation in question doesn't use GPUs. Re-compiling gromacs 4.6.3 without
CUDA support eliminates the memory-hogging behavior. However, I'd like to
actually use the GPUs, so I'd rather track down the underlying cause than
drop CUDA support.
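
For reference, the CUDA-free rebuild was configured roughly as below. These
are the standard 4.6-series CMake options; the install prefix is just a
placeholder for our local path:

cmake .. -DGMX_MPI=ON -DGMX_GPU=OFF \
    -DCMAKE_INSTALL_PREFIX=/opt/gromacs/4.6.3-nocuda
make && make install

The CUDA-enabled build would use -DGMX_GPU=ON instead.
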
On Tue, Sep 17, 2013 at 2:04 AM, Kate Stafford wrote:
> Hi all,
>
> I'm trying to install and test gromacs 4.6.3 on our new cluster, and am
> having difficulty with MPI. Gromacs has been compiled against openMPI
> 1.6.5. The symptom is, running a very simple MPI process for any of the
> DHFR test systems:
>
> orterun -np 2 mdrun_mpi -s topol.tpr
>
> produces this openMPI error:
> [...]
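
For anyone trying to reproduce this, it may help to first confirm which MPI
and GPU settings the binary was actually built with. Something along these
lines should show it (the exact output fields may vary between 4.6.x
versions):

mdrun_mpi -version 2>&1 | grep -i -e "MPI library" -e "GPU support"
ompi_info | grep "Open MPI:"
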