Hi,
I see degraded scaling when going from 96 to 192 cores on a Cray for my
system. I have periodic molecules and do umbrella sampling (with small
deviations from the reference), which might affect the performance. My code is
based on 4.5.5 without performance-critical modifications.
Erik
Hi,
With a fast network like Cray's you can easily get to 400-500 atoms/core
with 4.5 (that's 400+ cores for your system), perhaps even further.
With 4.6 this improves quite a bit (up to 2-3x).
--
Szilárd
On Wed, Nov 7, 2012 at 5:19 PM, Erik Marklund wrote:
> Hi,
>
> Sure you can go beyond 24 cores.
Hi,
I usually go with Open MPI. I think there are issues with LAM, but others
probably know more about that.
Erik
On 7 Nov 2012, at 19:36, Marcelo Depolo wrote:
> Hi guys.
>
>
> I've tried your suggestions and had no success. I got a "no resources
> available" error, but I checked and 50% of th
Hi,
I looked at one of my old pbs jobs and noted that the following line works for
me:
#PBS -l nodes=X:ppn=Y
where X is the number of nodes and Y is the number of cores per node. Perhaps
your batch system uses a different "dialect".
What errors do you get when you aim for more than one node?
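For a run spanning two 24-core nodes with that directive, a complete script might look like the sketch below. The launcher line, binary name (mdrun_mpi) and file prefix (topol) are placeholder assumptions, not from the original mail; on a Cray the launcher is often aprun rather than mpirun, and the exact invocation depends on your site:

```shell
#!/bin/bash
#PBS -N Gromacs
#PBS -l nodes=2:ppn=24   # X=2 nodes, Y=24 cores per node
#PBS -m ae

cd $PBS_O_WORKDIR

# Launch one MPI rank per core across both nodes (2 * 24 = 48).
# Site-specific: may be "aprun -n 48" on a Cray.
mpirun -np 48 mdrun_mpi -deffnm topol
```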
Hi Erik,
That's good news! But I've got problems using more than one node per
simulation (24 cores per node in my cluster). I don't know if it's a
parallelization script problem or something else. I'm using PBS scripts like
this:
#!/bin/bash
#PBS -N Gromacs
#PBS -lselect=1:ncpus=24
#PBS -m ae
#
Hi,
Sure you can go beyond 24 cores. I'm currently simulating ~170 000 atoms on 192
cores at ~45 ns a day; with half the number of processors I get ~27 ns a day.
It will of course depend on the hardware, the particular algorithms, run
parameters, and on the system details.
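As a back-of-the-envelope check of those numbers (my own arithmetic, not from the original mail), doubling the cores from 96 to 192 gives:

```python
# Throughput quoted above: ~27 ns/day on 96 cores, ~45 ns/day on 192 cores.
ns_day_96 = 27.0
ns_day_192 = 45.0

speedup = ns_day_192 / ns_day_96   # gain from doubling the core count
efficiency = speedup / 2.0         # fraction of ideal 2x scaling retained

print(f"speedup: {speedup:.2f}x, parallel efficiency: {efficiency:.0%}")
```

So the run keeps roughly 80% of ideal scaling over that doubling, which is consistent with the atoms/core guidance elsewhere in this thread.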
Erik
On 7 Nov 2012, at 16.