Re: [gmx-users] Gromacs 4 Scaling Benchmarks...

2008-11-12 Thread Martin Höfling
On Wednesday, 12 November 2008, at 06:18:14, vivek sharma wrote: > Thanks everybody for your useful suggestions. > What do you mean by the % imbalance reported in the log file? I don't know how to > assign a specific load to PME, but I can see that around 37% of the > computation is being used by PME. > I
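
For what it's worth, a minimal sketch of how a PME load estimate like that ~37% is usually turned into a node split; the core count (32), the MPI launcher, and the mdrun_mpi binary name are assumptions, not details from this thread:

    # Pick -npme so that npme/ntotal is close to grompp's PME load estimate,
    # e.g. 12 of 32 processes gives 0.375, close to the reported 0.37.
    mpirun -np 32 mdrun_mpi -npme 12 -deffnm bench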

Re: [gmx-users] Gromacs 4 Scaling Benchmarks...

2008-11-11 Thread Mark Abraham
vivek sharma wrote: 2008/11/11 Justin A. Lemkul <[EMAIL PROTECTED] > vivek sharma wrote: Hi Martin, I am using InfiniBand here, with a speed of more than 10 Gbps. Can you suggest some option to scale better in this case? What % imb

Re: [gmx-users] Gromacs 4 Scaling Benchmarks...

2008-11-11 Thread vivek sharma
2008/11/11 Justin A. Lemkul <[EMAIL PROTECTED]> > > > vivek sharma wrote: > >> Hi Martin, >> I am using InfiniBand here, with a speed of more than 10 Gbps. Can you >> suggest some option to scale better in this case? >> >> > What % imbalance is being reported in the log file? What fraction of the

Re: [gmx-users] Gromacs 4 Scaling Benchmarks...

2008-11-11 Thread Christian Seifert
A page on the wiki with further information and hints would be nice. Topic: "Improving performance with GMX4" or "Pimp my GMX4" ;-) The beta manual page of mdrun (version 4) is not very comprehensible/user-friendly in my eyes. - Christian On Tue, 2008-11-11 at 09:12 -0500, Justin A. Lemkul wrote:

Re: [gmx-users] Gromacs 4 Scaling Benchmarks...

2008-11-11 Thread Justin A. Lemkul
vivek sharma wrote: Hi Martin, I am using InfiniBand here, with a speed of more than 10 Gbps. Can you suggest some option to scale better in this case? What % imbalance is being reported in the log file? What fraction of the load is being assigned to PME, from grompp? How many processor
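
For anyone looking for those numbers, a sketch of where grompp and the mdrun log report them; the file names are assumptions and the exact wording may differ between versions:

    # grompp prints its PME load estimate to the terminal when writing the .tpr
    grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr 2>&1 | tee grompp.out
    grep "relative computational load" grompp.out
    # the mdrun log summarises the domain-decomposition load imbalance at the end
    grep "Average load imbalance" md.log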

Re: [gmx-users] Gromacs 4 Scaling Benchmarks...

2008-11-11 Thread Mark Abraham
vivek sharma wrote: Hi all, one thing I forgot to mention: I am getting around 6 ns/day here, for a protein of around 2600 atoms. Much more relevant is how much water... You can also be rate-limited by I/O if you have poor hardware and/or are writing to disk excessively. Mark
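
On the I/O point, a sketch of .mdp output-frequency settings that keep disk writes modest; the values are purely illustrative, not a recommendation for this particular system:

    ; write full-precision trajectory frames rarely, and no forces at all
    nstxout     = 50000
    nstvout     = 50000
    nstfout     = 0
    ; keep log/energy updates and the compressed trajectory at a moderate rate
    nstlog      = 5000
    nstenergy   = 5000
    nstxtcout   = 5000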

Re: [gmx-users] Gromacs 4 Scaling Benchmarks...

2008-11-11 Thread vivek sharma
Hi all, one thing I forgot to mention: I am getting around 6 ns/day here, for a protein of around 2600 atoms. With thanks, Vivek 2008/11/11 vivek sharma <[EMAIL PROTECTED]> > Hi Martin, > I am using InfiniBand here, with a speed of more than 10 Gbps. Can you > suggest some option to scale

Re: [gmx-users] Gromacs 4 Scaling Benchmarks...

2008-11-11 Thread vivek sharma
Hi Martin, I am using InfiniBand here, with a speed of more than 10 Gbps. Can you suggest some option to scale better in this case? With thanks, Vivek 2008/11/11 Martin Höfling <[EMAIL PROTECTED]> > On Tuesday, 11 November 2008, at 12:06:06, vivek sharma wrote: > > > > I have also tried scaling gro

Re: [gmx-users] Gromacs 4 Scaling Benchmarks...

2008-11-11 Thread Carsten Kutzner
Hi Vivek, if you use separate PME nodes (-npme), then one group of processors calculates the long-range (reciprocal-space) part while the remaining processors do the short-range (direct-space) part of the Coulomb forces. The goal is to choose the number of nodes in both groups such
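
A sketch of how that balance is typically found by trial and error: run a short benchmark for a few -npme values and compare the performance summary at the end of each log. The binary name, core count, and file names below are assumptions:

    # try a few PME node counts on 32 cores; -maxh limits each trial run
    for npme in 8 10 12 16; do
        mpirun -np 32 mdrun_mpi -npme $npme -deffnm bench_npme$npme -maxh 0.1
        grep "Performance" bench_npme$npme.log
    done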

Re: [gmx-users] Gromacs 4 Scaling Benchmarks...

2008-11-11 Thread Martin Höfling
On Tuesday, 11 November 2008, at 12:06:06, vivek sharma wrote: > I have also tried scaling GROMACS on a number of nodes but was not > able to scale it beyond 20 processors, i.e. 20 nodes with 1 processor per As mentioned before, performance strongly depends on the type of interconnect you're

Re: [gmx-users] Gromacs 4 Scaling Benchmarks...

2008-11-11 Thread Justin A. Lemkul
vivek sharma wrote: Hi Carsten, I have also tried scaling GROMACS on a number of nodes but was not able to scale it beyond 20 processors, i.e. 20 nodes with 1 processor per node. I am not getting the point of optimizing PME for the number of nodes; is it that we can change the paramete

Re: [gmx-users] Gromacs 4 Scaling Benchmarks...

2008-11-11 Thread vivek sharma
Hi Carsten, I have also tried scaling GROMACS on a number of nodes but was not able to scale it beyond 20 processors, i.e. 20 nodes with 1 processor per node. I am not getting the point of optimizing PME for the number of nodes; is it that we can change the parameters for PME for MDS, or using

RE: [gmx-users] Gromacs 4 Scaling Benchmarks...

2008-11-10 Thread Mike Hanby
The FFTW used during compilation was FFTW 3.1.2, compiled using the GNU compilers. From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Yawar JQ Sent: Sunday, November 09, 2008 3:31 PM To: gmx-users@gromacs.org Subject: [gmx-users] Gromacs 4 Scaling Benchmarks... I was wondering
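
For completeness, a sketch of how a single-precision FFTW 3.1.2 is typically built with the GNU compilers and then picked up by the GROMACS 4 configure script; the install prefix and flags are illustrative assumptions:

    # build single-precision FFTW 3.1.2 with SSE support
    ./configure --prefix=$HOME/fftw-3.1.2 --enable-float --enable-sse CC=gcc
    make && make install
    # point the GROMACS 4 build at it and enable MPI
    export CPPFLAGS=-I$HOME/fftw-3.1.2/include
    export LDFLAGS=-L$HOME/fftw-3.1.2/lib
    ./configure --enable-mpi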

Re: [gmx-users] Gromacs 4 Scaling Benchmarks...

2008-11-10 Thread Carsten Kutzner
Hi, most likely the Ethernet is the problem here. I compiled some numbers for the DPPC benchmark in the paper "Speeding up parallel GROMACS on high-latency networks", http://www3.interscience.wiley.com/journal/114205207/abstract?CRETRY=1&SRETRY=0 which are for version 3.3, but PME will behav

[gmx-users] Gromacs 4 Scaling Benchmarks...

2008-11-09 Thread Yawar JQ
I was wondering if anyone could comment on these benchmark results for the d.dppc benchmark?

    Nodes   Cutoff (ns/day)   PME (ns/day)
      4        1.331             0.797
      8        2.564             1.497
     16        4.5               1.92
     32        8.308             0.575
     64       13.5               0.275
    128       20.093             -
    192       21.6               -

It seems to scale relatively well up to 32-64 nodes without PME. This se
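
One way to read that table is to convert the ns/day figures into parallel efficiency relative to the 4-node run; a quick awk sketch for the PME column (numbers copied from the table above):

    awk 'BEGIN {
        split("4 8 16 32 64", n); split("0.797 1.497 1.92 0.575 0.275", p);
        for (i = 1; i <= 5; i++)
            printf "%3d nodes: speedup %.2f, efficiency %.2f\n",
                   n[i], p[i]/0.797, (p[i]/0.797)/(n[i]/4)
    }'

On these numbers the PME efficiency falls from about 0.94 at 8 nodes to below 0.1 at 32, which matches the impression that the PME runs stop scaling somewhere between 16 and 32 nodes.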