On 07/10/2012 05:31 PM, Jeff Squyres wrote:
> +1. Also, not all Ethernet switches are created equal --
> particularly commodity 1 Gb/s Ethernet switches.
> I've seen plenty of crappy Ethernet switches rated for 1 Gb/s
> that could not reach that speed when under load.
Are you perhaps belittling my dear $43 switch?
On 07/10/2012 03:54 PM, David Warren wrote:
> Your problem may not be related to bandwidth. It may be latency or
> division of the problem. We found significant improvements running WRF
> and other atmospheric codes (CFD) over IB. The problem was not so much
> the amount of data communicated, but how long it takes to send it.
+1. Also, not all Ethernet switches are created equal -- particularly
commodity 1 Gb/s Ethernet switches. I've seen plenty of crappy Ethernet switches
rated for 1 Gb/s that could not reach that speed when under load.
On Jul 10, 2012, at 10:47 AM, Ralph Castain wrote:
> I suspect it mostly reflects communication patterns.
Your problem may not be related to bandwidth. It may be latency or
division of the problem. We found significant improvements running WRF
and other atmospheric codes (CFD) over IB. The problem was not so much
the amount of data communicated, but how long it takes to send it. Also,
is your model
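
A simple way to separate latency from bandwidth on the Gigabit switch is a
two-rank ping-pong test run across two boards. The sketch below is only
illustrative (the file name, message sizes and iteration count are arbitrary
choices, not anything from this thread); it assumes an Open MPI install of
that era with the mpicc wrapper and the --bynode placement option:

/* pingpong.c -- rough latency/bandwidth probe between two MPI ranks.
 * Build: mpicc -O2 pingpong.c -o pingpong
 * Run:   mpirun -np 2 --bynode ./pingpong   (one rank per board)
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int iters = 1000;
    const int sizes[] = { 8, 1024, 65536, 1048576 };  /* bytes */
    int rank, s, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (s = 0; s < 4; s++) {
        char *buf = malloc(sizes[s]);
        double t0, t1;

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < iters; i++) {
            if (rank == 0) {          /* rank 0 sends, then waits for the echo */
                MPI_Send(buf, sizes[s], MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, sizes[s], MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {   /* rank 1 echoes everything back */
                MPI_Recv(buf, sizes[s], MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, sizes[s], MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("%8d bytes: %.1f us per round trip\n",
                   sizes[s], 1.0e6 * (t1 - t0) / iters);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}

If the small messages cost tens to hundreds of microseconds per round trip
while the 1 MB messages still get close to wire speed, latency rather than
raw bandwidth is the limiting factor, which is the kind of behaviour David
describes for WRF-style codes.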
Thanks for your answer. You are right.
I've tried it on 4 nodes with 6 processes and things are worse.
So do you suggest that the only thing to do is to order an InfiniBand switch,
or is there a possibility to improve things by tuning MCA parameters?
From: Ralph Castain
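
On the MCA question: over plain Gigabit Ethernet there is not much headroom,
but the TCP BTL does expose a few knobs. The command below is only a sketch
(the interface name eth0, the hostfile name, the solver binary and the buffer
sizes are assumptions, not recommendations); ompi_info --param btl tcp will
show what your Open MPI build actually accepts:

mpirun -np 24 --hostfile hosts \
       --mca btl sm,self,tcp \
       --mca btl_tcp_if_include eth0 \
       --mca btl_tcp_sndbuf 524288 \
       --mca btl_tcp_rcvbuf 524288 \
       ./your_solver

Listing sm,self,tcp explicitly just makes sure shared memory is used between
ranks on the same board and TCP only between boards; Open MPI normally selects
this on its own, so do not expect miracles from it.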
I suspect it mostly reflects communication patterns. I don't know anything
about Saturne, but shared memory is a great deal faster than TCP, so the more
processes sharing a node the better. You may also be hitting some natural
boundary in your model - perhaps with 8 processes/node you wind up wi
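
One way to check that effect is to keep the total rank count fixed and only
change how many ranks share a board. The mpirun lines below are illustrative
(the hostfile name and solver binary are placeholders), using placement
options from the Open MPI 1.4/1.6 series:

# 24 ranks packed onto 2 boards: most communication stays in shared memory
mpirun -np 24 -npernode 12 --hostfile hosts ./your_solver

# the same 24 ranks spread over 4 boards: more traffic crosses the Gigabit switch
mpirun -np 24 -npernode 6 --hostfile hosts ./your_solver

If the packed run is clearly faster, the cost of crossing the switch on many
small messages is the dominant effect rather than raw bandwidth.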
Hi.
I have recently built a cluster on a Dell PowerEdge server running Debian 6.0.
The server is composed of 4 system boards, each with 2 hexa-core processors,
which gives 12 cores per system board.
The boards are linked by a local Gigabit Ethernet switch.
In order to parallelize the software Code Saturne