On Feb 24, 2006, at 10:11 PM, Allan Menezes wrote:

  I have a 16-node AMD/P4 cluster running OSCAR 4.2.1 Beta and FC4. Each
machine has two Gigabit network cards: a Realtek 8169 connected to a
Netgear GS116 Gigabit switch with a maximum MTU of 1500, and a D-Link
card with a SysKonnect chipset connected to a managed Netgear GS724T
Gigabit switch with jumbo frames enabled on the switch and each D-Link
card's MTU set to 9000 in its ifcfg-eth1 file. Each machine has 512 MB
of memory (8 GB total for the cluster) and its own hard disk. I want to
know how I can use Open MPI (any version >= 1.0.1) with Ethernet bonding
so that the two Gigabit cards boost performance.

  I use HPL/Linpack for benchmarking and get 28.36 GFlops with Open MPI
1.1a1br9098 and 28.7 GFlops with MPICH2 1.0.3, in both cases using only
one Gigabit NIC (the D-Link) with jumbo frames at MTU=9000. I use
-mca btl tcp with Open MPI. Both runs use N = 26760 and NB = 120 in
HPL.dat, with P = 4 and Q = 4 for 16 processors.

  Can anyone tell me whether I would get a performance increase beyond
29 GFlops by using the two switches and both Gigabit NICs per node with
Ethernet channel bonding in FC4?

It is highly unlikely that channel bonding will increase your performance. Given how unbalanced your two TCP paths are (one NIC at MTU 1500, the other at MTU 9000), it is likely to hurt performance more than it helps. Channel bonding is really only a win for bandwidth; in most cases it does not improve latency, and sometimes it even hurts it. HPL doesn't really strain a network, and it's doubtful that the extra 10% (best case) bandwidth is going to help that much, especially since it can adversely affect latency.
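For reference, channel bonding on FC4 is set up through the Linux bonding driver rather than through MPI. A minimal sketch of what that configuration usually looks like follows; the IP address and interface names are placeholders, not taken from your setup:

    # /etc/modprobe.conf -- load the bonding driver for bond0;
    # balance-rr stripes frames round-robin across the slave NICs
    alias bond0 bonding
    options bond0 mode=balance-rr miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface
    DEVICE=bond0
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise ifcfg-eth1)
    # -- enslave each physical NIC to bond0
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes

Note also that the slaves share the bond device's MTU, so combining the MTU-1500 Realtek path with the MTU-9000 D-Link path would in practice mean running the whole bond at 1500 and giving up jumbo frames.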

Open MPI will try to use all available TCP links between hosts, so it was already striping messages across both your NICs (assuming both were configured at the time) and round-robining short messages where appropriate. The algorithm is fairly simplistic at this point and assumes that all network links are created equal.
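If you want to control which interfaces the TCP BTL uses (for example, to compare the jumbo-frame NIC alone against both NICs), the btl_tcp_if_include MCA parameter selects them explicitly. A sketch along the lines of your run; the hostfile name "hosts" and the HPL binary name "xhpl" are assumptions:

    # use only the jumbo-frame D-Link NIC (eth1)
    mpirun -np 16 -hostfile hosts -mca btl tcp,self \
        -mca btl_tcp_if_include eth1 ./xhpl

    # let the TCP BTL stripe across both NICs
    mpirun -np 16 -hostfile hosts -mca btl tcp,self \
        -mca btl_tcp_if_include eth0,eth1 ./xhpl

Including "self" alongside "tcp" keeps the send-to-self path available, which you generally want whenever you restrict the btl list by hand.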

Brian


--
  Brian Barrett
  Open MPI developer
  http://www.open-mpi.org/

