Hi,
Some time ago I posted some questions about fully using dual gigabit
support. I get a ~140MB/s full-duplex transfer rate in each of the
following runs.
mpirun --mca btl_tcp_if_include eth0 -n 4 -bynode -hostfile host a.out
mpirun --mca btl_tcp_if_include eth1 -n 4 -bynode -hostfile host a.out
Did you try channel bonding? If your OS is Linux, there are plenty of
howtos on the internet that will tell you how to do it.
However, your CPU might be the bottleneck in this case. How much CPU
horsepower is still available at 140MB/s?
If the CPU *is* the bottleneck, changing your network driver
Hello,
On 10/23/06, Jayanta Roy wrote:
Hi,
Some time ago I posted some questions about fully using dual gigabit
support. I get a ~140MB/s full-duplex transfer rate in each of the
following runs.
That's impressive, since it's _more_ than the theoretical limit of 1Gb
ethernet. 140MB/s works out to roughly 1120Mb/s, more than a single 1Gb
link can carry in one direction.
Hi,
I have tried lamboot with a host file where the odd and even nodes talk
among themselves using eth0 and talk across the two groups using eth1. My
transfer runs at ~230MB/s at the start, but after a few transfers the rate
falls to ~130MB/s, and after a long run it finally comes down to ~54MB/s.
Why this type
I don't know what your bandwidth tester looks like, but 140MB/s is
way too much for a single GigE card, unless it's a bidirectional
bandwidth. Usually, on a new-generation GigE card (Broadcom
Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express) with an
AMD processor (AMD Athlon(tm
On 10/23/06, Jayanta Roy wrote:
Some time ago I posted some questions about fully using dual gigabit
support. I get a ~140MB/s full-duplex transfer rate in each of the
following runs.
Can you please tell me how you are measuring transfer rates? I mean,
can you show us a snippet of the code you use?
Hi George,
Yes, it is duplex BW. The BW benchmark is a simple timing call around an
MPI_Alltoall call. Then you estimate the network traffic from the send
buffer size and get the rate (a sketch of such a loop follows this
message).
Regards,
Jayanta
On Mon, 23 Oct 2006, George Bosilca wrote:
I don't know what your bandwidth tester looks
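For reference, here is a minimal sketch of the kind of MPI_Alltoall timing
loop described above. The buffer size, iteration count, and rate formula
are illustrative assumptions, not Jayanta's actual benchmark.

/* Rough MPI_Alltoall bandwidth estimate: time a loop of all-to-all
 * exchanges and divide the bytes each rank sends by the elapsed time.
 * Buffer size and iteration count are arbitrary choices. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int count = 1 << 18;   /* ints sent to each peer per call (~1MB) */
    const int iters = 50;
    int rank, nprocs, i;
    int *sendbuf, *recvbuf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    sendbuf = calloc((size_t)count * nprocs, sizeof(int));
    recvbuf = calloc((size_t)count * nprocs, sizeof(int));

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++)
        MPI_Alltoall(sendbuf, count, MPI_INT,
                     recvbuf, count, MPI_INT, MPI_COMM_WORLD);
    t1 = MPI_Wtime();

    if (rank == 0) {
        /* Bytes handed to MPI by one rank (this simple estimate also
         * counts the data a rank "sends" to itself). */
        double sent = (double)count * nprocs * sizeof(int) * iters;
        printf("approx. per-rank send rate: %.1f MB/s\n",
               sent / (t1 - t0) / 1.0e6);
    }
    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

Note that in an all-to-all every link carries traffic in both directions at
once, so a figure near twice the one-way GigE limit is plausible for a
duplex measurement.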
What I think is happening is this:
The initial transfer rate you are seeing is the burst rate; only after
averaging over a long time does your sustained transfer rate emerge (see
the sketch after this message). As George said, you should use a proven
tool to measure your bandwidth. We use netperf, a free tool from HP.
That said, the ethernet te
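One way to see the burst-versus-sustained effect directly is to report a
rate per block of iterations instead of one long average. As a rough
illustration (block size and counts are arbitrary), the single timed loop
in the earlier sketch could be replaced with something like:

/* Report a rate for every block of 10 all-to-all calls so any decay
 * from the burst rate to the sustained rate becomes visible. */
int block;
for (block = 0; block < 20; block++) {
    t0 = MPI_Wtime();
    for (i = 0; i < 10; i++)
        MPI_Alltoall(sendbuf, count, MPI_INT,
                     recvbuf, count, MPI_INT, MPI_COMM_WORLD);
    t1 = MPI_Wtime();
    if (rank == 0) {
        double sent = (double)count * nprocs * sizeof(int) * 10;
        printf("block %2d: %.1f MB/s\n",
               block, sent / (t1 - t0) / 1.0e6);
    }
}

If the printed rate starts high and then settles, that is the burst rate
giving way to the sustained rate described above.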
We manage to get 900+ Mbps on a Broadcom 570x chip. We run jumbo
frames and use a Force10 switch. This is also with openmpi-1.0.2
(I have not tried rebuilding NetPIPE with 1.1.2). We also see great
results with NetPIPE (MPI) on InfiniBand. Great work so far, guys.
120: 6291459 bytes 3
A couple of comments regarding issues raised by this thread.
1) In my opinion NetPIPE is not such a great network benchmarking tool for
HPC applications. It measures timings based on the completion of the send
call on the transmitter, not the completion of the receive. Thus, if there
is a delay in
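For contrast, a common alternative (this is a generic sketch, not NetPIPE's
code and not Tony's) is to time a full round trip, so the clock on the
sending side stops only once the echoed data has actually been received:

/* Ping-pong between ranks 0 and 1: rank 0's timer stops only after the
 * echoed message has been fully received, so a send call that returns
 * early cannot inflate the measured rate. Sizes are arbitrary. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int nbytes = 1 << 20;   /* 1 MB messages */
    const int reps = 100;
    int rank, size, i;
    double t0, t1;
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "needs at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }
    buf = calloc(nbytes, 1);

    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0) {
        t0 = MPI_Wtime();
        for (i = 0; i < reps; i++) {
            MPI_Send(buf, nbytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, nbytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
        t1 = MPI_Wtime();
        /* Each round trip moves nbytes in each direction, so the one-way
         * rate uses half the round-trip time. */
        printf("one-way rate: %.1f MB/s\n",
               (double)nbytes * reps / ((t1 - t0) / 2.0) / 1.0e6);
    } else if (rank == 1) {
        for (i = 0; i < reps; i++) {
            MPI_Recv(buf, nbytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, nbytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    free(buf);
    MPI_Finalize();
    return 0;
}

Because both the send and the matching receive must complete before the
timer stops, a delay on the receiving side shows up in the measurement
instead of being hidden.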
On 10/23/06, Tony Ladd wrote:
A couple of comments regarding issues raised by this thread.
1) In my opinion NetPIPE is not such a great network benchmarking tool for
HPC applications. It measures timings based on the completion of the send
call on the transmitter, not the completion of the recei
--- buildrpm.sh 2006-10-23 17:59:33.729764603 -0400
+++ buildrpm-fixed.sh 2006-10-23 17:58:33.145635240 -0400
@@ -11,6 +11,7 @@
#
prefix="/opt/openmpi"
+#/1.1.2/pgi"
specfile="openmpi.spec"
rpmbuild_options="--define 'mflags -j4'"
configure_options=
@@ -22,10 +23,10 @@
# Some distro's will
Committed -- thanks!
On Oct 23, 2006, at 7:14 PM, Joe Landman wrote:
--- buildrpm.sh 2006-10-23 17:59:33.729764603 -0400
+++ buildrpm-fixed.sh 2006-10-23 17:58:33.145635240 -0400
@@ -11,6 +11,7 @@
#
prefix="/opt/openmpi"
+#/1.1.2/pgi"
specfile="openmpi.spec"
rpmbuild_options="--define 'm