Hi,
Between two nodes I have dual Gigabit Ethernet full-duplex links. I was
benchmarking with non-blocking MPI send and receive, but I am only getting
a speed corresponding to a single Gigabit Ethernet full-duplex link
(< 2 Gbps). I have checked with ifconfig that this transfer is using both
interfaces (eth0 and eth1).
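(For reference, a non-blocking bandwidth test of the kind described above might look
roughly like the sketch below. The original benchmark code is not shown in the thread,
so this is only an illustration: the use of mpi4py, the 8 MB message size, and the 50
iterations are all my assumptions, not details from the post.)

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank                       # assumes exactly two ranks, one per node
nbytes = 8 * 1024 * 1024              # 8 MB messages (arbitrary choice)
iters = 50                            # repetition count (arbitrary choice)

sendbuf = np.zeros(nbytes, dtype='b')
recvbuf = np.empty(nbytes, dtype='b')

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(iters):
    # Post the send and the receive together so data moves in both
    # directions at once (exercising full duplex), then wait for both.
    reqs = [comm.Isend([sendbuf, MPI.BYTE], dest=peer),
            comm.Irecv([recvbuf, MPI.BYTE], source=peer)]
    MPI.Request.Waitall(reqs)
elapsed = MPI.Wtime() - t0

if rank == 0:
    gbits = iters * nbytes * 8 * 2 / 1e9          # bits sent plus received
    print("aggregate bandwidth: %.2f Gbit/s" % (gbits / elapsed))

(Run it with something like "mpirun -np 2 --hostfile hosts python bench.py" so the two
ranks land on different nodes.)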
Jayanta,
What is the bus on this machine? If it is PCI-X 133 you are going to be
limited there, and memory bandwidth could also be the bottleneck.
Thanks,
Galen
I am also guessing you might actually be using only one of the gigabit links
even though you have two available. I assume you have configured the
equal-cost multi-path (ECMP) IP routes between the two hosts correctly; even
then, ECMP as implemented in most IP stacks (I am not sure whether there is an
RFC specifying this) picks a path per flow or per destination rather than per
packet, so a single TCP connection will typically end up on only one of the links.
Hi Galen,
The GigE is on the ESB2. This lives on a 4GB/sec link to the MCH.
I believe we aren't really running close to the I/O bandwidth limit.
This MPI transfer does use both ports, as the RX and TX byte counts of both
eth0 and eth1 are increasing. But I am getting the same bandwidth with or
without the second link.
Hi,
I am trying to dynamically load mpi.dylib on Mac OS X (using ctypes in
Python). It seems to load fine, but when I call MPI_Init(), I get the error
shown below. I can call other functions just fine (like MPI_Initialized).
Also, my MPI install is seeing all the needed components and I can load them
myself without error.
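(For context, the load-and-init sequence being described can be sketched roughly as
below. The library path is a placeholder, and the RTLD_GLOBAL mode is an assumption on
my part rather than something taken from the post; it is shown only because dynamically
loaded MPI libraries that open plugin components at init time generally need their
symbols visible globally.)

import ctypes

# The path below is a placeholder -- point it at the actual mpi.dylib.
# Loading with RTLD_GLOBAL is an assumption: the default ctypes mode keeps
# symbols local, which can trip up MPI libraries that load plugins later.
libmpi = ctypes.CDLL("/path/to/mpi.dylib", mode=ctypes.RTLD_GLOBAL)

# MPI_Init(int *argc, char ***argv) accepts NULL for both arguments;
# ctypes passes None as a NULL pointer.
err = libmpi.MPI_Init(None, None)
print("MPI_Init returned", err)

# MPI_Initialized(int *flag), as mentioned in the post, works the same way.
flag = ctypes.c_int(0)
libmpi.MPI_Initialized(ctypes.byref(flag))
print("MPI_Initialized flag:", flag.value)

libmpi.MPI_Finalize()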