Dear Jeff,
Thanks for the information and for helping me out. I too delayed replying; I
wanted to test this, but the cluster here is down. I will check it and let
you know if it doesn't work.
Thanks
Bibrak Qamar
On Sat, May 24, 2014 at 5:23 AM, Jeff Squyres (jsquyres) wrote:
I am sorry for the delay in replying; this week got a bit crazy on me.
I'm guessing that Open MPI is striping across both your eth0 and ib0 interfaces.
You can limit which interfaces it uses with the btl_tcp_if_include MCA param.
For example:
# Just use eth0
mpirun --mca btl tcp,sm,self --mca btl_tcp_if_include eth0 ...
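Equivalently, you should be able to exclude interfaces instead with the
btl_tcp_if_exclude MCA param (untested sketch; note that when you override
the default exclude list, the loopback interface must be listed explicitly):

# Skip IPoIB and loopback; use everything else
mpirun --mca btl tcp,sm,self --mca btl_tcp_if_exclude lo,ib0 ...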
Here is the output of ifconfig:
-bash-3.2$ ssh compute-0-15 /sbin/ifconfig
eth0 Link encap:Ethernet HWaddr 78:E7:D1:61:C6:F4
inet addr:10.1.255.239 Bcast:10.1.255.255 Mask:255.255.0.0
inet6 addr: fe80::7ae7:d1ff:fe61:c6f4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST
Can you send the output of ifconfig on both compute-0-15.local and
compute-0-16.local?
On May 22, 2014, at 3:30 AM, Bibrak Qamar wrote:
Hi,
I am facing a problem running Open MPI over TCP (on 1G Ethernet). In
practice the bandwidth should not exceed 1000 Mbps, but for some data points
(in a point-to-point ping-pong test) it exceeds this limit. I checked with
MPICH and it works as expected.
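For reference, 1 Gbps works out to 125 MB/s before protocol overhead, so any
measured bandwidth above roughly 125 MB/s cannot be carried by a single 1G
Ethernet link alone. A quick back-of-the-envelope check:

```shell
# 1 Gbps link ceiling in MB/s: 1000 megabits / 8 bits per byte
# (TCP/IP and Ethernet framing overhead only lower the real ceiling)
echo $((1000 / 8))   # prints 125
```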
Following is the command I issue to run my program o