If you assign unique IP addresses to each container, you can then create a
hostfile that contains the IP addresses. Feed that to mpirun and it will work
just fine.
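For example (the container addresses and program name below are just
placeholders), the hostfile could simply list one container per line together
with the number of slots each should run:

  # hostfile
  172.17.0.2 slots=2
  172.17.0.3 slots=2

and the job would be launched with something along the lines of:

  mpirun -np 4 --hostfile hostfile ./a.out

assuming the containers can reach each other on those addresses and run an ssh
daemon (or some other launcher) so mpirun can start the remote processes.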
If you really want to do it under slurm, then slurm is going to need the list
of those IP addresses anyway. We read the slurm alloc
This is strange. I have a similar environment with one eth and one ipoib.
If I manually select the interface I want to use with TCP, I get the
expected results.
Here is over IB:
mpirun -np 2 --mca btl tcp,self -host dancer00,dancer01 --mca btl_tcp_if_include ib1 ./NPmpi
1: dancer01
0: dancer00
No
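The same run restricted to the Ethernet side would look something like this
(em1 here is just the name from your ifconfig output; use whatever your
Ethernet device is actually called):

  mpirun -np 2 --mca btl tcp,self -host dancer00,dancer01 --mca btl_tcp_if_include em1 ./NPmpi

If the numbers still look like IB numbers with the interface pinned this way,
then the traffic is not going over the device you think it is.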
Thanks George,
I am selecting the Ethernet device (em1) in the mpirun script.
Here is the ifconfig output:
em1 Link encap:Ethernet HWaddr E0:DB:55:FD:38:46
inet addr:10.30.10.121 Bcast:10.30.255.255 Mask:255.255.0.0
inet6 addr: fe80::e2db:55ff:fefd:3846/64 Scope:Link
UP
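One way to double-check that the traffic really goes over em1 and not the
IPoIB interface is to compare the interface byte counters before and after a
run, e.g. (hostnames here are placeholders):

  cat /sys/class/net/em1/statistics/rx_bytes /sys/class/net/em1/statistics/tx_bytes
  mpirun -np 2 --mca btl tcp,self --mca btl_tcp_if_include em1 -host node1,node2 ./NPmpi
  cat /sys/class/net/em1/statistics/rx_bytes /sys/class/net/em1/statistics/tx_bytes

The em1 counters should grow by roughly the amount of data the benchmark moved.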
Look at your ifconfig output and select the Ethernet device (instead of the
IPoIB one). Traditionally the name lacks any fanciness; most distributions
use eth0 as the default.
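For example, something along these lines should keep the TCP BTL on the
Ethernet device only (eth0 here is an assumption; substitute whatever name
your ifconfig output shows):

  mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 ...

or, equivalently, set it in the environment:

  export OMPI_MCA_btl_tcp_if_include=eth0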
George.
On Tue, Sep 9, 2014 at 11:24 PM, Muhammad Ansar Javed <muhammad.an...@seecs.edu.pk> wrote:
Hi,
I am currently conducting some tests on a system with Gigabit Ethernet and
InfiniBand interconnects. Both latency and bandwidth benchmarks perform as
expected on InfiniBand, but the Ethernet interconnect is achieving much
higher performance than expected. Ethernet and InfiniBand both
Hello,
Has anyone tried to run MPI-aware programs inside Docker.io containers?
We are trying to set up an HPC cluster with slurm and Docker as the main
components. While running simple programs looks doable, we do not really
know what the required steps are to run Open MPI programs.
Thanks,
Adrien