Rodrigo,
I do not understand what you mean by "deactivate my IB interfaces".
The hostfile is only used in the wire-up phase (to keep things simple, mpirun does
ssh <hostname> orted
under the hood, and <hostname> comes from your hostfile).
So, bottom line:
mpirun --mca btl openib,self,sm -hostfile hosts_eth ... (with IB interfaces down)
mpirun --mca btl openib,self,sm -hostfile hosts_ib0 ...
are expected to give the same performance.
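For illustration, hypothetical hostfile contents could look like this (the addresses below are made up); they only change which address mpirun uses for the ssh wire-up, not which network carries the MPI traffic:
# hosts_eth (hypothetical Ethernet addresses)
192.168.1.11
192.168.1.12
# hosts_ib0 (hypothetical IPoIB addresses)
10.0.0.11
10.0.0.12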
Since you have InfiniBand hardware, there are two options (a quick way to check which one applies is shown after this list):
- you built Open MPI with MXM support; in this case btl/openib is not used, but pml/cm and mtl/mxm are.
If you want to force btl/openib, you have to
mpirun --mca pml ob1 --mca btl openib,self,sm ...
- you did not build Open MPI with MXM support; in this case btl/openib is used for inter-node communications, and btl/sm is used for intra-node communications.
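If you are not sure which of these two cases applies to your build, one quick check (assuming ompi_info from your Open MPI installation is in your PATH) is
ompi_info | grep mxm
if that prints an mtl mxm component line, your Open MPI was built with MXM support.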
If you want the performance numbers for TCP over Ethernet, your command line is
mpirun --mca btl tcp,self,sm --mca pml ob1 --mca btl_tcp_if_include eth0 -hostfile hosts_eth ...
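And if you want the TCP over IPoIB numbers for comparison, the same idea as in my previous mail applies, something like
mpirun --mca pml ob1 --mca btl tcp,self,sm --mca btl_tcp_if_include ib0 -hostfile hosts_ib0 ...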
Cheers,
Gilles
On 3/21/2017 2:07 AM, Rodrigo Escobar wrote:
Thanks Gilles for the quick reply. I think I am confused about what the openib BTL specifies.
What am I doing when I run with the openib BTL but specify my eth
interface (...and deactivate my IB interfaces)?
Isn't openib only for IB interfaces?
Am I using RDMA here?
These two commands give the same performance:
mpirun --mca btl openib,self,sm -hostfile hosts_eth ... (with IB interfaces down)
mpirun --mca btl openib,self,sm -hostfile hosts_ib0 ...
Regards,
Rodrigo
On Mon, Mar 20, 2017 at 8:29 AM, Gilles Gouaillardet <gilles.gouaillar...@gmail.com> wrote:
You will get similar results with hosts_ib and hosts_eth
If you want to use TCP over Ethernet, you have to
mpirun --mca btl tcp,self,sm --mca btl_tcp_if_include eth0 ...
If you want to use TCP over IB (IPoIB), then
mpirun --mca btl tcp,self,sm --mca btl_tcp_if_include ib0 ...
Keep in mind that IMB calls MPI_Init_thread(MPI_THREAD_MULTIPLE); this is not only unnecessary here, it also has an impact on performance (with older versions, Open MPI fell back on IPoIB; with v2.1rc the impact should be minimal).
If you simply
mpirun --mca btl tcp,self,sm ...
then Open MPI will multiplex messages over both Ethernet and IPoIB.
Cheers,
Gilles
Rodrigo Escobar <rodave...@gmail.com> wrote:
Hi,
I have been trying to run the Intel IMB benchmarks to compare the performance of InfiniBand (IB) vs Ethernet. However, I am not seeing any difference in performance, even for communication-intensive benchmarks such as alltoallv.
Each of my machines has one Ethernet interface and one InfiniBand interface. I use the following command to run the alltoallv benchmark:
mpirun --mca btl self,openib,sm -hostfile hosts_ib IMB-MPI1 alltoallv
The hosts_ib file contains the IP addresses of the InfiniBand interfaces, but the performance is the same when I deactivate the IB interfaces and use my hosts_eth file, which has the IP addresses of the Ethernet interfaces. Am I missing something? What is really happening when I specify the openib btl if I am using the Ethernet network?
Thanks
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users