You will get similar results with hosts_ib and hosts_eth: the hostfile only tells Open MPI where to launch the processes, it does not select which network is used for MPI traffic.

If you want to use TCP over ethernet, you have to

    mpirun --mca btl tcp,self,sm --mca btl_tcp_if_include eth0 ...

If you want to use TCP over IB (IPoIB), then

    mpirun --mca btl tcp,self,sm --mca btl_tcp_if_include ib0 ...
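For example, a complete pair of runs to compare the two paths could look like the lines below (a sketch only: the interface names eth0/ib0 and the process count are assumptions that depend on your cluster):

    # TCP over the ethernet interface (eth0 assumed)
    mpirun -np 4 -hostfile hosts_eth --mca btl tcp,self,sm \
           --mca btl_tcp_if_include eth0 IMB-MPI1 alltoallv

    # TCP over IPoIB (ib0 assumed)
    mpirun -np 4 -hostfile hosts_ib --mca btl tcp,self,sm \
           --mca btl_tcp_if_include ib0 IMB-MPI1 alltoallv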
Keep in mind that IMB calls MPI_Init_thread(MPI_THREAD_MULTIPLE). This is not only unnecessary here, it also has an impact on performance (with older versions, Open MPI fell back on IPoIB; with v2.1rc the impact should be minimal).

If you simply

    mpirun --mca btl tcp,self,sm ...

then Open MPI will multiplex messages over both ethernet and IPoIB.

Cheers,

Gilles

Rodrigo Escobar <rodave...@gmail.com> wrote:
>Hi,
>
>I have been trying to run the Intel IMB benchmarks to compare the performance of
>Infiniband (IB) vs Ethernet. However, I am not seeing any difference in
>performance, even for communication-intensive benchmarks such as alltoallv.
>
>Each of my machines has one ethernet interface and one infiniband
>interface. I use the following command to run the alltoallv benchmark:
>
>mpirun --mca btl self,openib,sm -hostfile hosts_ib IMB-MPI1 alltoallv
>
>The hosts_ib file contains the IP addresses of the infiniband interfaces, but
>the performance is the same when I deactivate the IB interfaces and use my
>hosts_eth file, which has the IP addresses of the ethernet interfaces. Am I
>missing something? What is really happening when I specify the openib btl if I
>am using the ethernet network?
>
>Thanks
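One way to confirm which transport Open MPI actually selects for a run is to raise the verbosity of the BTL framework. A minimal sketch (the verbosity level, process count, and hostfile name are assumptions, not from the original thread):

    # ask the BTL framework to log component selection on each rank,
    # so you can see whether openib or tcp is used for inter-node traffic
    mpirun -np 4 -hostfile hosts_ib --mca btl self,sm,openib \
           --mca btl_base_verbose 100 IMB-MPI1 alltoallv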