Hello everyone,

I am trying to get maximum performance out of my two-node test setup. Each node has 4 Sandy Bridge CPUs, and each CPU has one directly attached Mellanox QDR HCA. The two nodes are connected through an 8-port Mellanox switch. So far I have found no option for binding MPI ranks to a specific card, as MVAPICH2 offers. Is there a way to change Open MPI's round-robin behavior, perhaps something like "btl_tcp_if_seq" that I have missed?
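To illustrate what I am after: a small wrapper script along the following lines would do the assignment by hand, using Open MPI's per-node local rank to pick one HCA per rank. This is just a sketch, not something Open MPI provides; the device names mlx4_0..mlx4_3 and the rank-to-card mapping are assumptions for my nodes (checked with ibstat), and I would much prefer a built-in option over this:

    #!/bin/sh
    # select-hca.sh -- hypothetical wrapper, not an Open MPI feature.
    # Picks one of the four assumed HCAs (mlx4_0..mlx4_3) round-robin
    # by local rank, then sets the openib BTL include list via the
    # OMPI_MCA_* environment mechanism before exec'ing the real binary.
    lrank=${OMPI_COMM_WORLD_LOCAL_RANK:?must be started under Open MPI}
    export OMPI_MCA_btl_openib_if_include=mlx4_$((lrank % 4))
    exec "$@"

launched as, e.g.:

    mpirun -np 8 ./select-hca.sh ./my_benchmark

Note this only matches ranks to their local HCA if ranks are also bound to sockets in the same order, which is another reason I would rather have a proper binding option.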


Kind regards,
Tobias
