I ran some aggregate bandwidth tests between two hosts connected by
both QDR InfiniBand and RoCE-enabled 10 Gbps Mellanox cards.  The tests
measured the aggregate bandwidth for 16 cores on one host communicating
with 16 cores on the second host.  I saw the same performance as with the
QDR InfiniBand alone, so it appears that the addition of the 10 Gbps RoCE
cards is not helping.
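
     Roughly speaking, each test pairs every rank on the first host with a
rank on the second host and sums the per-pair rates.  The sketch below shows
the general shape of such a test; the message size, iteration count, and
rank layout are placeholders rather than the exact values I used.

/* Minimal aggregate bandwidth sketch: ranks 0..N/2-1 each stream
 * messages to a partner rank on the other host, and the per-sender
 * rates are summed.  Assumes ranks fill the first host before the
 * second (e.g. the default by-slot mapping with a hostfile giving
 * 16 slots per node), so ranks 0..15 sit on one host and 16..31 on
 * the other. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_BYTES  (4 * 1024 * 1024)   /* 4 MB messages (placeholder) */
#define ITERS      100                 /* iterations (placeholder)    */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int half = size / 2;
    int partner = (rank < half) ? rank + half : rank - half;
    char *buf = malloc(MSG_BYTES);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank < half)
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, partner, 0, MPI_COMM_WORLD);
        else
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, partner, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }
    double t = MPI_Wtime() - t0;

    /* Per-sender rate in Gbps, summed over all senders at rank 0. */
    double gbps = (rank < half) ? (8.0 * MSG_BYTES * ITERS) / t / 1e9 : 0.0;
    double total;
    MPI_Reduce(&gbps, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Aggregate bandwidth: %.2f Gbps\n", total);

    free(buf);
    MPI_Finalize();
    return 0;
}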

     Should Open MPI be using both interconnects in this case by default, or
is there something I need to configure to allow for this?  I suppose this is
essentially the same question as how to make use of two identical IB
connections on each node.  Or is the system simply ignoring the 10 Gbps
cards because they are the slower option?
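For example, I don't know whether both devices have to be listed explicitly
with something along the lines of

    mpirun --mca btl openib,sm,self \
           --mca btl_openib_if_include mlx4_0,mlx4_1 ...

(mlx4_0 and mlx4_1 being placeholders for whatever the two devices are
actually called on my nodes), or whether the openib BTL is supposed to
stripe large messages across all active ports on its own.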

     Any clarification on this would be helpful.  The only posts I've found
are very old and mostly discuss channel bonding of 1 Gbps cards.

                     Dave Turner

-- 
Work:     davetur...@ksu.edu     (785) 532-7791
             118 Nichols Hall, Manhattan KS  66502
Home:    drdavetur...@gmail.com
              cell: (785) 770-5929
