>>> You sound like our vendors, "what is your app"  
>> 
>> ;-) I used to be one.
>> 
>> Ideally OMPI should do the switch between MXM/RC/XRC internally in the 
>> transport layer. Unfortunately, we don't have such smart selection logic. 
>> Hopefully the IB vendors will fix this some day.
> 
> I actually looked in the openib-hca.ini (working from memory) to try and find 
> what the default queues were, and I couldn't figure it out. The ConnectX 
> entry doesn't have a default, and the 'default default' also doesn't have 
> an entry.
> 
> I need to dig into ompi_info; I got distracted by an Intel compiler bug. ADD 
> for admin/user support folks.

"default default" QP configuration is tuned for Mellanox devices. 
Therefore the openib-hca.ini file doesn't have a special configuration for 
Connect-X.
The proper way to check default configuration is ompi_info utility.
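
For example, something along these lines should print the active queue
specification (the exact parameter listing depends on your Open MPI version):

  $ ompi_info --param btl openib | grep receive_queues

The value is a colon-separated list of queue specifications, where a leading
"P" means a per-peer queue, "S" a shared receive queue (SRQ), and "X" an XRC
queue, each followed by buffer size and queue depth settings.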

> 
>> 
>>> 
>>> Note most of our users run just fine with the standard peer-to-peer queues, 
>>> default out-of-the-box Open MPI.
>> 
>> The P2P queue is fine, but most likely your users will observe better 
>> performance using XRC. It is not just about scalability.
> 
> Cool, thanks for all the input. I wonder why peer-to-peer is the default; I 
> know XRC requires hardware support.

There is a historical reason behind this. OpenFabrics decided not to include 
the XRC transport in the default distribution; the XRC feature was only 
available as part of the Mellanox OFED distribution. I think the OFA community 
recently decided to include XRC, so we should actually consider enabling XRC 
by default.
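
In the meantime, anyone who wants to try XRC can enable it by hand through the
receive_queues MCA parameter. A rough sketch (the queue sizes here are only
illustrative, not tuned recommendations, and you need an XRC-capable HCA plus
an OFED stack that ships XRC support):

  $ mpirun --mca btl openib,self \
        --mca btl_openib_receive_queues \
        X,128,256,192,128:X,2048,256,128,32:X,12288,256,128,32:X,65536,256,128,32 \
        ./your_app

Keep in mind that XRC queues cannot be mixed with per-peer or SRQ queues in
the same specification; if one queue in the list is "X", they all have to be.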

-Pasha
