See https://github.com/open-mpi/ompi/pull/1439
I was seeing this problem when enabling CUDA support, as it sets
btl_openib_max_send_size to 128k but does not change the receive queue
settings. I tested the commit in #1439 and it fixes the issue for me.
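For reference, you can confirm the value your build ends up using with
ompi_info (the exact output format varies by release; the grep below is
just illustrative):

    ompi_info --all | grep btl_openib_max_send_size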
-Nathan
On Tue, Mar 08, 2016 at 03:57:39PM +0900, Gilles Gouaillardet wrote:
This is a bug we need to deal with. If we are getting queue pair
settings from an ini file and the max_send_size is the default value,
we should set the max send size to the size of the largest queue pair.
I will work on a fix.
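To illustrate with made-up numbers (these are not the actual defaults):

    # hypothetical receive queues from an ini file; the largest
    # buffer here is 65536 bytes
    receive_queues = P,4096,1024:P,65536,256
    # a default max_send_size of 131072 (the CUDA value) exceeds
    # 65536, so the BTL should clamp max_send_size down to 65536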
-Nathan
On Tue, Mar 08, 2016 at 03:57:39PM +0900, Gilles Gouaillardet wrote:
Per the error message, can you try

    mpirun --mca btl_openib_if_include cxgb3_0 --mca btl_openib_max_send_size 65536 ...

and see whether it helps?
You can also try various settings for the receive queues; for example,
edit your /.../share/openmpi/mca-btl-openib-device-params.ini and set
the receive_queues parameter for your adapter.
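For example, the Chelsio T3 entry might look like this (a sketch; check
the section name and values in your copy of the file, and note that the
queue buffer size must be at least as large as btl_openib_max_send_size):

    [Chelsio T3]
    vendor_id = 0x1425
    receive_queues = P,65536,256,192,128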
Hello all,
I am asking for help with the following situation:
I have two (mostly identical) nodes. Each of them has (completely
identical):
1. QLogic 4x DDR InfiniBand adapters, AND
2. Chelsio S310E (T3 chip based) 10GbE iWARP cards.
Both are connected back-to-back, without a switch. The connection is
physi