While testing with IMB, I find that a 4200+ core run of the IMB Alltoall test,
with message lengths of 16..1024 bytes (via the -msglog 4:10 IMB option),
fails.
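For reference, the job is launched roughly like this (the hostfile name, binary
path, and exact process count below are placeholders, not the literal job
script):

  mpirun -np 4200 --hostfile hosts.txt ./IMB-MPI1 -msglog 4:10 Alltoall

The complete error output is: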

--------------------------------------------------------------------------
A process failed to create a queue pair. This usually means either
the device has run out of queue pairs (too many connections) or
there are insufficient resources available to allocate a queue pair
(out of memory). The latter can happen if either 1) insufficient
memory is available, or 2) no more physical memory can be registered
with the device.

For more information on memory registration see the Open MPI FAQs at:
http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages

Local host:             node7106
Local device:           mlx4_0
Queue pair type:        Reliable connected (RC)
--------------------------------------------------------------------------
[node7106][[51922,1],0][connect/btl_openib_connect_oob.c:867:rml_recv_cb] error 
in endpoint reply start connect
[node7106:06503] [[51922,0],0]-[[51922,1],0] mca_oob_tcp_msg_recv: readv 
failed: Connection reset by peer (104)
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 6504 on
node node7106 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

Yes, these are ALL of the error messages.  I did not get a message about not
being able to register enough memory.  I verified that log_num_mtt = 24 and
log_mtts_per_seg = 0, both by catting the files in
/sys/module/mlx4_core/parameters and by checking what is set in
/etc/modprobe.d/mlx4_core.conf (the commands and the registered-memory
arithmetic are sketched after the limits.conf settings below).  While a job at
this scale runs, I watch 'vmstat 10' for memory usage; a good amount of memory
remains free and swap is never used.  The settings in
/etc/security/limits.conf are:

* soft memlock  unlimited
* hard memlock  unlimited
* soft stack 300000
* hard stack unlimited
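
For completeness, this is roughly how I checked the mlx4 parameters and the
locked-memory limit, plus the registered-memory arithmetic per the formula in
the Open MPI FAQ linked in the error message (the 4096-byte page size and the
way I apply the formula are my assumptions):

  cat /sys/module/mlx4_core/parameters/log_num_mtt        # 24
  cat /sys/module/mlx4_core/parameters/log_mtts_per_seg   # 0
  ulimit -l                                    # expect 'unlimited' per limits.conf
  # max registerable memory ~= 2^log_num_mtt * 2^log_mtts_per_seg * page_size
  echo $(( (1 << 24) * (1 << 0) * 4096 ))      # 68719476736 bytes (64 GB)

Assuming that formula applies here, registered memory does not look like the
limiting factor, which is consistent with not seeing a registration error.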

I don't know if btl_openib_connect_oob.c or mca_oob_tcp_msg_recv are clues, but 
I am now at a loss as to where the problem lies.

This is for an application using Open MPI 1.6.5, and the systems have Mellanox
OFED 3.1.1 installed.
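
In case it matters, those versions can be double-checked with something like
the following (assuming both tools are on the default path):

  ompi_info | grep "Open MPI:"   # reports the Open MPI version
  ofed_info -s                   # reports the installed Mellanox OFED version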

--john
