Hi all,

I have been trying to set up a cluster with QLogic QLE7140 HCAs and a Cisco 
SFS-7000 24-port switch. The machines are running Debian Wheezy.

I have installed Open MPI from the Debian repositories (1.4.5), along with
these packages:
libibverbs1
libipathverbs1
libmthca1
librdmacm1
I have also tested Open MPI compiled from the latest sources (1.6.4), with 
the same results. To get MPI jobs to run at all, I modprobe rdma_ucm, 
ib_umad and ib_uverbs.
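For reference, this is roughly what I do after each boot (I assume the 
proper fix is listing the modules in /etc/modules, but I haven't confirmed 
that this is the canonical set):

    for m in rdma_ucm ib_umad ib_uverbs; do
        modprobe "$m"                          # load now
        grep -qx "$m" /etc/modules || echo "$m" >> /etc/modules  # at boot
    done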

I'm not actually sure that what I've done is enough to configure the 
network correctly, but I have tested several MPI-capable codes written in 
Fortran and C, selecting the openib interface with the flag '--mca btl 
openib,self'. Things initially work and the bandwidth is as expected, but 
after anywhere from 4 to 30 hours the jobs crash. The longest job to 
complete successfully ran for around 48 hours; they rarely make it past 4 
hours.
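A typical launch looks like this (the binary and hostfile names here are 
just placeholders):

    mpirun --mca btl openib,self -np 64 --hostfile hosts ./xhpl

This is always the error message: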

[[36446,1],2][../../../../../../ompi/mca/btl/openib/btl_openib_component.c:3238:handle_wc]
from host84 to: host85 error polling LP CQ with status RETRY EXCEEDED ERROR
status number 12 for wr_id 36085024 opcode 2  vendor error 0 qp_idx 3
--------------------------------------------------------------------------
The OpenFabrics stack has reported a network error event.  Open MPI
will try to continue, but your job may end up failing.

  Local host:        host85
  MPI process PID:   7912
  Error number:      10 (IBV_EVENT_PORT_ERR)

This error may indicate connectivity problems within the fabric;
please contact your system administrator.
--------------------------------------------------------------------------
The InfiniBand retry count between two MPI processes has been
exceeded.  "Retry count" is defined in the InfiniBand spec 1.2
(section 12.7.38):

    The total number of times that the sender wishes the receiver to
    retry timeout, packet sequence, etc. errors before posting a
    completion error.

This error typically means that there is something awry within the
InfiniBand fabric itself.  You should note the hosts on which this
error has occurred; it has been observed that rebooting or removing a
particular host from the job can sometimes resolve this issue.

Two MCA parameters can be used to control Open MPI's behavior with
respect to the retry count:

* btl_openib_ib_retry_count - The number of times the sender will
  attempt to retry (defaulted to 7, the maximum value).
* btl_openib_ib_timeout - The local ACK timeout parameter (defaulted
  to 20).  The actual timeout value used is calculated as:

     4.096 microseconds * (2^btl_openib_ib_timeout)

  See the InfiniBand spec 1.2 (section 12.7.34) for more details.

Below is some information about the host that raised the error and the
peer to which it was connected:

  Local host:   host85
  Local device: qib0
  Peer host:    host84

You may need to consult with your system administrator to get this
problem fixed.
--------------------------------------------------------------------------
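As an aside: if I read the help text above correctly, the default local 
ACK timeout works out to 4.096 us * 2^20 ~= 4.3 seconds, and I assume the 
two parameters could be raised with something like the following (same 
placeholder binary/hostfile; I don't know whether this would fix anything 
or merely postpone the failure):

    mpirun --mca btl openib,self \
           --mca btl_openib_ib_retry_count 7 \
           --mca btl_openib_ib_timeout 25 \
           -np 64 --hostfile hosts ./xhpl

(btl_openib_ib_retry_count already defaults to its maximum of 7; a 
btl_openib_ib_timeout of 25 would give 4.096 us * 2^25 ~= 137 seconds per 
retry.)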

On a few occasions the machine that initiated the failure (host85 in this 
example) crashed to the point of needing to be power-cycled; most times, 
though, only InfiniBand connectivity was lost. I have checked the kernel 
and system logs and can't find anything from around the time of the crash.
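When connectivity is lost, I assume checking the port state is the right 
first step (ibstat is from infiniband-diags, ibv_devinfo from 
ibverbs-utils; 'qib0' is the device name reported in the error above):

    ibstat qib0          # CA state, port state/physical state, LID
    ibv_devinfo -d qib0  # the same port as seen through libibverbs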

I have seen it recommended to use psm instead of openib for QLogic cards. 
Could this be part of the problem? After a lot of experimentation I am at 
a complete loss as to how to get psm up and running. If possible, could 
someone also help me understand which item in this list (libibverbs, 
libipathverbs, libmthca, librdmacm, ib_mad, ib_umad, rdma_ucm, ib_uverbs, 
ib_qib) is the actual driver for my card, and whether there is any way to 
configure it? This blog post:

http://swik.net/Debian/Planet+Debian/Julien+Blache%3A+QLogic+QLE73xx+InfiniBand+adapters,+QDR,+ib_qib,+OFED+1.5.2+and+Debian+Squeeze/e56if

seems to suggest that I will need to download the complete QLogic OFED 
stack and configure the driver, which I've tried to do and have failed on 
many counts.
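In case it helps anyone answer: my current (unverified) understanding is 
that sysfs shows which kernel driver is bound to the card, and that psm is 
selected in Open MPI through the cm PML and psm MTL rather than the openib 
BTL, roughly like this:

    # which driver claimed the HCA? (qib0 from the error output above)
    readlink /sys/class/infiniband/qib0/device/driver

    # my guess at a psm launch -- same placeholder binary/hostfile as above
    mpirun --mca pml cm --mca mtl psm -np 64 --hostfile hosts ./xhpl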

I would be very grateful for any advice at this stage.

Best regards,

Vanja
