[OMPI users] QLogic HCA random crash after prolonged use

2013-04-20 Thread Vanja Z
Hi all,

I have been trying to set up a cluster with QLogic QLE7140 HCAs and a Cisco 
SFS-7000 24-port switch. The machines are running Debian Wheezy.

I have installed OpenMPI from the repos (1.4.5) and also:
libibverbs1
libipathverbs1
libmthca1
librdmacm1
I have also tested OpenMPI compiled from the latest sources (1.6.4), with the 
same results. I load (modprobe) rdma_ucm, ib_umad and ib_uverbs in order to get 
MPI jobs to run.
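
For completeness, this is roughly how those modules get loaded and made 
persistent across reboots (listing them in /etc/modules is just the standard 
Debian mechanism):

    # load once by hand (as root)
    modprobe rdma_ucm
    modprobe ib_umad
    modprobe ib_uverbs

    # have them loaded at every boot (Debian)
    printf 'rdma_ucm\nib_umad\nib_uverbs\n' >> /etc/modules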

I'm not actually sure whether what I've done is enough to correctly configure 
the network, but I have tested several MPI-capable codes written in Fortran and 
C, specifying the openib interface with the flag '--mca btl openib,self' (a 
representative launch command is sketched below). Things initially work and the 
bandwidth is as expected; however, after anywhere from 4 to 30 hours the jobs 
crash. The longest job that has completed successfully ran for around 48 hours, 
but they rarely make it past 4 hours.
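
For reference, the jobs are launched along these lines (the host file, process 
count and binary name here are placeholders, not my exact command):

    mpirun --mca btl openib,self \
           -np 32 -hostfile hosts \
           ./my_mpi_app

The error message is always the same: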

[[36446,1],2][../../../../../../ompi/mca/btl/openib/btl_openib_component.c:3238:handle_wc]
from host84 to: host85 error polling LP CQ with status RETRY EXCEEDED ERROR
status number 12 for wr_id 36085024 opcode 2  vendor error 0 qp_idx 3
--
The OpenFabrics stack has reported a network error event.  Open MPI
will try to continue, but your job may end up failing.

  Local host:    host85
  MPI process PID:   7912
  Error number:  10 (IBV_EVENT_PORT_ERR)

This error may indicate connectivity problems within the fabric;
please contact your system administrator.
--
The InfiniBand retry count between two MPI processes has been
exceeded.  "Retry count" is defined in the InfiniBand spec 1.2
(section 12.7.38):

    The total number of times that the sender wishes the receiver to
    retry timeout, packet sequence, etc. errors before posting a
    completion error.

This error typically means that there is something awry within the
InfiniBand fabric itself.  You should note the hosts on which this
error has occurred; it has been observed that rebooting or removing a
particular host from the job can sometimes resolve this issue.

Two MCA parameters can be used to control Open MPI's behavior with
respect to the retry count:

* btl_openib_ib_retry_count - The number of times the sender will
  attempt to retry (defaulted to 7, the maximum value).
* btl_openib_ib_timeout - The local ACK timeout parameter (defaulted
  to 20).  The actual timeout value used is calculated as:

 4.096 microseconds * (2^btl_openib_ib_timeout)

  See the InfiniBand spec 1.2 (section 12.7.34) for more details.

Below is some information about the host that raised the error and the
peer to which it was connected:

  Local host:   host85
  Local device: qib0
  Peer host:    host84

You may need to consult with your system administrator to get this
problem fixed.
--
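
For completeness, this is a sketch of how those two MCA parameters could be 
raised on the mpirun command line (the values shown are only illustrative; 7 is 
already the default and maximum retry count):

    mpirun --mca btl openib,self \
           --mca btl_openib_ib_retry_count 7 \
           --mca btl_openib_ib_timeout 23 \
           -np 32 -hostfile hosts ./my_mpi_app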

On a few occasions the machine that initiated the failure (host85 in this 
example) crashed to the point of needing to be power cycled; most times, 
however, only InfiniBand connectivity was lost after the crash. I have checked 
the kernel and system logs and can't find anything at the time of the crash.

I have seen it recommended to use psm instead of openib for QLogic cards. Could 
this be part of the problem? After a lot of experimentation I am at a complete 
loss as to how to get psm up and running. If possible, could someone also help 
me understand which item in this list (libibverbs, libipathverbs, libmthca, 
librdmacm, ib_mad, ib_umad, rdma_ucm, ib_uverbs, ib_qib) is the actual driver 
for my card, and whether there is any way to configure the driver? This blog 
post
http://swik.net/Debian/Planet+Debian/Julien+Blache%3A+QLogic+QLE73xx+InfiniBand+adapters,+QDR,+ib_qib,+OFED+1.5.2+and+Debian+Squeeze/e56if
seems to suggest that I will need to download the complete QLogic OFED stack 
and configure the driver, which I've tried to do and failed at on many counts.

I would be very grateful for any advice at this stage.

Best regards,

Vanja



Re: [OMPI users] QLogic HCA random crash after prolonged use

2013-06-15 Thread Vanja Z
>>  I have seen it recommended to use psm instead of openib for QLogic cards.

> [Tom]
> Yes.  PSM will perform better and be more stable when running OpenMPI than
> using verbs.  Intel acquired the InfiniBand assets of QLogic about a year ago.
> These SDR HCAs are no longer supported, but should still work.  You can get
> the driver (ib_qib) and PSM library from OFED 1.5.4.1 or the current release,
> OFED 3.5.
>
> With the current OFED 3.5 release there are included PSM release notes which
> start out this way (read down to the OpenMPI build instructions for PSM):

Thanks for the reply (and sorry for my late response). I had already tried 
compiling OpenMPI with the "--with-psm" flag. It compiles but doesn't seem to 
get me much closer to actually using psm.
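
In case it's relevant, the configure invocation was roughly along these lines 
(the install prefix and PSM path here are placeholders, not my exact values):

    ./configure --prefix=/opt/openmpi-1.6.4 --with-psm=/usr
    make -j4 && make install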

I've found some software packages available from the Intel site,
http://www.intel.com/content/www/us/en/search.html?keyword=qlogic+ofed
It seems that installing these on a supported OS (RHEL 5/6 and SLES 10/11) is 
the recommended way to use QLogic/Intel cards. I also found this very 
informative post by Julien Blache explaining how he got it all working on 
Debian Squeeze,
http://swik.net/Debian/Planet+Debian/Julien+Blache%3A+QLogic+QLE73xx+InfiniBand+adapters,+QDR,+ib_qib,+OFED+1.5.2+and+Debian+Squeeze/e56if
It seems that, apart from building OpenMPI with the right flag, there is also 
some configuration required, involving at the very least a utility called 
iba_portconfig.sh and an openibd init script. I have tried getting these 
utilities from various sources and I can't find a version that doesn't 
segfault on my machines (Debian Wheezy). It's also not clear to me what should 
come from the Debian repos and what should come from the Intel packages, 
including what to do about the kernel :S
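
In case it helps with diagnosis, here are a few generic checks that can be run 
with just the standard Debian packages (ibverbs-utils and infiniband-diags, I 
believe), without any of the QLogic-specific tools:

    lsmod | grep ib_qib          # is the QLogic driver loaded?
    ibv_devinfo                  # verbs view of the qib0 device
    ibstat                       # port state and rate
    cat /sys/class/infiniband/qib0/ports/1/state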

The more I read online, the more it seems that these cards have absolutely no 
hope of operating stably. With a recent kernel upgrade I'm also getting a new 
MPI fork warning that some searching indicates is also connected to QLogic 
cards. I bought 24 of these cards a few months ago and it has turned into the 
biggest computer-related nightmare I've ever experienced. I'm beginning to 
think I'm better off trying to sell them and buy equivalent Mellanox cards (I 
have 2 Mellanox cards that seem to work fine on Debian out of the box).

Have I got any chance of making these cards work on Debian Wheezy?