On 10/16/2015 02:27 PM, Shamis, Pavel wrote:
Well, OMPI will see this as 14 separate devices and will create ~28 openib
BTL instances (one per port).
Can you try limiting Open MPI to a single device/port and see what happens?
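(For example, something along these lines should restrict the openib BTL to a
single device/port - here "mlx4_0:1" and "./your_app" are just placeholders,
not values taken from this thread:

    mpirun --mca btl openib,self,sm \
           --mca btl_openib_if_include mlx4_0:1 ./your_app
)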
We are running inside an LXC container and only one IB interface shows up
inside of it. We get the same behavior inside and outside a container.
I will retest without SR-IOV and outside of a container, and report back.
Thanks,
John
Best,
Pasha
From: users <users-boun...@open-mpi.org> on behalf of John Marshall
<john.marsh...@ssc-spc.gc.ca>
Reply-To: Open Users <us...@open-mpi.org>
Date: Friday, October 16, 2015 2:16 PM
To: Open Users <us...@open-mpi.org>
Subject: Re: [OMPI users] openib issue with 1.6.5 but not later releases
On 10/16/2015 01:35 PM, Shamis, Pavel wrote:
Did you try running ibdiagnet to check the network?
Also, how many devices do you have on the same node?
It says "mlx4_14" - do you have 14 HCAs on the same machine?!
Yes. ibdiagnet seems to check out fine except for a few warnings which do
not seem consequential (e.g., more recent firmware available).
There is a single card with two ports but many interfaces (16/port, but we
are using only 1 port). We are using SR-IOV.
John
Best,
Pavel (Pasha) Shamis
---
Computer Science Research Group
Computer Science and Math Division
Oak Ridge National Laboratory
On Oct 16, 2015, at 10:26 AM, John Marshall <john.marsh...@ssc-spc.gc.ca> wrote:
Hi,
I have encountered a problem when running with 1.6.5 over IB (openib,
ConnectX-3):
[[51298,1],2][btl_openib_component.c:3496:handle_wc] from
ib7-bc2qq42-be01p02 to: 3 error polling LP CQ with status RETRY EXCEEDED ERROR
status number 12 for wr_id 217ce00 opcode 0 vendor error 129 qp_idx 0
--------------------------------------------------------------------------
The InfiniBand retry count between two MPI processes has been
exceeded. "Retry count" is defined in the InfiniBand spec 1.2
(section 12.7.38):
The total number of times that the sender wishes the receiver to
retry timeout, packet sequence, etc. errors before posting a
completion error.
This error typically means that there is something awry within the
InfiniBand fabric itself. You should note the hosts on which this
error has occurred; it has been observed that rebooting or removing a
particular host from the job can sometimes resolve this issue.
Two MCA parameters can be used to control Open MPI's behavior with
respect to the retry count:
* btl_openib_ib_retry_count - The number of times the sender will
attempt to retry (defaulted to 7, the maximum value).
* btl_openib_ib_timeout - The local ACK timeout parameter (defaulted
to 20). The actual timeout value used is calculated as:
4.096 microseconds * (2^btl_openib_ib_timeout)
See the InfiniBand spec 1.2 (section 12.7.34) for more details.
Below is some information about the host that raised the error and the
peer to which it was connected:
Local host: ib7-bc2qq42-be01p02
Local device: mlx4_14
Peer host: 3
You may need to consult with your system administrator to get this
problem fixed.
--------------------------------------------------------------------------
[[51298,1],0][btl_openib_component.c:3496:handle_wc] from
ib7-bc2qq42-be02p02 to: 1 error polling LP CQ with status RETRY EXCEEDED ERROR
status number 12 for wr_id 15a4e00 opcode 10979 vendor error 129 qp_idx 0
--------------------------------------------------------------------------
mpirun has exited due to process rank 2 with PID 534 on
node 2 exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[ib7-bc2qq42-be02p02:01438] 1 more process has sent help message
help-mpi-btl-openib.txt / pp retry exceeded
* We are using logical names for our targets (which explains Peer host: 3
above).
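For what it's worth, plugging the defaults quoted in the help text above into
its own formula (my arithmetic, not anything reported by the logs):

    4.096 microseconds * 2^20 ~= 4.3 s per retry
    7 retries * 4.3 s         ~= 30 s

so with the default settings a RETRY EXCEEDED error means the peer stayed
unreachable for roughly half a minute.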
This is reproducible with a simple program that does a send+recv around a
ring and calls a barrier before each iteration. The problem occurs at the
barrier.
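For reference, the reproducer is essentially of this shape (a sketch rather
than the exact code; the iteration count and payload are placeholders):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, i, token = 0, recv_token;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        for (i = 0; i < 1000; i++) {
            /* The failure reported above shows up at this barrier. */
            MPI_Barrier(MPI_COMM_WORLD);

            /* Pass a token around the ring: send right, receive from left. */
            MPI_Sendrecv(&token,      1, MPI_INT, (rank + 1) % size,        0,
                         &recv_token, 1, MPI_INT, (rank + size - 1) % size, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            token = recv_token + 1;
        }

        if (rank == 0)
            printf("done, token = %d\n", token);
        MPI_Finalize();
        return 0;
    }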
When I search for details on this problem, all I can find are suggestions
that it is hardware (cabling) related. Our network guys have checked and
everything appears to be set up correctly.
But when I run the same program built with 1.8.8 and 1.10.0 on the same
system, the problem does not occur (at least for the parameters I am using).
Also, when running with 1.6.5 using IB on another system (openib, ConnectX),
I do _not_ encounter the problem.
Does anyone have some insight into what might be going on? Should I really
be looking more into the hardware? I could begin migrating to >1.6.5, but I
am concerned that the problem would just pop up somewhere else.
Thanks,
John
_______________________________________________
users mailing list
us...@open-mpi.org
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post:
http://www.open-mpi.org/community/lists/users/2015/10/27884.php