On Mon, 14 Nov 2005 10:38:03 -0700, Troy Telford <ttelf...@linuxnetworx.com> wrote:

My mvapi config is using the Mellanox IB Gold 1.8 IB software release.
Kernel 2.6.5-7.201 (SLES 9 SP2)

When I ran IMB using mvapi, I received the following error:
***
[0,1,2][btl_mvapi_component.c:637:mca_btl_mvapi_component_progress] error in posting pending send
[0,1,3][btl_mvapi_component.c:637:mca_btl_mvapi_component_progress] error in posting pending send
[0,1,2][btl_mvapi_component.c:637:mca_btl_mvapi_component_progress] error in posting pending send
***

Execution (for the mvapi test) is started with:
mpirun --prefix $MPI_HOME --mca btl mvapi,self -np 8 \
    -machinefile $work_dir/node.gen1 $work_dir/IMB-MPI1
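
If more verbose output from the BTL would help, I can re-run with the verbosity turned up. Assuming the btl_base_verbose MCA parameter is honored by this build, something like:

mpirun --prefix $MPI_HOME --mca btl mvapi,self --mca btl_base_verbose 100 \
    -np 8 -machinefile $work_dir/node.gen1 $work_dir/IMB-MPI1

ompi_info --param btl mvapi should also list whatever mvapi tunables are available, if any of those are worth poking at.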

A few clarifications:  here's the output, by program:

Error when executing Presta's 'com' test on mvapi:
[0,1,1][btl_mvapi_component.c:637:mca_btl_mvapi_component_progress] error in posting pending send
[0,1,0][btl_mvapi_component.c:637:mca_btl_mvapi_component_progress] error in posting pending send

Error for the 'allred' test:
[btl_mvapi_component.c:637:mca_btl_mvapi_component_progress] error in posting pending send
[0,1,5][btl_mvapi_component.c:637:mca_btl_mvapi_component_progress] error in posting pending send
[0,1,1][btl_mvapi_component.c:637:mca_btl_mvapi_component_progress] error in posting pending send
[0,1,6][btl_mvapi_component.c:637:mca_btl_mvapi_component_progress] error in posting pending send

For 'Globalop':
[0,1,2][btl_mvapi_component.c:637:mca_btl_mvapi_component_progress] error in posting pending send
[n54:12267] *** An error occurred in MPI_Reduce
[n54:12267] *** on communicator MPI_COMM_WORLD
[n54:12267] *** MPI_ERR_OTHER: known error not in list
[n54:12267] *** MPI_ERRORS_ARE_FATAL (goodbye)

For IMB:
[0,1,3][btl_mvapi_component.c:637:mca_btl_mvapi_component_progress] error in posting pending send
[0,1,2][btl_mvapi_component.c:637:mca_btl_mvapi_component_progress] error in posting pending send
[0,1,3][btl_mvapi_component.c:637:mca_btl_mvapi_component_progress] error in posting pending send

mvapi did run HPL successfully, but it hasn't finished running HPCC just yet.

Also, I can say that I've been successful in running HPL and HPCC over GM. In fact, I've been able to run IMB, Presta, HPCC, and HPL with no issues using GM, which pleases me.
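
For what it's worth, an easy way to isolate the mvapi BTL with the same binaries would be a plain TCP run. Assuming the tcp BTL is built into this install, the invocation would mirror the mvapi one above:

mpirun --prefix $MPI_HOME --mca btl tcp,self -np 8 \
    -machinefile $work_dir/node.gen1 $work_dir/IMB-MPI1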

I've just finished a build of RC7, so I'll go give that a whirl and report.
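
Once RC7 is installed on the nodes, I'll confirm the version that's actually being picked up with something like:

$MPI_HOME/bin/ompi_info | grep 'Open MPI'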
--
Troy Telford
Linux Networx
ttelf...@linuxnetworx.com
(801) 649-1356
