Do you, perchance, have multiple TCP interfaces on at least one of the
nodes you're running on?
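For context, the errno=113 in the log below is EHOSTUNREACH ("No route to host") on Linux, which is consistent with one node advertising an interface address that the other nodes cannot route to. A quick sketch of decoding it (assuming a Linux errno table):

```python
import errno
import os

# Decode the errno reported by the failing connect() calls.
# On Linux, errno 113 is EHOSTUNREACH ("No route to host") --
# typical when a peer advertises an unreachable interface.
name = errno.errorcode[113]
message = os.strerror(113)
print(name, "-", message)
```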

There was a bug in the TCP network interface matching code during startup -- this
is fixed in v1.0.2.  Can you give that a whirl?
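If upgrading right away isn't convenient, a possible workaround is to restrict the TCP BTL to a single interface that all nodes share, via the `btl_tcp_if_include` MCA parameter. The interface name `eth0` here is just a placeholder; substitute whichever interface is common across your nodes:

```shell
# Limit Open MPI's TCP transport to one known-good interface
# ("eth0" is a placeholder -- use the interface your nodes share)
mpirun --mca btl_tcp_if_include eth0 -np 9 ./bt.A.9
```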


> -----Original Message-----
> From: users-boun...@open-mpi.org 
> [mailto:users-boun...@open-mpi.org] On Behalf Of Jeffrey B. Layton
> Sent: Tuesday, April 11, 2006 11:25 AM
> To: Open MPI Users
> Subject: [OMPI users] Problem running code with OpenMPI-1.0.1
> 
> Good morning,
> 
>    I'm trying to run one of the NAS Parallel Benchmarks (bt) with
> OpenMPI-1.0.1 that was built with PGI 6.0. The code never
> starts (at least I don't see any output) until I kill the code. Then
> I get the following message:
> 
> [0,1,2][btl_tcp_endpoint.c:559:mca_btl_tcp_endpoint_complete_connect] connect() failed with errno=113
> [0,1,4][btl_tcp_endpoint.c:559:mca_btl_tcp_endpoint_complete_connect] connect() failed with errno=113
> [0,1,8][btl_tcp_endpoint.c:559:mca_btl_tcp_endpoint_complete_connect] connect() failed with errno=113
> mpirun: killing job...
> 
> Any ideas on this one?
> 
> Thanks!
> 
> Jeff
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
