Re: [OMPI users] Problem with 1.3.2 - need tips on debugging

2009-05-29 Thread Ralph Castain
You have version confusion somewhere - the error message indicates that mpirun is looking for a component that only exists in the 1.2.x series, not in 1.3.x. Check that your LD_LIBRARY_PATH is pointing to the 1.3.2 location, along with your PATH. On Fri, May 29, 2009 at 12:52 PM, Jeff Layton wr

Re: [OMPI users] Problem with 1.3.2 - need tips on debugging

2009-05-29 Thread Jeff Layton
I've got some more information (after rebuilding Open MPI and the application a few times). I put -mca mpi_show_mca_params enviro in my mpirun line to get some of the parameter information back. I get the following information back (warning - it's long). ---

Re: [OMPI users] "An error occurred in MPI_Recv" with more than 2 CPU

2009-05-29 Thread Eugene Loh
vasilis wrote: The original issue, still reflected by the subject heading of this e-mail, was that a message overran its receive buffer. That was fixed by using tags to distinguish different kinds of messages (res, jacob, row, and col). I thought the next problem was the small (10^-

[OMPI users] Problem with 1.3.2 - need tips on debugging

2009-05-29 Thread Jeff Layton
Good morning, I just built 1.3.2 on a ROCKS 4.something system. I built my code (CFL3D) with the Intel 10.1 compilers. I also linked in the OpenMPI libs and the Intel libraries to make sure I had the paths correct. When I try running my code, I get the following error: executing task of job 29

Re: [OMPI users] "An error occurred in MPI_Recv" with more than 2 CPU

2009-05-29 Thread vasilis
> The original issue, still reflected by the subject heading of this e-mail, > was that a message overran its receive buffer. That was fixed by using > tags to distinguish different kinds of messages (res, jacob, row, and col). > > I thought the next problem was the small (10^-10) variations in

Re: [OMPI users] How to use Multiple links with OpenMPI??????????????????

2009-05-29 Thread shan axida
Hi Mr. Jeff Squyres, Is it possible to use bidirectional communication with MPI in an Ethernet cluster? I tried it once (I thought it would be possible because the switches are full duplex). However, I did not get the bandwidth improvement I was expecting. If your answer is YES, would you please tell me about p