Re: [OMPI users] mca_pml_ob1_send blocks

2009-08-25 Thread Shaun Jackman
Jeff Squyres wrote: On Aug 24, 2009, at 2:18 PM, Shaun Jackman wrote: I'm seeing MPI_Send block in mca_pml_ob1_send. The packet is shorter than the eager transmit limit for shared memory (3300 bytes < 4096 bytes). I'm trying to determine if MPI_Send is blocking due to a deadlock. Will MPI_Send…
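A first diagnostic step for the question above is to confirm what the shared-memory BTL's eager limit actually is on the installation in question. This is a hedged sketch: the parameter name `btl_sm_eager_limit` is from Open MPI 1.3-era documentation and should be verified against your installed version's `ompi_info` output.

```shell
# Query the shared-memory BTL's eager limit (name as in Open MPI 1.3.x).
# Messages at or below this size are sent eagerly.
ompi_info --param btl sm | grep eager_limit
```

Note that even an eager send needs free space in the shared-memory FIFO; if the receiver never enters the MPI library to drain it, an eager MPI_Send can still block.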

Re: [OMPI users] gfortran, gcc4.2, openmpi 1.3.3, fortran compile errors

2009-08-25 Thread Jeff Squyres
On Aug 25, 2009, at 1:16 PM, Jason Palmer wrote: I seem to have fixed the problem using the miracle of LD_LIBRARY_PATH. I probably should have known about the importance of that environment variable already, and I imagine not knowing about it has caused me problems in the past. Yes, this…

Re: [OMPI users] gfortran, gcc4.2, openmpi 1.3.3, fortran compile errors

2009-08-25 Thread Jason Palmer
I seem to have fixed the problem using the miracle of LD_LIBRARY_PATH. I probably should have known about the importance of that environment variable already, and I imagine not knowing about it has caused me problems in the past. So besides the important environment variables listed in the OpenMPI…
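The LD_LIBRARY_PATH fix referred to above typically looks like the following. This is a sketch under assumptions: the install prefix `/opt/openmpi-1.3.3` is a placeholder for whatever `--prefix` was given to configure.

```shell
# Assumed install prefix -- substitute your own --prefix value.
MPI_HOME=/opt/openmpi-1.3.3
export PATH="$MPI_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$MPI_HOME/lib:$LD_LIBRARY_PATH"
```

Putting these lines in the shell startup file (and on remote nodes, for non-interactive shells) is what usually resolves "library not found" errors at mpirun time.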

[OMPI users] explicit routing for multiple network interfaces

2009-08-25 Thread Jayanta Roy
Hi, I am using Openmpi (version 1.2.2) for MPI data transfer using non-blocking MPI calls like MPI_Isend, MPI_Irecv etc. I am using "--mca btl_tcp_if_include eth0,eth1" to use both the eth links for data transfer within 48 nodes. Now I have added eth2 and eth3 links on the 32 compute nodes. My aim…
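To bring the new links into use, the usual approach is simply to list all interfaces in the include parameter; Open MPI's TCP BTL stripes traffic across them. This is a hedged sketch for the 1.2-era behavior: nodes that lack eth2/eth3 may need a different value (for example via a per-host MCA parameter file), and `btl_tcp_if_include` alone cannot express explicit per-peer routing.

```shell
# Stripe across all four interfaces on nodes that have them.
mpirun --mca btl tcp,self \
       --mca btl_tcp_if_include eth0,eth1,eth2,eth3 \
       -np 48 ./a.out
```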

Re: [OMPI users] mca_pml_ob1_send blocks

2009-08-25 Thread Jeff Squyres
On Aug 24, 2009, at 2:18 PM, Shaun Jackman wrote: I'm seeing MPI_Send block in mca_pml_ob1_send. The packet is shorter than the eager transmit limit for shared memory (3300 bytes < 4096 bytes). I'm trying to determine if MPI_Send is blocking due to a deadlock. Will MPI_Send block even when sending…
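As a quick experiment (not a fix for a true deadlock), the shared-memory eager limit can be raised at run time so the 3300-byte message stays well inside it. The parameter name `btl_sm_eager_limit` is assumed from Open MPI 1.3.x; check it with `ompi_info` first.

```shell
# Raise the sm eager limit to 8 KiB for this run only.
mpirun --mca btl_sm_eager_limit 8192 -np 2 ./a.out
```

If both ranks call MPI_Send to each other first, even eager sends can eventually block once internal buffers fill; the portable fix is to pair MPI_Isend/MPI_Irecv or use MPI_Sendrecv rather than to enlarge buffers.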

[OMPI users] gfortran, gcc4.2, openmpi 1.3.3, fortran compile errors

2009-08-25 Thread Jason Palmer
Hi, I'm trying to build openmpi with gcc4.2. I built gcc with thread support in order to use OpenMP. I have been able to compile and run a threaded OpenMP program with gfortran from gcc4.2, so the gfortran program itself seems to be working. However, when I try to configure OpenMPI 1.3.3, set…
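Pointing Open MPI's configure at a side-by-side gcc 4.2 install is normally done with the standard compiler variables. The `gcc-4.2`/`gfortran-4.2` executable names below are assumptions about how the alternate toolchain is installed; substitute full paths if needed.

```shell
# Build Open MPI with an alternate GCC toolchain (names assumed).
./configure CC=gcc-4.2 CXX=g++-4.2 F77=gfortran-4.2 FC=gfortran-4.2 \
            --prefix=$HOME/openmpi-1.3.3
make all install
```

Keeping the prefix under $HOME and exporting the matching PATH/LD_LIBRARY_PATH avoids picking up a different system-wide MPI by accident.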

[OMPI users] Need help with tuning of IB for OpenMPI 1.3.3

2009-08-25 Thread Ake Sandgren
Hi! We have one user code that is having lots of problems with RNRs or sometimes hangs. (The same code runs ok on another IB based system which has full connectivity and on our Myrinet system) The IB network has a 7:3 overload, i.e. 7 nodes per 3 IB links up to the main Cisco switch. In other words…
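RNR NAKs on an oversubscribed fabric are sometimes worked around by making the openib BTL's retry and timeout settings more forgiving. The parameter names below are from Open MPI 1.3-era builds and should be verified with `ompi_info` before use; the values shown are illustrative, not recommendations.

```shell
# Inspect the current openib retry/timeout settings first.
ompi_info --param btl openib | grep -E 'retry|timeout'

# Then try more forgiving values (7 is the IB-spec maximum retry count).
mpirun --mca btl_openib_ib_retry_count 7 \
       --mca btl_openib_ib_timeout 20 \
       -np 64 ./app
```

These only mask congestion; if the 7:3 oversubscription is the root cause, fabric-level fixes (routing, HOQ settings, more uplinks) are the durable answer.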