[OMPI users] MPI_File_open return error code 16

2009-08-26 Thread Changsheng Jiang
Hi List, I am learning MPI. A small code snippet that tries to open a file with MPI_File_open gets error 16 (an internal error code) on one server with OpenMPI 1.3.3, but it runs correctly on another server (with OpenMPI 1.3.2). How can I fix this problem? Thanks. This is the snippet: int main(int argc, ...
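
The poster's snippet is cut off in the archive. As a point of reference, here is a minimal, hedged sketch of calling MPI_File_open and decoding a non-zero return code; the filename and access mode are assumptions, not the poster's actual values.

    /* Minimal sketch: open a file with MPI_File_open and report a failure.
     * The filename and access mode here are assumptions. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        int rc;

        MPI_Init(&argc, &argv);

        rc = MPI_File_open(MPI_COMM_WORLD, "testfile",
                           MPI_MODE_CREATE | MPI_MODE_WRONLY,
                           MPI_INFO_NULL, &fh);
        if (rc != MPI_SUCCESS) {
            char msg[MPI_MAX_ERROR_STRING];
            int len;
            MPI_Error_string(rc, msg, &len);
            fprintf(stderr, "MPI_File_open returned %d: %s\n", rc, msg);
        } else {
            MPI_File_close(&fh);
        }

        MPI_Finalize();
        return 0;
    }

MPI_Error_string translates the numeric code into a human-readable message, which is usually more helpful than the raw value 16 when reporting a problem.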

Re: [OMPI users] Problem with repeatedly spawning a few processes

2009-08-26 Thread Ralph Castain
This is a known issue - I'll test to see if it has been fixed for the upcoming 1.3.4. We know the problem does not exist in our devel trunk, but I don't know if the fix propagated to the 1.3 branch. On Aug 26, 2009, at 3:40 PM, Tim Miller wrote: Hello Everyone, I have a problem that I can ...

[OMPI users] Problem with repeatedly spawning a few processes

2009-08-26 Thread Tim Miller
Hello Everyone, I have a problem that I can't seem to figure out from searching the mailing list archive. I have a code that repeatedly spawns (via MPI_COMM_SPAWN) a group of 8 processes and then waits for them to finish. The problem is that OpenMPI (I've tried 1.3.1 and 1.3.3) opens a pipe each time ...
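
The rest of the message is truncated in the archive. For context, a hedged sketch of the spawn-and-wait pattern being described might look like the following; the executable name "worker", the iteration count, and the use of MPI_Comm_disconnect to wait for the children are assumptions.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        for (int iter = 0; iter < 100; iter++) {
            MPI_Comm children;
            int errcodes[8];

            /* Spawn a group of 8 worker processes. */
            MPI_Comm_spawn("worker", MPI_ARGV_NULL, 8, MPI_INFO_NULL,
                           0, MPI_COMM_SELF, &children, errcodes);

            /* ... exchange work with the children over 'children' ... */

            /* Disconnect waits until all communication on the
             * intercommunicator is complete; the children are expected
             * to call MPI_Comm_disconnect (or MPI_Finalize) on their side. */
            MPI_Comm_disconnect(&children);
        }

        MPI_Finalize();
        return 0;
    }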

Re: [OMPI users] Using OPENMPI configured for MX, GM and OPENIB interconnects

2009-08-26 Thread Scott Atchley
On Aug 26, 2009, at 4:20 PM, twu...@goodyear.com wrote: I see. My one script for all clusters calls mpirun --mca btl openib,mx,gm,tcp,sm,self so I'd need to add some logic above the mpirun line to figure out what cluster I am on and set up the correct mpirun line. Still, it seems like I shou ...

Re: [OMPI users] Using OPENMPI configured for MX, GM and OPENIB interconnects

2009-08-26 Thread twurgl
I see. My one script for all clusters calls mpirun --mca btl openib,mx,gm,tcp,sm,self so I'd need to add some logic above the mpirun line to figure out what cluster I am on and set up the correct mpirun line. Still, it seems like I should be able to use the mpirun line I have and just tell me wh ...
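
For illustration, the per-cluster logic described above might look something like the sketch below; the hostname patterns and the per-cluster btl subsets are assumptions, and only the full btl list comes from the thread.

    # Hedged sketch of wrapping the mpirun line in per-cluster logic.
    # Hostname patterns and per-cluster btl subsets are assumptions.
    case "$(hostname)" in
      ib-*) BTLS=openib,tcp,sm,self ;;
      mx-*) BTLS=mx,tcp,sm,self ;;
      gm-*) BTLS=gm,tcp,sm,self ;;
      *)    BTLS=openib,mx,gm,tcp,sm,self ;;
    esac
    mpirun --mca btl "$BTLS" ./a.out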

Re: [OMPI users] Using OPENMPI configured for MX, GM and OPENIB interconnects

2009-08-26 Thread Scott Atchley
On Aug 26, 2009, at 3:41 PM, twu...@goodyear.com wrote: When, for example, I run on an IB cluster, I get warning messages about not finding GM NICs and that another transport will be used, etc., and warnings about mca btl mx components not being found, etc. It DOES run over IB, but it never says that in ...

[OMPI users] Using OPENMPI configured for MX, GM and OPENIB interconnects

2009-08-26 Thread twurgl
I configure OpenMPI (1.3.3, and previous versions as well) so that a single executable is able to run on any cluster we have. I used: ./configure --with-mx --with-openib --with-gm. At the end of the day, the same executable does run on any of the clusters. The question I have is: when, for ...

Re: [OMPI users] mca_pml_ob1_send blocks

2009-08-26 Thread Jeff Squyres
On Aug 26, 2009, at 10:38 AM, Jeff Squyres (jsquyres) wrote: Yes, this could cause blocking. Specifically, the receiver may not advance any other senders until the matching Irecv is posted and is able to make progress. I should clarify something else here -- for long messages where the pi ...

Re: [OMPI users] mca_pml_ob1_send blocks

2009-08-26 Thread Jeff Squyres
On Aug 25, 2009, at 6:51 PM, Shaun Jackman wrote: The receiver posts a single MPI_Irecv in advance, and as soon as it has received a message it posts a new MPI_Irecv. However, there are multiple processes sending to the receiver, and only one MPI_Irecv is posted. Yes, this could cause blocking ...
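
To make the pattern under discussion concrete, here is a hedged sketch contrasting the single re-posted MPI_Irecv described above with pre-posting one receive per sender; the rank layout (receiver is rank 0, senders are ranks 1..NSENDERS), buffer sizes, and tag are assumptions.

    #include <mpi.h>

    #define NSENDERS 4
    #define MSGLEN   1024

    /* Pattern described in the thread: one outstanding MPI_Irecv,
     * re-posted after each completion.  While that single receive is
     * occupied by one sender's message, the other senders have no
     * matching receive to land in. */
    static void recv_single(int nmsgs)
    {
        int buf[MSGLEN];
        for (int i = 0; i < nmsgs; i++) {
            MPI_Request req;
            MPI_Irecv(buf, MSGLEN, MPI_INT, MPI_ANY_SOURCE, 0,
                      MPI_COMM_WORLD, &req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
            /* ... process buf ... */
        }
    }

    /* Alternative: pre-post one receive per sender so any sender's
     * message has a matching receive already waiting for it. */
    static void recv_per_sender(int nmsgs)
    {
        int bufs[NSENDERS][MSGLEN];
        MPI_Request reqs[NSENDERS];

        for (int s = 0; s < NSENDERS; s++)
            MPI_Irecv(bufs[s], MSGLEN, MPI_INT, s + 1, 0,
                      MPI_COMM_WORLD, &reqs[s]);

        for (int i = 0; i < nmsgs; i++) {
            int idx;
            MPI_Waitany(NSENDERS, reqs, &idx, MPI_STATUS_IGNORE);
            /* ... process bufs[idx], then re-post that sender's receive ... */
            MPI_Irecv(bufs[idx], MSGLEN, MPI_INT, idx + 1, 0,
                      MPI_COMM_WORLD, &reqs[idx]);
        }
        /* At shutdown the receives still posted here would need to be
         * matched or cancelled. */
    }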