Re: [OMPI users] openmpi 1.4.1 and xgrid

2010-05-03 Thread Jeff Squyres
On Apr 30, 2010, at 7:12 PM, Ralph Castain wrote:
> I build it on Mac 10.6 every time we do an update to the 1.4 series, without problem. --without-xgrid or --with-xgrid=no should both work just fine (I use the latter myself).
Ditto. I just downloaded 1.4.1 and tried it on my 10.6 MBP and …
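For anyone following along, the flag goes on the configure line. A minimal sketch, assuming a per-user install prefix (the prefix is a placeholder, not taken from the thread):

    ./configure --without-xgrid --prefix=$HOME/openmpi-1.4.1
    make all install

Either --without-xgrid or --with-xgrid=no disables the XGrid launcher support; the rest of the build is unchanged.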

Re: [OMPI users] openmpi 1.4.1 and xgrid

2010-05-03 Thread Alan
Thanks a lot Jeff, you described exactly my problem (my mistake, indeed) and now things are working fine. Sorry for the much ado about nothing. Cheers, Alan

On Mon, May 3, 2010 at 14:57, Jeff Squyres wrote:
> On Apr 30, 2010, at 7:12 PM, Ralph Castain wrote:
> > I build it on Mac 10.6 every time we d…

Re: [OMPI users] Can compute, but can not output files

2010-05-03 Thread Jeff Squyres
On Apr 30, 2010, at 10:36 PM, JiangjunZheng wrote:
> I am using Rocks+openmpi+hdf5+pvfs2. The software on the Rocks+PVFS2 cluster will output HDF5 files after computing. However, when the output starts, it shows errors:
> [root@nanohv pvfs2]# ./hdf5_mpio DH-ey-001400.20.h5
> Testing simple C MPI …

[OMPI users] MPI_Comm_set_errhandler: error in Fortran90 Interface mpi.mod

2010-05-03 Thread Paul Kapinos
Hello OpenMPI / Sun/Oracle MPI folks, we believe that OpenMPI and SunMPI (Cluster Tools) have an error in the Fortran-90 (f90) bindings of the MPI_Comm_set_errhandler routine. Tested MPI versions: OpenMPI/1.3.3 and Cluster Tools 8.2.1. Consider the attached example. This file uses the "USE mpi" …

Re: [OMPI users] Can compute, but can not output files

2010-05-03 Thread Mohamad Chaarawi
One thing to check is that you specified the CFLAGS/LDFLAGS/LIBS for PVFS2 when you configured OMPI; that's what I do to get OMPI to work over PVFS2 on our cluster:
./configure CFLAGS=-I/path-to-pvfs2/include/ LDFLAGS=-L/path-to-pvfs2/lib/ LIBS="-lpvfs2 -lpthread" --with-wrapper-cflags=-I/path-…
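For context, a hedged sketch of what such an invocation usually looks like in full (the /opt/pvfs2 prefix and the wrapper-flag values are placeholders and assumptions, not recovered from the truncated line above; the wrapper flags simply mirror the build flags so that applications compiled with mpicc also pick up PVFS2):

    ./configure CFLAGS=-I/opt/pvfs2/include LDFLAGS=-L/opt/pvfs2/lib \
                LIBS="-lpvfs2 -lpthread" \
                --with-wrapper-cflags=-I/opt/pvfs2/include \
                --with-wrapper-ldflags=-L/opt/pvfs2/lib \
                --with-wrapper-libs="-lpvfs2 -lpthread"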

[OMPI users] MPIError:MPI_Recv: MPI_ERR_TRUNCATE:

2010-05-03 Thread Pooja Varshneya
Hi All, I have written a program where the MPI master sends and receives large amounts of data, i.e. from 1 KB to 1 MB per message. The amount of data to be sent with each call is different. The program runs well with 5 slaves, but when I try to run the same program with 9 slaves, it …
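Without seeing the code, the usual cause of MPI_ERR_TRUNCATE is a receive posted with a count smaller than the message that actually arrives. A minimal sketch of the pattern and the common fix; this is hypothetical illustration code, not Pooja's program, and the sizes are made up:

    /* Run with at least 2 processes, e.g. mpirun -np 2 ./a.out */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, count;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            char buf[1024 * 1024] = {0};                 /* up to 1 MB payload */
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* MPI_Recv(small_buf, 1024, ...) here would fail with
             * MPI_ERR_TRUNCATE because the incoming message is larger.
             * Probe first, then allocate a buffer of the right size.   */
            MPI_Probe(0, 0, MPI_COMM_WORLD, &status);
            MPI_Get_count(&status, MPI_CHAR, &count);
            char *buf = malloc(count);
            MPI_Recv(buf, count, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            free(buf);
        }

        MPI_Finalize();
        return 0;
    }

Probing first (or sending the payload size in a separate message) lets the receiver size its buffer before the data arrives, so variable-length messages never overrun a fixed receive count.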

Re: [OMPI users] MPI_Comm_set_errhandler: error in Fortran90 Interface mpi.mod

2010-05-03 Thread Jeff Squyres
Paul -- Most excellent; thanks for the diagnosis and the reproducer. You are absolutely correct that we have a bug in the F90 interface in MPI_COMM_SET_ERRHANDLER and MPI_WIN_SET_ERRHANDLER. The INTENT for the communicator parameter was mistakenly set to INOUT instead of just IN, meaning that …
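For reference, the C binding makes the input-only nature of the communicator explicit, since the communicator is passed by value; in Fortran, an INTENT(INOUT) dummy argument would even reject a constant such as MPI_COMM_WORLD at compile time, which is presumably how the mismatch surfaces. A minimal, hypothetical usage sketch (not Paul's reproducer):

    #include <mpi.h>
    #include <stdio.h>

    /* Hypothetical handler: print the error class and keep going. */
    static void warn_handler(MPI_Comm *comm, int *err, ...)
    {
        int eclass;
        (void)comm;                     /* handler does not modify the communicator */
        MPI_Error_class(*err, &eclass);
        fprintf(stderr, "MPI error class %d\n", eclass);
    }

    int main(int argc, char **argv)
    {
        MPI_Errhandler errh;

        MPI_Init(&argc, &argv);
        MPI_Comm_create_errhandler(warn_handler, &errh);

        /* The communicator is an input-only argument here; it is not
         * modified, which is why the F90 binding should use INTENT(IN). */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, errh);

        MPI_Errhandler_free(&errh);
        MPI_Finalize();
        return 0;
    }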