Hello everyone,
Open MPI crashes when doing parallel HDF5 I/O on both NFS and Panasas file systems:
On NFS, we are getting:
ADIOI_Set_lock:: No locks available
ADIOI_Set_lock:offset 69744, length 256
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 124
File locking failed in ADIOI_Set_lock(fd 25,
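For reference, here is a minimal sketch (not taken from the report above) of how such a file is typically opened through HDF5's MPI-IO driver, and of how ROMIO hints can be passed via the file-access property list. Disabling data sieving is one commonly suggested workaround when fcntl locks are unavailable on NFS; the file name and hint values below are illustrative assumptions, not the reporter's actual code:

/* Sketch only: parallel HDF5 file creation over MPI-IO with ROMIO hints.
 * The "romio_ds_*" hints and file name are illustrative, not from the report. */
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "romio_ds_read",  "disable");  /* avoid data-sieving reads  */
    MPI_Info_set(info, "romio_ds_write", "disable");  /* avoid data-sieving writes */

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);     /* use the MPI-IO file driver */

    hid_t file = H5Fcreate("test.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
    /* ... collective dataset creation and writes would go here ... */
    H5Fclose(file);

    H5Pclose(fapl);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}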
On Apr 25, 2013, at 9:50 AM, W Spector wrote:
> I just downloaded 1.7.1. The new files in the use-mpi-f08 directory look great!
>
> However, the use-mpi-tkr and use-mpi-ignore-tkr directories don't fare so
> well. Literally all the interfaces still use 'ierr'.
Oy. I probably should have realized that
On Apr 25, 2013, at 10:52 PM, W Spector wrote:
> I tried building 1.7.1 on my Ubuntu system. The default gfortran is v4.6.3,
> so configure won't enable the mpi_f08 module build. I also tried a
> three-week-old snapshot of the gfortran 4.9 trunk. This has Tobias's new TYPE(*)
> in it, but n
Hi Jeff,
To take care of the ierr->ierror conversion, simply do the following:
cd openmpi-1.7.1/ompi/mpi/fortran/use-mpi-tkr/scripts
ls -1 *.sh | xargs -i -t ex -c ":1,\$s?ierr?ierror?" -c ":wq" {}
Then go up a level to openmpi-1.7.1/ompi/mpi/fortran/use-mpi-tkr and use:
cd ..
ls -1 for
I committed that part; thanks.
On Apr 26, 2013, at 5:51 PM, W Spector wrote:
> Hi Jeff,
>
> To take care of the ierr->ierror conversion, simply do the following:
>
> cd openmpi-1.7.1/ompi/mpi/fortran/use-mpi-tkr/scripts
> ls -1 *.sh | xargs -i -t ex -c ":1,\$s?ierr?ierror?" -c ":wq" {}
>
>
Hi,
I have encountered really bad performance when all the nodes send data
to all the other nodes. I use Isend and Irecv with multiple
outstanding sends per node. I debugged the behavior and came to the
following conclusion: it seems that one sender locks out all other
senders for a given receiver. Th
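For reference, a minimal sketch of the communication pattern being described (the message size is an illustrative assumption, and only one outstanding send per peer is shown rather than the multiple outstanding sends mentioned above):

/* Sketch only: every rank posts nonblocking receives from all other ranks,
 * then nonblocking sends to all of them, and waits for completion. */
#include <mpi.h>
#include <stdlib.h>

#define MSG_LEN 65536   /* doubles per peer; illustrative assumption */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *sendbuf = calloc((size_t)size * MSG_LEN, sizeof(double));
    double *recvbuf = malloc((size_t)size * MSG_LEN * sizeof(double));
    MPI_Request *reqs = malloc(2 * (size_t)size * sizeof(MPI_Request));
    int nreq = 0;

    /* Post all receives first so incoming messages have a matching buffer. */
    for (int peer = 0; peer < size; peer++) {
        if (peer == rank) continue;
        MPI_Irecv(recvbuf + (size_t)peer * MSG_LEN, MSG_LEN, MPI_DOUBLE,
                  peer, 0, MPI_COMM_WORLD, &reqs[nreq++]);
    }
    /* Then post the sends to every other rank. */
    for (int peer = 0; peer < size; peer++) {
        if (peer == rank) continue;
        MPI_Isend(sendbuf + (size_t)peer * MSG_LEN, MSG_LEN, MPI_DOUBLE,
                  peer, 0, MPI_COMM_WORLD, &reqs[nreq++]);
    }
    MPI_Waitall(nreq, reqs, MPI_STATUSES_IGNORE);

    free(sendbuf); free(recvbuf); free(reqs);
    MPI_Finalize();
    return 0;
}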