We have a large Fortran application designed to do its IO with either
MPI-IO or Fortran direct access.  On a Linux workstation (16 AMD cores)
running Open MPI 1.5.3 and Intel Fortran 12.0 we are seeing random
failures with the MPI-IO option that do not occur with conventional
Fortran direct access.  We are using ext3 file systems, and I have seen
some references hinting at similar problems with the ext3/MPI-IO
combination.  The application with the MPI-IO option runs flawlessly on
Cray architectures with Lustre file systems, so we are also suspicious
of the ext3/MPI-IO combination.  Does anyone else have experience with
this combination who could shed some light on the problem and,
hopefully, suggest some solutions?

T. Rosmond
