I have been a user of MPI-IO for 4+ years and have a code that has run correctly with MPICH, MPICH2, and OpenMPI 1.2.*.

I recently upgraded to OpenMPI 1.3.1 and immediately noticed that my MPI-IO generated output files are corrupted. I have not yet had a chance to debug this in detail, but it appears that MPI_File_write_all() calls are not placing data correctly according to their file view when running with more than 1 processor (everything is okay with -np 1).

Note that I have observed the same incorrect behavior on both Linux and OS-X. I have also gone back and confirmed that the same code works with MPICH, MPICH2, and OpenMPI 1.2.*, so I'm fairly confident that something has changed or broken as of OpenMPI 1.3.*. Just today, I checked out the SVN repository version of OpenMPI, built it, and tested my code with it; the results are incorrect, just as with the 1.3.1 tarball.

While I plan to continue to debug this and will try to put together a small test that demonstrates the issue, I thought I would first send out this message to see whether it triggers a thought within the OpenMPI development team as to where the problem might lie.
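For reference, the write pattern in question is essentially the standard set_view / write_all idiom. The sketch below is not my actual code (the file name, element counts, and datatype are made up for illustration), but it shows the kind of collective write I am doing: each rank sets a view at a rank-dependent displacement and then calls MPI_File_write_all.

/* Simplified sketch, not the actual application code:
 * each rank writes a contiguous block of doubles at a
 * rank-dependent byte offset through a file view, using
 * the collective MPI_File_write_all. */
#include <mpi.h>

#define N 100   /* elements per rank, arbitrary for this sketch */

int main(int argc, char **argv)
{
    int rank, i;
    double buf[N];
    MPI_File fh;
    MPI_Offset disp;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Fill the buffer with values that are easy to verify on read-back. */
    for (i = 0; i < N; i++)
        buf[i] = (double)(rank * N + i);

    MPI_File_open(MPI_COMM_WORLD, "testfile.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Each rank's view begins at its own byte displacement. */
    disp = (MPI_Offset)rank * N * (MPI_Offset)sizeof(double);
    MPI_File_set_view(fh, disp, MPI_DOUBLE, MPI_DOUBLE,
                      "native", MPI_INFO_NULL);

    MPI_File_write_all(fh, buf, N, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

With -np 1 the output is as expected; with more than one rank the data does not end up where the file views say it should.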

Please let me know if you have any ideas as I would very much appreciate it!

Thanks in advance,

Scott
--
Scott Collis
sscol...@me.com