Dear all! 

Using the split collective procedures MPI_File_write_all_begin() and
MPI_File_write_all_end() has caused me some confusion.
My intention was to implement asynchronous file I/O with these
procedures: the idea is to do some 'useful' computation while tons of
data are being written to disk.
Well, buffering the local arrays and replacing MPI_File_write_all()
with the MPI_File_write_all_begin()/MPI_File_write_all_end() pair
wasn't that hard. However, my expectations were not met:
 * No additional thread is spawned while writing.
 * No runtime benefit can be observed; the program simply waits as it
did before.
The only difference is that the time is now spent in
MPI_File_write_all_end() instead of MPI_File_write_all(). It appears to
me that MPI_File_write_all_begin() does not trigger the actual write at
all.
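
For reference, this is roughly the pattern I ended up with (a minimal
sketch; fh, buf, n, and do_useful_work() stand in for my actual file
handle, arrays, and computation):

    use mpi
    integer :: fh, n, ierr
    integer :: status(MPI_STATUS_SIZE)
    double precision, allocatable :: buf(:)

    ! before:
    !   call MPI_File_write_all(fh, buf, n, MPI_DOUBLE_PRECISION, status, ierr)

    ! after: start the collective write, compute, then complete it
    call MPI_File_write_all_begin(fh, buf, n, MPI_DOUBLE_PRECISION, ierr)
    call do_useful_work()   ! the overlap I was hoping for
    call MPI_File_write_all_end(fh, buf, status, ierr)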

What I have is an iterative workflow in which data shall be written to
disk every 20th increment. What I want to achieve is writing to disk in
the background while the next 20 iterations are processed. I am coding
a mixture of Fortran 90, F03, and F08, building with gfortran 4.7.1 and
Open MPI 1.6.
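
To make the intent concrete, the loop I am aiming for looks roughly
like this (a sketch only; nsteps, state, outbuf, and
compute_increment() are placeholders, and the copy into outbuf is there
so the buffer handed to MPI is never touched while a write is in
flight):

    logical :: write_pending = .false.
    ! ...
    do step = 1, nsteps
       call compute_increment(state)          ! the 'useful' work
       if (mod(step, 20) == 0) then
          if (write_pending) then
             ! finish the previous write before reusing the buffer
             call MPI_File_write_all_end(fh, outbuf, status, ierr)
          end if
          outbuf = state                      ! snapshot into the I/O buffer
          call MPI_File_write_all_begin(fh, outbuf, size(outbuf), &
                                        MPI_DOUBLE_PRECISION, ierr)
          write_pending = .true.
       end if
    end do
    if (write_pending) call MPI_File_write_all_end(fh, outbuf, status, ierr)

Only one split collective may be active per file handle at a time,
which is why the previous write is completed before the next one is
started.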
The environment in question is a quad-socket system equipped with Xeon
E7-4860 CPUs, running Debian Squeeze. Unfortunately, I don't know much
about its I/O capabilities, but it is nothing fancy.

Do you have any idea what I have done wrong? 

Thanks in advance! Cheers, 
Stefan 


