Paul,
I tested your code in master and v1.10 (on my local machine), and for both
versions of ompio I get exactly the same (correct) output that you had with
romio. However, I also noticed that in the ompio version that is in the
v1.10 branch, the MPI_File_get_size function is not implemented on Lustre.
Did you by any chance run your test on a Lustre file system?
Thanks
Edgar
On 12/9/2015 8:06 AM, Edgar Gabriel wrote:
I will look at your test case and see what is going on in ompio. That being
said, the vast majority of the fixes and improvements that went into ompio
over the last two years were not back-ported to the 1.8 (and thus 1.10)
series, since that would have required changes to the interfaces of the
frameworks involved (and thus would have violated one of the rules of the
Open MPI release series). Anyway, if there is a simple fix for your test
case in the 1.10 series, I am happy to provide a patch. It might take me a
day or two, however.
Edgar
On 12/9/2015 6:24 AM, Paul Kapinos wrote:
Sorry, forgot to mention: 1.10.1
Open MPI: 1.10.1
Open MPI repo revision: v1.10.0-178-gb80f802
Open MPI release date: Nov 03, 2015
Open RTE: 1.10.1
Open RTE repo revision: v1.10.0-178-gb80f802
Open RTE release date: Nov 03, 2015
OPAL: 1.10.1
OPAL repo revision: v1.10.0-178-gb80f802
OPAL release date: Nov 03, 2015
MPI API: 3.0.0
Ident string: 1.10.1
On 12/09/15 11:26, Gilles Gouaillardet wrote:
Paul,
which Open MPI version are you using?
Thanks for providing a simple reproducer; that will make things much easier
from now on.
(And at first glance, this might not be a very tricky bug.)
Cheers,
Gilles
On Wednesday, December 9, 2015, Paul Kapinos <kapi...@itc.rwth-aachen.de> wrote:
Dear Open MPI developers,
has OMPIO (1) reached a 'usable-stable' state?
As we reported in (2), we had some trouble building Open MPI with ROMIO, a
fact that was hidden by the OMPIO implementation stepping into the MPI_IO
breach. That ROMIO was not available was only detected after users
complained that MPI_IO did not work as expected with version XYZ of
Open MPI, and after further investigation.
Take a look at the attached example. It delivers different results with
ROMIO and OMPIO, even with a single MPI rank on a local hard disk, cf. (3).
We have seen more examples of divergent behaviour, but this one is quite
handy.
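(The attachment itself is not reproduced here; the following is a minimal
sketch of what such a reproducer looks like, reconstructed from the output
in (3). The buffer contents and the exact write call are assumptions.)

    program append_test
      use mpi
      implicit none
      integer :: fh, ierr
      integer :: wstatus(MPI_STATUS_SIZE)
      integer(kind=MPI_OFFSET_KIND) :: fileOffset, fileSize
      character(len=16) :: buf = 'hello appended!!'

      call MPI_Init(ierr)
      ! Open the pre-filled file for writing in append mode; per the MPI
      ! standard, the initial position must then be the end of the file.
      call MPI_File_open(MPI_COMM_WORLD, 'out.txt', &
                         MPI_MODE_WRONLY + MPI_MODE_APPEND, MPI_INFO_NULL, &
                         fh, ierr)
      call MPI_File_get_position(fh, fileOffset, ierr)
      call MPI_File_get_size(fh, fileSize, ierr)
      print *, 'fileOffset, fileSize', fileOffset, fileSize
      ! Append 16 bytes; both offset and size should grow by 16.
      call MPI_File_write(fh, buf, len(buf), MPI_CHARACTER, wstatus, ierr)
      call MPI_File_get_position(fh, fileOffset, ierr)
      call MPI_File_get_size(fh, fileSize, ierr)
      print *, 'fileOffset, fileSize', fileOffset, fileSize
      print *, 'ierr', ierr
      print *, 'MPI_MODE_WRONLY, MPI_MODE_APPEND', MPI_MODE_WRONLY, &
               MPI_MODE_APPEND
      call MPI_File_close(fh, ierr)
      call MPI_Finalize(ierr)
    end program append_test

In the transcript in (3), ROMIO's first line shows offset == size == 10
(append semantics honoured), while with OMPIO the offset stays at 0 and the
write overwrites the existing data.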
Is that a bug in OMPIO or did we miss something?
Best
Paul Kapinos
1) http://www.open-mpi.org/faq/?category=ompio
2) http://www.open-mpi.org/community/lists/devel/2015/12/18405.php
3) (ROMIO is default; on local hard drive at node 'cluster')
$ ompi_info | grep romio
MCA io: romio (MCA v2.0.0, API v2.0.0, Component v1.10.1)
$ ompi_info | grep ompio
MCA io: ompio (MCA v2.0.0, API v2.0.0, Component v1.10.1)
$ mpif90 main.f90
$ echo hello1234 > out.txt; $MPIEXEC -np 1 -H cluster ./a.out;
fileOffset, fileSize 10 10
fileOffset, fileSize 26 26
ierr 0
MPI_MODE_WRONLY, MPI_MODE_APPEND 4 128
$ export OMPI_MCA_io=ompio
$ echo hello1234 > out.txt; $MPIEXEC -np 1 -H cluster ./a.out;
fileOffset, fileSize 0 10
fileOffset, fileSize 0 16
ierr 0
MPI_MODE_WRONLY, MPI_MODE_APPEND 4 128
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, IT Center
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +49 241/80-24915
--
Edgar Gabriel
Associate Professor
Parallel Software Technologies Lab http://pstl.cs.uh.edu
Department of Computer Science University of Houston
Philip G. Hoffman Hall, Room 524 Houston, TX-77204, USA
Tel: +1 (713) 743-3857 Fax: +1 (713) 743-3335