On 3/16/2016 7:06 AM, Éric Chamberland wrote:
On 16-03-14 15:07, Rob Latham wrote:
On MPICH's discussion list the point was made that libraries like HDF5
and (Parallel-)NetCDF provide not only the sort of platform
portability Eric desires but also a self-describing file format.

==rob

But I do not agree with that.

If MPI can provide me a simple solution like user datarep, why in the
world would I bind my code to another library?
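(For reference, a minimal sketch of the user-datarep interface the MPI standard defines: MPI_Register_datarep plus the datarep name passed to MPI_File_set_view. The conversion callbacks below are placeholder stand-ins that only copy bytes for contiguous predefined types; whether a given MPI-IO implementation actually honors a user-defined datarep is of course the open question in this thread.)

#include <mpi.h>
#include <string.h>

/* Read path: convert 'count' items from file representation (filebuf)
 * to native representation (userbuf).  Placeholder: just copies bytes,
 * assuming a contiguous layout of a predefined datatype. */
static int read_conv(void *userbuf, MPI_Datatype datatype, int count,
                     void *filebuf, MPI_Offset position, void *extra_state)
{
    int size;
    MPI_Type_size(datatype, &size);
    memcpy((char *)userbuf + position * size, filebuf, (size_t)count * size);
    return MPI_SUCCESS;
}

/* Write path: convert from native to file representation (same caveat). */
static int write_conv(void *userbuf, MPI_Datatype datatype, int count,
                      void *filebuf, MPI_Offset position, void *extra_state)
{
    int size;
    MPI_Type_size(datatype, &size);
    memcpy(filebuf, (char *)userbuf + position * size, (size_t)count * size);
    return MPI_SUCCESS;
}

/* Tell the library how large each datatype is in the file representation. */
static int file_extent(MPI_Datatype datatype, MPI_Aint *extent, void *extra_state)
{
    int size;
    MPI_Type_size(datatype, &size);
    *extent = size;
    return MPI_SUCCESS;
}

int main(int argc, char **argv)
{
    MPI_File fh;
    double data[8] = {0};

    MPI_Init(&argc, &argv);

    /* Register the representation under a user-chosen name;
     * implementations without user-datarep support return an error here. */
    MPI_Register_datarep("my_portable_rep", read_conv, write_conv,
                         file_extent, NULL);

    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Select the representation in the file view; the callbacks are then
     * applied transparently by every subsequent read/write. */
    MPI_File_set_view(fh, 0, MPI_DOUBLE, MPI_DOUBLE, "my_portable_rep",
                      MPI_INFO_NULL);
    MPI_File_write_all(fh, data, 8, MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}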

Instead of re-coding all the I/O in my code, I would prefer to contribute
to the MPI I/O implementations out there...  :)

So, the never-answered question: how big is that task?

Just speaking for OMPIO: there is a simple solution, which would perform the necessary conversion of the user buffer as a first step. This would be fairly straightforward to implement, but it would require a temporary buffer roughly the same size as (or larger than, depending on the format) your input buffer, which would be a problem for many application scenarios.
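(Roughly, that first-step approach would look something like the following; this is an illustration rather than OMPIO code. The user buffer is converted into a portable representation in a temporary buffer via MPI_Pack_external, and only then handed to the write path. The extra buffer of at least the input size is visible immediately.)

#include <mpi.h>
#include <stdlib.h>

int write_portable(MPI_File fh, const double *buf, int count)
{
    MPI_Aint packed_size, position = 0;
    void *tmp;
    int rc;

    /* Size of the converted ("external32") representation,
     * at least as large as the native buffer here. */
    MPI_Pack_external_size("external32", count, MPI_DOUBLE, &packed_size);
    tmp = malloc(packed_size);
    if (tmp == NULL)
        return MPI_ERR_NO_MEM;

    /* Step 1: convert the user buffer into the temporary buffer. */
    MPI_Pack_external("external32", buf, count, MPI_DOUBLE,
                      tmp, packed_size, &position);

    /* Step 2: write the converted data; from here on the library only
     * sees a byte sequence, which is the point about the later stages. */
    rc = MPI_File_write(fh, tmp, (int)position, MPI_BYTE,
                        MPI_STATUS_IGNORE);
    free(tmp);
    return rc;
}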

The problem with performing the conversion at a later step is that all buffers are treated as byte sequences internally, so the notion of datatypes is lost at some point. This matters especially for collective I/O, since the aggregation step can, in extreme situations, even split a datatype across different write cycles (or different aggregators) internally.

That being said, I admit that I haven't spent too much time thinking about solutions to this problem. If there is interest, I would be happy to work on it - and happy to accept help :-)

Edgar

Also, having looked at HDF5 back in 2012, I can state that there were no
functions that used collective MPI I/O for *randomly distributed*
data...  Collective I/O was available only for "structured" data. So I
coded it all directly with native MPI calls... and it works like a charm!
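(As an illustration of that approach - a simplified sketch, not Eric's actual code: each rank describes the arbitrary set of global element indices it owns with an indexed filetype, sets its file view, and all ranks then issue a collective write so the library can aggregate the scattered accesses.)

#include <mpi.h>

void write_scattered(MPI_File fh, const double *values,
                     const int *my_indices, int n)
{
    MPI_Datatype filetype;

    /* File layout: one MPI_DOUBLE at each global index this rank owns. */
    MPI_Type_create_indexed_block(n, 1, my_indices, MPI_DOUBLE, &filetype);
    MPI_Type_commit(&filetype);

    /* Each rank sets a view over the same file with its own filetype... */
    MPI_File_set_view(fh, 0, MPI_DOUBLE, filetype, "native", MPI_INFO_NULL);

    /* ...and the collective write lets the library turn the scattered
     * accesses into a few large contiguous requests. */
    MPI_File_write_all(fh, values, n, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_Type_free(&filetype);
}

(The datarep string passed to MPI_File_set_view - "native" here - is exactly where the portability question of this thread comes in.)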

Thanks,

Eric

