HDF5 supports parallel I/O through MPI-IO. I've never used it, but I believe its API is easier than direct MPI-IO, and perhaps even easier than raw reads and writes, given its support for hierarchical objects and metadata.

HDF5 supports multiple storage models, including MPI-IO.
HDF5 has an open interface for accessing raw storage. This enables HDF5 files to be written to a variety of media, including sequential files, families of files, memory, and Unix sockets (i.e., across a network). New "Virtual File" drivers can be added to support new storage-access mechanisms. HDF5 also supports MPI-IO through Parallel HDF5. When building HDF5, parallel support is included by configuring with the --enable-parallel option. A tutorial for Parallel HDF5 is included with the HDF5 Tutorial at:
  /HDF5/Tutor/
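
For illustration, here is a minimal sketch of opening an HDF5 file for parallel access through the MPI-IO virtual file driver. It assumes an --enable-parallel build of HDF5; the filename and the choice of communicator are just placeholders:

/* Minimal sketch: create an HDF5 file for parallel access via MPI-IO.
 * Assumes HDF5 was configured with --enable-parallel. */
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* File-access property list that routes I/O through MPI-IO */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

    /* All ranks open the same file collectively */
    hid_t file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* ... create datasets and write collectively here ... */

    H5Fclose(file);
    H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}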

On Jul 23, 2008, at 8:28 AM, Neil Storer wrote:

Jeff,

In general, NFS servers run a file-locking daemon that should enable
clients to lock files.

However, in Unix there are two flavours of file locking: flock() from
BSD and lockf() from System V. Which of these mechanisms works with NFS
varies from system to system. On Solaris, lockf() works with NFS, and
flock() is implemented via lockf(). On other systems, the results are
less consistent. For example, on some systems lockf() is not
implemented at all and flock() does not support NFS, while on other
systems lockf() supports NFS but flock() does not.
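
For reference, the two calls look like this in C (a sketch only; as noted above, whether either actually locks across NFS depends on the system):

/* The two Unix file-locking flavours discussed above. */
#include <sys/file.h>   /* flock() - BSD */
#include <unistd.h>     /* lockf() - System V */

int lock_with_flock(int fd)
{
    /* Advisory exclusive lock on the whole file; blocks until granted */
    return flock(fd, LOCK_EX);
}

int lock_with_lockf(int fd)
{
    /* Locks from the current file offset to EOF (len == 0 means "to EOF") */
    return lockf(fd, F_LOCK, 0);
}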

Unless you have a parallel filesystem, such as GPFS, which is
well-defined and does support file-locking, I would suggest writing to
different files, or doing I/O via a single MPI task, or via MPI-IO.
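
As a sketch of the MPI-IO route (the filename and block size are illustrative), each rank can write its own non-overlapping region of a shared file with a collective call, with no file locking involved:

/* MPI-IO sketch: every rank writes one non-overlapping block. */
#include <mpi.h>
#include <string.h>

#define BLOCK 1024

int main(int argc, char **argv)
{
    int rank;
    char buf[BLOCK];
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    memset(buf, 'a' + (rank % 26), BLOCK);

    MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Collective write: each rank targets its own offset, no locking needed */
    MPI_File_write_at_all(fh, (MPI_Offset)rank * BLOCK, buf, BLOCK,
                          MPI_CHAR, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

Run with, e.g., mpirun -np 4 ./a.out and you should get a 4 KB file containing one block per rank.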

Regards
       Neil

Jeff Squyres wrote:
On Jul 23, 2008, at 6:35 AM, Gabriele Fatigati wrote:

There is a whole chapter in the MPI standard about file I/O
operations. I'm quite confident you will find whatever you're looking
for there :)

Hi George, I know this chapter :) But I'm using MPI-1, not MPI-2. I
would like to know methods for I/O with MPI-1.

Open MPI builds ROMIO by default; there's no real distinction between
MPI-1 features and MPI-2 features in the Open MPI code base.

You could always effect your own parallel I/O (e.g., use MPI sends and
receives to coordinate parallel reads and writes), but why?  It's
already done in the MPI-IO implementation.
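
For what it's worth, a hand-rolled version of that idea might funnel everything through rank 0, something like the following sketch (the filename is illustrative, and error checking is omitted; the MPI-IO implementation handles this sort of thing far more efficiently):

/* Hand-rolled "parallel" I/O: all ranks send their data to rank 0,
 * which does the actual writing. */
#include <mpi.h>
#include <stdio.h>

#define LEN 64

int main(int argc, char **argv)
{
    int rank, size;
    char buf[LEN];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    snprintf(buf, LEN, "data from rank %d\n", rank);

    if (rank == 0) {
        FILE *fp = fopen("output.txt", "w");
        fputs(buf, fp);                      /* rank 0's own data */
        for (int src = 1; src < size; ++src) {
            MPI_Recv(buf, LEN, MPI_CHAR, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            fputs(buf, fp);                  /* everyone else's, in rank order */
        }
        fclose(fp);
    } else {
        MPI_Send(buf, LEN, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}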

FWIW: I do not believe that flock() is guaranteed to be safe across
network filesystems such as NFS.


--
Neil Storer | Head: Systems S/W Section | Operations Dept.
ECMWF, Shinfield Park, Reading, Berkshire, RG2 9AX, UK
email: neil.sto...@ecmwf.int | URL: http://www.ecmwf.int/
Tel: (+44 118) 9499353 / (+44 118) 9499000 x 2353 | Fax: (+44 118) 9869450
ECMWF is the European Centre for Medium-Range Weather Forecasts

