On Wed, Jul 23, 2008 at 02:24:03PM +0200, Gabriele Fatigati wrote:
> You could always effect your own parallel IO (e.g., use MPI sends and
> receives to coordinate parallel reads and writes), but why? It's already
> done in the MPI-IO implementation.
>
Just a moment: you're saying that I can
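For reference, the MPI-IO route needs no hand-rolled coordination at all. A
minimal sketch (the file name and the per-rank offset arithmetic are made up
for illustration) in which every rank writes its own block of a single shared
file:

    #include <mpi.h>
    #include <vector>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int count = 100;                 /* ints written per rank */
        std::vector<int> buf(count, rank);     /* fill with this rank's id */

        MPI_File fh;
        char fname[] = "shared_output.dat";    /* hypothetical file name */
        MPI_File_open(MPI_COMM_WORLD, fname,
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* Disjoint regions per rank; the MPI-IO layer coordinates the writes. */
        MPI_Offset offset = (MPI_Offset)rank * count * sizeof(int);
        MPI_File_write_at_all(fh, offset, &buf[0], count, MPI_INT,
                              MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }

MPI_File_write_at_all is collective, so the library (and a parallel filesystem
underneath, if you have one) can schedule the writes; no explicit sends or
receives are involved.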
On Wed, Jul 23, 2008 at 01:28:53PM +0100, Neil Storer wrote:
> Unless you have a parallel filesystem, such as GPFS, which is
> well-defined and does support file-locking, I would suggest writing to
> different files, or doing I/O via a single MPI task, or via MPI-IO.
I concur that NFS for a parallel
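If separate files per task are acceptable, that is the simplest of the three
options Neil listed; a small sketch (the naming scheme is just an example)
would be:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* One file per rank: no shared-file locking to worry about on NFS. */
        char fname[64];
        std::sprintf(fname, "output.%04d.dat", rank);
        std::FILE *fp = std::fopen(fname, "w");
        if (fp) {
            std::fprintf(fp, "rank %d reporting\n", rank);
            std::fclose(fp);
        }

        MPI_Finalize();
        return 0;
    }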
On Wed, Jul 23, 2008 at 09:47:56AM -0400, Robert Kubrick wrote:
> HDF5 supports parallel I/O through MPI-I/O. I've never used it, but I
> think the API is easier than direct MPI-I/O, maybe even easier than raw
> read/writes given its support for hierarchical objects and metadata.
In addition to t
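To make that concrete, here is a rough parallel-HDF5 sketch (it assumes an
HDF5 build with --enable-parallel and uses the 1.8-style H5Dcreate2 call; the
file and dataset names are invented). Each rank writes one row of a shared
dataset through the MPI-IO driver:

    #include <mpi.h>
    #include <hdf5.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Open one shared file collectively through the MPI-IO driver. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        hid_t file = H5Fcreate("results.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* Dataset layout: one row per rank, NCOLS columns. */
        const hsize_t NCOLS = 8;
        hsize_t dims[2] = {(hsize_t)size, NCOLS};
        hid_t filespace = H5Screate_simple(2, dims, NULL);
        hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_DOUBLE, filespace,
                                H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        /* Each rank selects its own row of the file dataspace. */
        hsize_t start[2] = {(hsize_t)rank, 0};
        hsize_t cnt[2]   = {1, NCOLS};
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, cnt, NULL);
        hid_t memspace = H5Screate_simple(2, cnt, NULL);

        /* Collective write. */
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
        double row[NCOLS];
        for (int i = 0; i < (int)NCOLS; ++i)
            row[i] = rank + 0.1 * i;           /* dummy data */
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, row);

        H5Pclose(dxpl); H5Sclose(memspace); H5Sclose(filespace);
        H5Dclose(dset); H5Pclose(fapl); H5Fclose(file);
        MPI_Finalize();
        return 0;
    }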
Greetings,
I'm seeing a segfault in a code on Ubuntu 8.04 with gcc 4.2. I
recompiled the Debian lenny openmpi 1.2.7~rc2 package on Ubuntu, and
compiled the Debian lenny petsc and libmesh packages against that.
Everything works just fine in Debian lenny (gcc 4.3), but in Ubuntu
hardy it fails dur
Hello Carlos,
Sorry for the long delay in replying.
You may want to take a look at the Boost.MPI project:
http://www.boost.org/doc/html/mpi.html
It has a higher-level interface to MPI that is much more C++ friendly.
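To give a flavour, here is a small sketch of the kind of code it lets you
write (the message contents are made up); note that std::string is serialized
for you:

    #include <boost/mpi.hpp>
    #include <iostream>
    #include <string>

    namespace mpi = boost::mpi;

    int main(int argc, char **argv) {
        mpi::environment env(argc, argv);   // wraps MPI_Init / MPI_Finalize
        mpi::communicator world;            // wraps MPI_COMM_WORLD

        if (world.rank() == 0) {
            std::string msg = "hello from rank 0";
            world.send(1, 0, msg);          // (dest, tag, value)
        } else if (world.rank() == 1) {
            std::string msg;
            world.recv(0, 0, msg);          // (source, tag, value)
            std::cout << "rank 1 received: " << msg << std::endl;
        }
        return 0;
    }

You still compile with your MPI C++ wrapper and link against boost_mpi and
boost_serialization.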
On Sat, Jul 12, 2008 at 3:30 PM, Carlos Henrique da Silva Santos
wrote:
> Dear,
Thanks! I will file a bug with PGI.
Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
On Jul 23, 2008, at 4:50 PM, Brian Dobbins wrote:
Hi Brock,
Just to add my two cents now, I finally got around to building
WRF with PGI 7.2 as well. I not