Hi Steven, Dmitry,

Not sure if this web page is still valid or totally out of date,
but here it goes anyway, in the hope that it may help:

http://www.mcs.anl.gov/research/projects/mpi/mpich1-old/docs/install/node38.htm

On the other hand, one expert seems to dismiss NFS
for parallel I/O:

http://www.open-mpi.org/community/lists/users/2008/07/6125.php

I must say that this has been a gray area for me too.
It would be nice if the MPI documentation - or the documentation
of the various MPI implementations - told us more clearly which
types of underlying file system support MPI parallel I/O:
local disks (ext3/ext4, XFS, etc.), NFS mounts, and
the various parallel file systems (PVFS/OrangeFS, Lustre,
GlusterFS, etc.).
It would also help if it provided some setup information, plus
functionality and performance comparisons.
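
For NFS in particular, the old MPICH install notes (the first link
above) boil down to two requirements, if I recall correctly: the NFS
setup must support fcntl file locking (lockd/statd running on client
and server), and the filesystem should be mounted with attribute
caching disabled so parallel writers see consistent metadata.
Something along these lines (server name and paths are illustrative):

```shell
# Illustrative NFS mount for MPI-IO (ROMIO over NFS):
# "noac" disables attribute caching so concurrent writers from
# different nodes see each other's updates; fcntl locking must work.
mount -t nfs -o rw,noac fileserver:/export/scratch /mnt/scratch
```

Note that noac can hurt ordinary serial I/O performance on the same
mount, so people often use a separate mount point just for MPI-IO.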

My two cents,
Gus Correa


On 11/07/2013 12:21 PM, Dmitry N. Mikushin wrote:
Not sure if this is related, but:

I've seen a case of performance degradation on NFS and Lustre when
writing NetCDF files. The cause was a loop that filled the file by
writing one 4-byte record at a time. Performance became close to
that of a local hard drive when I simply buffered the records and
wrote one full row at a time.

- D.


2013/11/7 Steven G Johnson <stev...@mit.edu>:
The simple C program attached below hangs on MPI_File_write when I am using an 
NFS-mounted filesystem.   Is MPI-IO supported in OpenMPI for NFS filesystems?

I'm using OpenMPI 1.4.5 on Debian stable (wheezy), 64-bit Opteron CPU, Linux 
3.2.51.   I was surprised by this because the problems only started occurring 
recently when I upgraded my Debian system to wheezy; with OpenMPI in the 
previous Debian release, output to NFS-mounted filesystems worked fine.

Is there any easy way to get this working?  Any tips are appreciated.

Regards,
Steven G. Johnson

-----------------------------------------------------------------------------------
#include <stdio.h>
#include <string.h>
#include <mpi.h>

void perr(const char *label, int err)
{
     char s[MPI_MAX_ERROR_STRING];
     int len;
     MPI_Error_string(err, s, &len);
     printf("%s: %d = %s\n", label, err, s);
}

int main(int argc, char **argv)
{
     MPI_Init(&argc, &argv);

     MPI_File fh;
     int err;
     err = MPI_File_open(MPI_COMM_WORLD, "tstmpiio.dat",
                         MPI_MODE_CREATE | MPI_MODE_WRONLY,
                         MPI_INFO_NULL, &fh);
     perr("open", err);

     const char s[] = "Hello world!\n";
     MPI_Status status;
     err = MPI_File_write(fh, (void*) s, strlen(s), MPI_CHAR, &status);
     perr("write", err);

     err = MPI_File_close(&fh);
     perr("close", err);

     MPI_Finalize();
     return 0;
}
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users