I am unable to use PVFS2 with OpenMPI in a simple test program. My
configuration is given below. I'm running on RHEL5 with GigE (probably not
important).

OpenMPI 1.4 (I had the same issue with 1.3.3) is configured with
./configure --prefix=/work/rd/evan/archives/openmpi/openmpi/1.4/enable_pvfs \
--enable-mpi-threads --with-io-romio-flags="--with-filesystems=pvfs2+ufs+nfs"
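
To sanity-check this build I can at least confirm that the ROMIO component was
built; as far as I know ompi_info does not list which ROMIO filesystems were
compiled in, so this only verifies ROMIO itself (it should print something
like "MCA io: romio ..."):

/work/rd/evan/archives/openmpi/openmpi/1.4/enable_pvfs/bin/ompi_info | grep -i romio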

PVFS 2.8.1 is configured to install in the default location (/usr/local) with
./configure --with-mpi=/work/rd/evan/archives/openmpi/openmpi/1.4/enable_pvfs

I build and install these (in this order) and set up my PVFS2 space using the
instructions at pvfs.org. I am able to use this space with commands such as
/usr/local/bin/pvfs2-ls. I am running a simple 2-server config (2 data
servers, with the same 2 hosts also acting as metadata servers). Used manually
like this, everything seems fine (even when I'm not root). It may be relevant
that I am *not* using the kernel interface for PVFS2, as I am just trying to
get a better understanding of how this works.
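
For concreteness, the sorts of manual checks that succeed look like this (the
mount point /mnt/pvfs2 and the file names are placeholders for my actual
paths):

/usr/local/bin/pvfs2-ping -m /mnt/pvfs2
/usr/local/bin/pvfs2-cp /etc/hosts /mnt/pvfs2/hosts.copy
/usr/local/bin/pvfs2-ls /mnt/pvfs2
/usr/local/bin/pvfs2-rm /mnt/pvfs2/hosts.copy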

It is perhaps relevant that I have not had to explicitly tell OpenMPI where I
installed PVFS (I have told PVFS where I installed OpenMPI, though). This
seems slightly odd, but there does not appear to be a way of passing that
information to OpenMPI. Perhaps it is not needed.
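
Since I am not using the kernel interface, my understanding (which may be
wrong) is that the PVFS2 client code inside ROMIO resolves paths at run time
through a pvfs2tab entry rather than through any build-time install path. For
reference, an entry of the sort the quickstart has me create looks like this
(the hostname, port, and fs name here are placeholders):

tcp://server1:3334/pvfs2-fs /mnt/pvfs2 pvfs2 defaults,noauto 0 0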

In any event, I then build my test program against this OpenMPI, and in that
program I have the following call sequence (i is 0, and mntPoint is the path
to my pvfs2 mount point -- I also tried prefixing "pvfs2:" to it, as I read
somewhere that that prefix is optional).

         /* build a file name under the PVFS2 mount point */
         sprintf(aname, "%s/%d.fdm", mntPoint, i);
         for(int j = 0; j < numFloats; j++) buf[j] = (float)i;

         int retval = MPI_SUCCESS;
         if(MPI_SUCCESS == (retval = MPI_File_open(MPI_COMM_SELF, aname,
                            MPI_MODE_RDWR|MPI_MODE_CREATE|MPI_MODE_UNIQUE_OPEN,
                            MPI_INFO_NULL, &fh)))
         {
             /* write numFloats floats from this rank, then close */
             MPI_File_write(fh, (void*)buf, numFloats, MPI_FLOAT,
                            MPI_STATUS_IGNORE);
             MPI_File_close(&fh);
         } else {
             /* turn the error code into a readable message */
             int errBufferLen;
             char errBuffer[MPI_MAX_ERROR_STRING];
             MPI_Error_string(retval, errBuffer, &errBufferLen);
             fprintf(stdout, "%d: open error on %s with code %s\n",
                     rank, aname, errBuffer);
         }
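
(One refinement I can sketch for the else branch: also report the MPI error
class, which should distinguish MPI_ERR_ACCESS from, say, MPI_ERR_NO_SUCH_FILE
or MPI_ERR_IO; errClass is just a local I would add:)

             /* map the error code to its class for a coarser diagnosis */
             int errClass;
             MPI_Error_class(retval, &errClass);
             fprintf(stdout, "%d: error class %d (MPI_ERR_ACCESS is %d)\n",
                     rank, errClass, MPI_ERR_ACCESS);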

This code executes on only one of my ranks (the way I'm running it). No matter
what I try, the MPI_File_open call fails with an MPI_ERR_ACCESS error code.
That suggests a permission problem, but I am able to manually cp and rm files
in the pvfs2 space without problem, so I am not at all clear on what the
permission issue is. My access flags look fine to me (the MPI_MODE_UNIQUE_OPEN
flag makes no difference in this case, as I am only opening a single file
anyway). If I write the file to shared NFS storage instead, all is "fine"
(obviously, I do not consider that a permanent solution, though).
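
(One more data point I can gather on the permission angle: a long listing of
the volume root, to compare ownership and mode bits against the user the MPI
job runs as -- the path here is again a placeholder:)

/usr/local/bin/pvfs2-ls -l /mnt/pvfs2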

Does anyone have any idea why this is not working? Alternatively, or in
addition, does anyone have step-by-step instructions for building and setting
up PVFS2 with OpenMPI, along with an example program? This is the first time
I've attempted this, so I may well be doing something wrong.

Thanks in advance,
Evan
