On Tue, Jan 12, 2010 at 02:15:54PM -0800, Evan Smyth wrote:
> OpenMPI 1.4 (I had the same issue with 1.3.3) is configured with
> ./configure --prefix=/work/rd/evan/archives/openmpi/openmpi/1.4/enable_pvfs \
> --enable-mpi-threads --with-io-romio-flags="--with-filesystems=pvfs2+ufs+nfs"

> PVFS 2.8.1 is configured to install in the default location (/usr/local) with
> ./configure --with-mpi=/work/rd/evan/archives/openmpi/openmpi/1.4/enable_pvfs

In addition to Jeff's request for the build logs, do you have
'pvfs2-config' in your path?  ROMIO's configure step uses it to locate
the PVFS2 headers and libraries, so it needs to be visible in the
shell where you build OpenMPI.

> I build and install these (in this order) and set up my PVFS2 space
> using the instructions at pvfs.org. I am able to use this space with
> the /usr/local/bin/pvfs2-ls types of commands. I am simply running a
> 2-server config (2 data servers, and the same 2 hosts are metadata
> servers). As I say, manually this all seems fine (even when I'm not
> root). It may be relevant that I am *not* using the kernel interface
> for PVFS2, as I am just trying to get a better understanding of how
> this works.

That's a good piece of information.  I run in that configuration
often, so we should be able to make this work.

> It is perhaps relevant that I have not had to explicitly tell
> OpenMPI where I installed PVFS. I have told PVFS where I installed
> OpenMPI, though. This does seem slightly odd but there does not
> appear to be a way of telling OpenMPI this information. Perhaps it
> is not needed.

PVFS needs an MPI library only to build its MPI-based test cases.  The
servers, client libraries, and utilities do not use MPI.

> In any event, I then build my test program against this OpenMPI, and
> in that program I have the following call sequence (where i is 0 and
> mntPoint is the path to my pvfs2 mount point -- I also tried
> prefixing this with "pvfs2:", as I read somewhere that that was
> optional).

In this case, since you do not have the PVFS file system mounted, the
'pvfs2:' prefix is mandatory.  Otherwise, the MPI-IO library will look
for a directory that does not exist.
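
Inside your test program, the open would look something like this (a
minimal sketch -- "mntPoint" and the file name are placeholders from
your description, not anything OpenMPI requires):

    /* Sketch: open a file on an unmounted PVFS2 volume.  With no
     * kernel mount for ROMIO to inspect, the "pvfs2:" prefix tells
     * it which file system driver to use. */
    MPI_File fh;
    char path[256];
    snprintf(path, sizeof(path), "pvfs2:%s/testfile", mntPoint);
    MPI_File_open(MPI_COMM_WORLD, path,
                  MPI_MODE_CREATE | MPI_MODE_RDWR,
                  MPI_INFO_NULL, &fh);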

> Which will only execute on one of my ranks (the way I'm running it).
> No matter what I try, the MPI_File_open call fails with an
> MPI_ERR_ACCESS error code.  This suggests a permission problem, but I
> am able to manually cp and rm from the pvfs2 space without problems,
> so I am not at all clear on what the permission problem is. My
> access flags look fine to me (the MPI_MODE_UNIQUE_OPEN flag makes no
> difference in this case, as I'm only opening a single file anyway).
> If I write this file to shared NFS storage, all is "fine"
> (obviously, I do not consider that a permanent solution, though).
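
The error class alone doesn't tell you much.  MPI-IO's default error
handler on files is MPI_ERRORS_RETURN, so you can decode the returned
code with MPI_Error_string; ROMIO often puts a more specific message
in there.  A rough sketch, where "path", "amode", and "fh" stand in
for whatever your program already uses:

    /* Sketch: print the full error string for a failed open. */
    char msg[MPI_MAX_ERROR_STRING];
    int rc, len;

    rc = MPI_File_open(MPI_COMM_WORLD, path, amode, MPI_INFO_NULL, &fh);
    if (rc != MPI_SUCCESS) {
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "MPI_File_open(%s): %s\n", path, msg);
    }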

> Does anyone have any idea why this is not working? Alternatively, or
> in addition, does anyone have step-by-step instructions for how to
> build and set up PVFS2 with OpenMPI, as well as an example program?
> This is the first time I've attempted this, so I may well be doing
> something wrong.

It sounds like you're on the right track.  I should update the PVFS
quickstart with the OpenMPI specifics.  In addition to pvfs2-ping and
pvfs2-ls, make sure you can pvfs2-cp files to and from your volume.
If those three utilities work, then your OpenMPI installation should
work as well.
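
For a self-contained check from the MPI side, something like the
following minimal program exercises the same code path as pvfs2-cp.
It's an illustrative sketch, not from any official test suite: the
"pvfs2:/pvfs2-fs/testfile" path is hypothetical, so adjust it for
your volume, then run across your two hosts with something like
"mpiexec -n 2 ./pvfs2_smoke" and verify the result with pvfs2-ls.

    /* pvfs2_smoke.c: minimal MPI-IO smoke test for an unmounted
     * PVFS2 volume.  Each rank writes one line at its own offset. */
    #include <stdio.h>
    #include <string.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        char msg[MPI_MAX_ERROR_STRING];
        char path[] = "pvfs2:/pvfs2-fs/testfile"; /* hypothetical */
        char buf[]  = "hello pvfs2\n";
        int rank, rc, len;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        rc = MPI_File_open(MPI_COMM_WORLD, path,
                           MPI_MODE_CREATE | MPI_MODE_WRONLY,
                           MPI_INFO_NULL, &fh);
        if (rc != MPI_SUCCESS) {
            MPI_Error_string(rc, msg, &len);
            fprintf(stderr, "rank %d: open failed: %s\n", rank, msg);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        MPI_File_write_at(fh, (MPI_Offset)rank * strlen(buf), buf,
                          (int)strlen(buf), MPI_CHAR, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        if (rank == 0)
            printf("ok: check the file with pvfs2-ls / pvfs2-cp\n");
        MPI_Finalize();
        return 0;
    }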

==rob

-- 
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA
