I had been using an older variant of the needed flag for building romio (because the newer one was failing, as the preceding suggests). I made this change and built with the correct romio flag. I next needed to fix the way pvfs2 builds so that it uses -fPIC. Interestingly, about 95% of pvfs2 builds with this flag by default, but the final 5% does not, and it needs to. With that fixed, built, and installed, I was able to rebuild openmpi correctly. My test program now works like a charm. I will give the *precise* steps I needed to build pvfs2 2.8.1 with openmpi 1.4 here for the record...

1. Determine where openmpi will be installed. I'm not certain that it actually needs to be installed there for this to work; if it does, you will need to install openmpi twice. The first time, it clearly need not be built entirely correctly for pvfs2 (it can't be, because step 2 is a prerequisite for that), but building without the "--with-io-romio-flags=..." option should do, if it actually must be installed at all. I'm betting it is not required, but as I say, I have not verified this. It certainly works if it has been pre-installed as I just indicated.

2. Build pvfs2 correctly (I get conflicting info on whether "--with-mpi=..." is needed, but FWIW, this is how I built it; it installs into /usr/local, which is its default location)...

cd <pvfs2-build-area>
setenv CFLAGS -fPIC
./configure --with-mpi=/work/rd/evan/archives/openmpi/openmpi/1.4/enable_pvfs \
    --enable-verbose-build
make all
<become root>
make install
exit

3. Build openmpi correctly. This is straightforward at this point. Note that --enable-mpi-threads is not required for pvfs2 to work, but I happen to want this flag as well.

cd <openmpi-build-area>
<make the change described in the preceding post:>
ompi/mca/io/romio/romio/adio/ad_pvfs2/ad_pvfs2.h
--- a/ompi/mca/io/romio/romio/adio/ad_pvfs2/ad_pvfs2.h  Thu Sep 03 11:55:51 2009 -0500
+++ b/ompi/mca/io/romio/romio/adio/ad_pvfs2/ad_pvfs2.h  Mon Sep 21 10:16:27 2009 -0500
@@ -11,6 +11,10 @@
 #include "adio.h"
 #ifdef HAVE_PVFS2_H
 #include "pvfs2.h"
+#endif
+
+#ifdef PVFS2_VERSION_MAJOR
+#include "pvfs2-compat.h"
 #endif
./configure --prefix=/work/rd/evan/archives/openmpi/openmpi/1.4/enable_pvfs \
    --enable-mpi-threads --with-io-romio-flags="--with-file-system=pvfs2+ufs+nfs"
make all
<become root>
make install
exit
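
As a sanity check, here is a minimal test program in the spirit of the one I used (my actual program is not reproduced in this thread, so treat it as a sketch; the /mnt/pvfs2 mount point is just an example):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    int rank, err;
    const char text[] = "hello pvfs2\n";

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* the "pvfs2:" prefix is required when the volume is not kernel-mounted */
    err = MPI_File_open(MPI_COMM_WORLD, "pvfs2:/mnt/pvfs2/testfile",
                        MPI_MODE_CREATE | MPI_MODE_WRONLY,
                        MPI_INFO_NULL, &fh);
    if (err != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(err, msg, &len);
        fprintf(stderr, "rank %d: MPI_File_open failed: %s\n", rank, msg);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* only rank 0 writes; the open itself is collective over the communicator */
    if (rank == 0)
        MPI_File_write(fh, text, (int)(sizeof(text) - 1), MPI_CHAR,
                       MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

Build it with the mpicc from the freshly installed prefix and run it with the matching mpirun. As Rob notes below, the "pvfs2:" prefix is mandatory when the file system is not mounted through the kernel.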

... and that's it. Hopefully, the next person who needs to figure this out will be helped by these instructions.

Evan

This seems to have done the trick.

Edgar Gabriel wrote:
I don't know whether it's relevant for this problem or not, but a couple of weeks ago we also found that we had to apply the following patch to compile ROMIO with OpenMPI over pvfs2. There is an additional header, pvfs2-compat.h, included in the ROMIO version of MPICH, but it is somehow missing in the OpenMPI version....

ompi/mca/io/romio/romio/adio/ad_pvfs2/ad_pvfs2.h
--- a/ompi/mca/io/romio/romio/adio/ad_pvfs2/ad_pvfs2.h  Thu Sep 03 11:55:51 2009 -0500
+++ b/ompi/mca/io/romio/romio/adio/ad_pvfs2/ad_pvfs2.h  Mon Sep 21 10:16:27 2009 -0500
@@ -11,6 +11,10 @@
 #include "adio.h"
 #ifdef HAVE_PVFS2_H
 #include "pvfs2.h"
+#endif
+
+#ifdef PVFS2_VERSION_MAJOR
+#include "pvfs2-compat.h"
 #endif


Thanks
Edgar


Rob Latham wrote:
On Tue, Jan 12, 2010 at 02:15:54PM -0800, Evan Smyth wrote:
> OpenMPI 1.4 (had same issue with 1.3.3) is configured with
> ./configure --prefix=/work/rd/evan/archives/openmpi/openmpi/1.4/enable_pvfs \
> --enable-mpi-threads --with-io-romio-flags="--with-filesystems=pvfs2+ufs+nfs"
> PVFS 2.8.1 is configured to install in the default location (/usr/local) with
> ./configure --with-mpi=/work/rd/evan/archives/openmpi/openmpi/1.4/enable_pvfs
In addition to Jeff's request for the build logs, do you have
'pvfs2-config' in your path?
> I build and install these (in this order) and setup my PVFS2 space using
> instructions at pvfs.org. I am able to use this space using the
> /usr/local/bin/pvfs2-ls types of commands. I am simply running a 2-server
> config (2 data servers and the same 2 hosts are metadata servers). As I say,
> manually, this all seems fine (even when I'm not root). It may be
> relevant that I am *not* using the kernel interface for PVFS2 as I am
> just trying to get a better understanding of how this works.
That's a good piece of information.  I run in that configuration
often, so we should be able to make this work.

> It is perhaps relevant that I have not had to explicitly tell
> OpenMPI where I installed PVFS. I have told PVFS where I installed
> OpenMPI, though. This does seem slightly odd but there does not
> appear to be a way of telling OpenMPI this information. Perhaps it
> is not needed.
PVFS needs an MPI library only to build MPI-based testcases.  The
servers, client libraries, and utilities do not use MPI.

> In any event, I then build my test program against this OpenMPI and
> in that program I have the following call sequence (i is 0 and where
> mntPoint is the path to my pvfs2 mount point -- I also tried
> prefixing a "pvfs2:" in the front of this as I read somewhere that
> that was optional).
In this case, since you do not have the PVFS file system mounted, the
'pvfs2:' prefix is mandatory.  Otherwise, the MPI-IO library will try
to look for a directory that does not exist.
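For illustration, the open should look something like this (hypothetical path; every rank passes the same prefixed name):

/* with no kernel mount, only the prefixed form can be resolved; ROMIO
 * otherwise stats the path to guess the file system type and finds nothing */
MPI_File fh;
MPI_File_open(MPI_COMM_WORLD, "pvfs2:/mnt/pvfs2/testfile",
              MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);
MPI_File_close(&fh);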

> Which will only execute on one of my ranks (the way I'm running it).
> No matter what I try, the MPI_File_open call fails with an
> MPI_ERR_ACCESS error code. This suggests a permission problem but I
> am able to manually cp and rm from the pvfs2 space without problem
> so I am not at all clear on what the permission problem is. My
> access flags look fine to me (the MPI_MODE_UNIQUE_OPEN flag makes no
> difference in this case as I'm only opening a single file anyway).
> If I write this file to shared NFS storage, all is "fine"
> (obviously, I do not consider that a permanent solution, though).
> Does anyone have any idea why this is not working? Alternately or in
> addition, does anyone have step-by-step instructions for how to
> build and set up PVFS2 with OpenMPI as well as an example program
> because this is the first time I've attempted this so I may well be
> doing something wrong.
It sounds like you're on the right track.  I should update the PVFS
quickstart for the OpenMPI specifics.  In addition to pvfs2-ping and
pvfs2-ls, make sure you can pvfs2-cp files to and from your volume.
If those 3 utilities work, then your OpenMPI installation should work
as well.
==rob



--
------
Evan Smyth
Software Architect
DreamWorks Animation SKG
e...@dreamworks.com
818.695.4105
