One thing to check is whether you specified the CFLAGS/LDFLAGS/LIBS for
pvfs2 when you configured Open MPI.

Here is what I do to get Open MPI working over pvfs2 on our cluster:

./configure CFLAGS=-I/path-to-pvfs2/include/ \
    LDFLAGS=-L/path-to-pvfs2/lib/ LIBS="-lpvfs2 -lpthread" \
    --with-wrapper-cflags=-I/path-to-pvfs2/include/ \
    --with-wrapper-ldflags=-L/path-to-pvfs2/lib/ \
    --with-wrapper-libs="-lpvfs2 -lpthread" \
    --with-io-romio-flags="--with-file-system=pvfs2+ufs+nfs --with-pvfs2=/path-to-pvfs2/" ...
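
Once it is built, you can check whether ROMIO actually talks to PVFS2
without involving HDF5 at all. Below is a minimal MPI-IO sketch of my
own (not the HDF5 test); the "pvfs2:" filename prefix forces ROMIO's
PVFS2 driver, /mnt/pvfs2 is just the mount point from your mail, and
mpio_test.out is an arbitrary test filename, so adjust both to your
setup:

/* mpio_pvfs2_test.c: minimal MPI-IO sanity check against a PVFS2 mount.
 * Build: mpicc mpio_pvfs2_test.c -o mpio_pvfs2_test
 * Run:   mpirun -np 1 ./mpio_pvfs2_test
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    char buf[] = "hello pvfs2";
    char err[MPI_MAX_ERROR_STRING];
    int rc, len;

    MPI_Init(&argc, &argv);

    /* "pvfs2:" tells ROMIO to use its PVFS2 driver directly; /mnt/pvfs2
     * is the mount point from the original post -- adjust as needed. */
    rc = MPI_File_open(MPI_COMM_WORLD, "pvfs2:/mnt/pvfs2/mpio_test.out",
                       MPI_MODE_CREATE | MPI_MODE_WRONLY,
                       MPI_INFO_NULL, &fh);
    if (rc != MPI_SUCCESS) {
        /* File operations default to MPI_ERRORS_RETURN, so report and abort */
        MPI_Error_string(rc, err, &len);
        fprintf(stderr, "MPI_File_open failed: %s\n", err);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_File_write(fh, buf, sizeof(buf), MPI_CHAR, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
    printf("pvfs2 open/write/close OK\n");

    MPI_Finalize();
    return 0;
}

Compile it with mpicc and run it on a node that has the PVFS2 volume
mounted. It is also worth running "mpicc --showme" to confirm the
wrapper compiler picked up the -I/-L/-lpvfs2 flags from the configure
line above.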

Thanks
Mohamad

JiangjunZheng wrote:
> Dear All,
>
> I am using Rocks + Open MPI + HDF5 + PVFS2. The software on the
> Rocks+PVFS2 cluster writes HDF5 files after computing. However, when
> the output starts, it shows errors:
> [root@nanohv pvfs2]# ./hdf5_mpio DH-ey-001400.20.h5
> Testing simple C MPIO program with 1 processes accessing file DH-ey-001400.20.h5
> (Filename can be specified via program argument)
> Proc 0: hostname=nanohv.columbia.edu
> Proc 0: MPI_File_open failed (MPI_ERR_IO: input/output error)
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with
> errorcode 1.
>
> If run in a non-shared folder on the main node, the program runs
> fine; it shows:
> Proc 0: hostname=nanohv.columbia.edu
> Proc 0: all tests passed
>
> The following are the PATH and LD_LIBRARY_PATH settings on one of the
> nodes (I don't know whether the HDF5 program cannot find something
> from the Open MPI I/O layer. What will be needed when it reads and
> writes files?):
> [root@compute-0-3 ~]# $PATH
> -bash: /usr/kerberos/sbin:/usr/kerberos/bin:/usr/java/latest/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/rocks/bin:/opt/rocks/sbin:/opt/gm/bin:/opt/hdf5/bin:/opt/meep-mpi/bin:/opt/openmpi/bin:/opt/pvfs2/bin:/root/bin:
>
> [root@compute-0-3 ~]# $LD_LIBRARY_PATH
> -bash: :/opt/gm/lib:/opt/hdf5/lib:/opt/meep-mpi/lib:/opt/openmpi/lib:/opt/pvfs2/lib: No such file or directory
>
> [root@compute-0-3 ~]# mount -t pvfs2
> tcp://nanohv:3334/pvfs2-fs on /mnt/pvfs2 type pvfs2 (rw)
>
> [root@compute-0-3 ~]# ompi_info | grep gm
>                  MCA btl: gm (MCA v2.0, API v2.0, Component v1.4.1)
>   
>
> The attached "log.out" was obtained by running "./configure --with-gm
> --prefix=/opt/openmpi | tee log.out".
>
> Can anyone suggest what might be causing the input/output error?
> MANY THANKS!!!
>
> Best,
> Jiangjun