On Apr 30, 2010, at 10:36 PM, JiangjunZheng wrote:

> I am using Rocks+openmpi+hdf5+pvfs2. The software on the Rocks+PVFS2 cluster 
> outputs HDF5 files after the computation finishes. However, when the output 
> starts, it shows errors:
> [root@nanohv pvfs2]# ./hdf5_mpio DH-ey-001400.20.h5
> Testing simple C MPIO program with 1 processes accessing file 
> DH-ey-001400.20.h5
> (Filename can be specified via program argument)
> Proc 0: hostname=nanohv.columbia.edu
> Proc 0: MPI_File_open failed (MPI_ERR_IO: input/output error)
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 
> 1.
> 
> If run in a non-shared folder on the main node, the program works fine. It 
> shows:
> Proc 0: hostname=nanohv.columbia.edu
> Proc 0: all tests passed

This seems to indicate that the file failed to open for some reason in your 
first test.

Given that this is an HDF5 test program, you might want to ping them for more 
details...?

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/

