Hello,
Using OpenMPI 1.2.3 + PGI 7.0 + parallel HDF5 on Lustre is giving me the error:

File locking failed in ADIOI_Set_lock. If the file system is NFS, you need to use NFS version 3, ensure that the lockd daemon is running on all the machines, and mount the directory with the 'noac' option (no attribute caching).
[nyx357.engin.umich.edu:21186] MPI_ABORT invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 1

Has anyone else managed to use HDF5 + Lustre + OpenMPI's ROMIO setup? I can do MPI_File_write and MPI_File_read just fine.
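(For context: ADIOI_Set_lock failing on Lustre usually means the client mount doesn't support fcntl byte-range locks, which some ROMIO code paths need but plain MPI_File_write/MPI_File_read calls may not exercise. A sketch of what I'd check, with the mount point and server names as placeholders, not my actual config:)

```shell
# Check whether the Lustre client mount advertises flock support;
# look for 'flock' or 'localflock' among the mount options.
mount | grep lustre

# Remount with fcntl locking enabled (server/fsname/path are
# hypothetical -- substitute your own):
umount /mnt/lustre
mount -t lustre -o flock mds@tcp0:/fsname /mnt/lustre
```

(Note that 'localflock' only gives node-local locks, which is not safe for multi-node MPI-IO.)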


Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985

