[OMPI users] external32 i/o not implemented?
Hi,

I am attempting to use the 'external32' data representation in order to read and write portable data files. I believe I understand how to do this, but I receive the following run-time error from the mpi_file_set_view call:

    MPI_FILE_SET_VIEW (line 118): **unsupporteddatarep

If I replace 'external32' with 'native' in the mpi_file_set_view call then everything works, but the data file is written in little-endian order on my Opteron cluster. Just for grins I also tried 'internal', but this produces the unsupporteddatarep error as well.

Is the 'external32' data representation implemented? Do I need to do something else to access it? I looked in the FAQs as well as the mailing list archives, but I cannot seem to find any threads discussing this issue. I would greatly appreciate any advice.

I have attached my sample Fortran codes (explicit_write.f, explicit_read.f, Makefile) as well as the config.log, the output of ompi_info, and my environment variable settings. I am running Fedora Core 4 with the 2.6.17-1.2142_FC4smp kernel.

Thanks,
---Tom

Attachments: explicit_write.f, explicit_read.f, Makefile, config.log.gz, ompi.info.gz, env.out
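For readers hitting the same error, the failing pattern reduces to roughly the free-form Fortran sketch below: open a shared file and set a view that requests the 'external32' representation. The file name, element count, and per-rank displacement here are illustrative assumptions, not taken from the attached explicit_write.f; swapping 'external32' for 'native' in the MPI_FILE_SET_VIEW call is what makes it run (in the machine's little-endian byte order) on the cluster described above.

program explicit_write_sketch
  use mpi
  implicit none
  integer, parameter :: n = 100
  integer :: ierr, rank, fh
  integer(kind=MPI_OFFSET_KIND) :: disp
  real :: buf(n)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  buf = real(rank)

  call MPI_FILE_OPEN(MPI_COMM_WORLD, 'data.out', &
                     MPI_MODE_WRONLY + MPI_MODE_CREATE, &
                     MPI_INFO_NULL, fh, ierr)

  ! Each rank writes a contiguous block of n reals at its own offset.
  ! external32 encodes REAL as 4 big-endian bytes, hence the factor of 4.
  disp = int(rank, MPI_OFFSET_KIND) * n * 4
  call MPI_FILE_SET_VIEW(fh, disp, MPI_REAL, MPI_REAL, 'external32', &
                         MPI_INFO_NULL, ierr)

  call MPI_FILE_WRITE_ALL(fh, buf, n, MPI_REAL, MPI_STATUS_IGNORE, ierr)

  call MPI_FILE_CLOSE(fh, ierr)
  call MPI_FINALIZE(ierr)
end program explicit_write_sketch

With the ROMIO-based MPI-IO layer discussed in the replies below, the MPI_FILE_SET_VIEW call is where the unsupported-datarep error is raised.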
Re: [OMPI users] external32 i/o not implemented?
Rainer,

Thank you for taking the time to reply to my query. Do I understand correctly that the external32 data representation for I/O is not implemented? I am puzzled, since the MPI-2 standard clearly defines external32 and devotes considerable discussion to how useful this feature is for file interoperability. So do both Open MPI and MPICH2 fail to adhere to the standard in this regard? If this is really the case, how difficult is it to define a custom data representation that is 32-bit big-endian on all platforms? Do you know of any documentation that explains how to do this?

Thanks again,
---Tom

Rainer Keller wrote:

Hello Tom,
Like MPICH2, Open MPI uses ROMIO as the underlying MPI-IO implementation (as an MCA component). ROMIO implements the 'native' datarep.

With best regards,
Rainer

--
===
Thomas S. Lund
Sr. Research Scientist
Colorado Research Associates, a division of NorthWest Research Associates
3380 Mitchell Ln.
Boulder, CO 80301
(303) 415-9701 X 209 (voice)
(303) 415-9702 (fax)
l...@cora.nwra.com
===
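One workaround that stays within plain MPI-IO, short of registering a full user-defined representation via MPI_REGISTER_DATAREP (which requires writing the read and write conversion callbacks yourself), is to convert the data to external32 byte order in memory with MPI_PACK_EXTERNAL and then write the resulting bytes through the supported 'native' view. The free-form Fortran sketch below illustrates the idea; the buffer length, file name, and per-rank offsets are illustrative assumptions, and availability of MPI_PACK_EXTERNAL depends on the MPI implementation in use.

program pack_external_sketch
  use mpi
  implicit none
  integer, parameter :: n = 100
  integer :: ierr, rank, fh
  integer(kind=MPI_ADDRESS_KIND) :: packsize, position
  integer(kind=MPI_OFFSET_KIND) :: disp
  real :: buf(n)
  character, allocatable :: packed(:)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  buf = real(rank)

  ! Ask how many bytes n REALs occupy in the external32 representation.
  call MPI_PACK_EXTERNAL_SIZE('external32', n, MPI_REAL, packsize, ierr)
  allocate(packed(packsize))

  ! Convert the buffer to big-endian external32 bytes in memory.
  position = 0
  call MPI_PACK_EXTERNAL('external32', buf, n, MPI_REAL, &
                         packed, packsize, position, ierr)

  ! Write the already-converted bytes through the supported 'native' view.
  call MPI_FILE_OPEN(MPI_COMM_WORLD, 'data_be.out', &
                     MPI_MODE_WRONLY + MPI_MODE_CREATE, &
                     MPI_INFO_NULL, fh, ierr)
  disp = int(rank, MPI_OFFSET_KIND) * packsize
  call MPI_FILE_SET_VIEW(fh, disp, MPI_BYTE, MPI_BYTE, 'native', &
                         MPI_INFO_NULL, ierr)
  call MPI_FILE_WRITE_ALL(fh, packed, int(position), MPI_BYTE, &
                          MPI_STATUS_IGNORE, ierr)

  call MPI_FILE_CLOSE(fh, ierr)
  call MPI_FINALIZE(ierr)
end program pack_external_sketch

On the read side, MPI_UNPACK_EXTERNAL would perform the reverse conversion after reading the raw bytes back with a 'native' byte view.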
Re: [OMPI users] external32 i/o not implemented?
Rob,

Thank you for your informative reply. I had no luck finding the external32 data representation in any of several MPI implementations, and thus I do need to devise an alternative strategy. Do you know of a good reference explaining how to combine HDF5 with MPI?

---Tom

Robert Latham wrote:

Hi Tom,

You do understand correctly. I do not know of an MPI-IO implementation that supports external32. When you say "custom data representation", do you mean an MPI-IO user-defined data representation?

An alternative approach would be to use a higher-level library like Parallel-NetCDF or HDF5 (configured for parallel I/O). Those libraries already define a file format and implement all the necessary data conversion routines, and they have a wealth of ancillary tools and programs to work with their respective file formats. Additionally, those higher-level libraries offer more features than MPI-IO, such as the ability to define attributes on variables and datafiles. Even better, there is the potential that these libraries might offer some clever optimizations for your workload, saving you the effort. Further, you can use those higher-level libraries on top of any MPI-IO implementation, not just Open MPI or MPICH2.

This is a little bit of a diversion from your original question, but to sum it up, I'd say one potential answer to the lack of external32 is to use a higher-level library and sidestep the issue of MPI-IO data representations altogether.

==rob

--
===
Thomas S. Lund
Sr. Research Scientist
Colorado Research Associates, a division of NorthWest Research Associates
3380 Mitchell Ln.
Boulder, CO 80301
(303) 415-9701 X 209 (voice)
(303) 415-9702 (fax)
l...@cora.nwra.com
===
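As a rough pointer in that direction, the usual parallel HDF5 pattern from Fortran is sketched below: open the file through a file-access property list that selects the MPI-IO driver, give each rank a hyperslab of one global dataset, and write collectively. The dataset name, sizes, and 1-D decomposition are illustrative assumptions, the exact integer kinds in the Fortran interface vary between HDF5 releases, and the library must be built with both parallel and Fortran support; treat this as an outline rather than a drop-in example.

program phdf5_sketch
  use mpi
  use hdf5
  implicit none
  integer, parameter :: nlocal = 100
  integer :: ierr, hdferr, rank, nprocs
  integer(hid_t) :: plist_id, file_id, filespace, memspace, dset_id, xfer_id
  integer(hsize_t) :: dims(1), ldims(1), offset(1)
  real :: buf(nlocal)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  buf = real(rank)

  call h5open_f(hdferr)

  ! File-access property list: drive the file with MPI-IO underneath.
  call h5pcreate_f(H5P_FILE_ACCESS_F, plist_id, hdferr)
  call h5pset_fapl_mpio_f(plist_id, MPI_COMM_WORLD, MPI_INFO_NULL, hdferr)
  call h5fcreate_f('data.h5', H5F_ACC_TRUNC_F, file_id, hdferr, &
                   access_prp=plist_id)

  ! One global 1-D dataset; each rank owns a contiguous slab of nlocal reals.
  dims(1)  = int(nlocal, hsize_t) * int(nprocs, hsize_t)
  ldims(1) = nlocal
  call h5screate_simple_f(1, dims, filespace, hdferr)
  call h5dcreate_f(file_id, 'x', H5T_NATIVE_REAL, filespace, dset_id, hdferr)

  ! Select this rank's slab in the file and describe the memory buffer.
  offset(1) = int(rank, hsize_t) * int(nlocal, hsize_t)
  call h5sselect_hyperslab_f(filespace, H5S_SELECT_SET_F, offset, ldims, hdferr)
  call h5screate_simple_f(1, ldims, memspace, hdferr)

  ! Collective write; HDF5 handles byte order and stores a self-describing file.
  call h5pcreate_f(H5P_DATASET_XFER_F, xfer_id, hdferr)
  call h5pset_dxpl_mpio_f(xfer_id, H5FD_MPIO_COLLECTIVE_F, hdferr)
  call h5dwrite_f(dset_id, H5T_NATIVE_REAL, buf, ldims, hdferr, &
                  mem_space_id=memspace, file_space_id=filespace, &
                  xfer_prp=xfer_id)

  call h5dclose_f(dset_id, hdferr)
  call h5sclose_f(filespace, hdferr)
  call h5sclose_f(memspace, hdferr)
  call h5pclose_f(plist_id, hdferr)
  call h5pclose_f(xfer_id, hdferr)
  call h5fclose_f(file_id, hdferr)
  call h5close_f(hdferr)
  call MPI_FINALIZE(ierr)
end program phdf5_sketch

The resulting file is portable and self-describing regardless of the endianness of the machines that wrote it, which is essentially what external32 was meant to provide at the MPI-IO level.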