Hi Rayson,
Just seen this.
In the end we've worked around it by creating successive views of the file
that are all less than 2GB, and then offsetting them to eventually read in
everything. It's a bit of a pain to keep track of, but it works for the moment.
I was intending to follow your hint
Hi Rayson,
thanks for the information!
The problem now is that I am in the same situation that guy described:
I must write specific code to work around that limitation, and I also
need to write an irregularly indexed array
(http://www.mcs.anl.gov/research/projects/mpi/usingmpi2/examples/more
Hi Eric,
Sounds like it's also related to this problem reported by Scinet back in July:
http://www.open-mpi.org/community/lists/users/2012/07/19762.php
And I think I found the issue, but I have not followed up with
the ROMIO guys yet. And I was not sure if SciNet was waiting for the
fix or
Hi Eric
Have you tried creating a user-defined MPI datatype
(say, with MPI_Type_contiguous or MPI_Type_vector) and passing it
to the MPI function calls instead of MPI_LONG?
Then you could use the new type with a new count
(i.e., an integer smaller than "size", and
smaller than the maximum integer)
Hi,
I get this error when trying to write 360 000 000 000 MPI_LONGs:
with Open MPI 1.4.5:
ERROR Returned by MPI_File_write_all: 35
ERROR_string Returned by MPI_File_write_all: MPI_ERR_IO: input/output error
with Open MPI 1.6.2:
ERROR Returned by MPI_File_write_all: 13
ERROR_string Returned by MPI_