Hello!

The problem I face is not Open MPI specific, but I hope the MPI wizards on this list can help me nonetheless.

I am running and developing a large-scale scientific code written in Fortran90. One type of object is a global 1-D vector, which contains the data for the particles in the application. I want to use MPI I/O for saving the particle data, but the global 1-D array holding the data can reach up to 100 billion elements, so array offsets and global sizes have to be 64-bit.

We use MPI_TYPE_CREATE_SUBARRAY to create a custom type and then MPI_FILE_SET_VIEW and MPI_FILE_WRITE_ALL to save the 3D field data. This works with good performance even on very large installations / runs, but the arguments to MPI_TYPE_CREATE_SUBARRAY are 32-bit integers, and that is not sufficient for the 1-D particle array. It needs 64-bit offsets and 64-bit global sizes. The local sizes for each thread do fit in 32 bits, though.
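For reference, here is a stripped-down sketch of what we do for the 3D fields (variable names like nx_glob, nx_loc, ix0 are made up for the example, error checking omitted):

  use mpi
  integer :: sizes(3), subsizes(3), starts(3), filetype, fh, ierr
  integer(kind=MPI_OFFSET_KIND) :: disp
  real, allocatable :: field(:,:,:)

  ! global grid size, the local block owned by this rank, and its start indices
  sizes    = (/ nx_glob, ny_glob, nz_glob /)
  subsizes = (/ nx_loc,  ny_loc,  nz_loc  /)
  starts   = (/ ix0,     iy0,     iz0     /)   ! 0-based, plain 32-bit integers

  call MPI_TYPE_CREATE_SUBARRAY(3, sizes, subsizes, starts, &
       MPI_ORDER_FORTRAN, MPI_REAL, filetype, ierr)
  call MPI_TYPE_COMMIT(filetype, ierr)

  call MPI_FILE_OPEN(MPI_COMM_WORLD, 'fields.dat', &
       MPI_MODE_WRONLY + MPI_MODE_CREATE, MPI_INFO_NULL, fh, ierr)
  disp = 0
  call MPI_FILE_SET_VIEW(fh, disp, MPI_REAL, filetype, 'native', &
       MPI_INFO_NULL, ierr)
  call MPI_FILE_WRITE_ALL(fh, field, nx_loc*ny_loc*nz_loc, MPI_REAL, &
       MPI_STATUS_IGNORE, ierr)
  call MPI_FILE_CLOSE(fh, ierr)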

What MPI call could I use to make a custom MPI type that describes the above data, with 64-bit indices / global sizes?

As an example, for 3 threads the type layout would be:

Thread 0: n0 reals, n1 holes, n2 holes
Thread 1: n0 holes, n1 reals, n2 holes
Thread 2: n0 holes, n1 holes, n2 reals

The problem is that I have to generalize this to 100 billion elements and 250k threads.
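To make the sizes concrete: the 64-bit offset each rank needs would be computed roughly as below (a sketch; n_loc is a made-up name for the local particle count, and I assume MPI_INTEGER8 is available). It is exactly this offset and the global size that no longer fit into the 32-bit INTEGER arguments of MPI_TYPE_CREATE_SUBARRAY:

  use mpi
  integer :: n_loc, myrank, ierr             ! local count fits in 32 bits
  integer(kind=8) :: n_loc8, offset, n_glob  ! offset / global size need 64 bits

  n_loc8 = n_loc
  ! exclusive prefix sum of the local counts gives each rank's start offset
  call MPI_EXSCAN(n_loc8, offset, 1, MPI_INTEGER8, MPI_SUM, MPI_COMM_WORLD, ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
  if (myrank == 0) offset = 0                ! MPI_EXSCAN leaves rank 0 undefined
  call MPI_ALLREDUCE(n_loc8, n_glob, 1, MPI_INTEGER8, MPI_SUM, &
       MPI_COMM_WORLD, ierr)
  ! offset and n_glob can exceed 2**31, so they cannot be passed to
  ! MPI_TYPE_CREATE_SUBARRAY's array_of_starts / array_of_sizes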

As a remark, given that data keeps getting bigger: it would be very nice if the arguments to MPI_TYPE_CREATE_SUBARRAY, and the arguments to other similar routines, could be 64-bit.

TIA,

Troels

