Re: [OMPI users] parallel I/O on 64-bit indexed arrays

2011-08-05 Thread Rob Latham
On Wed, Jul 27, 2011 at 06:13:05PM +0200, Troels Haugboelle wrote: > and we get good (+GB/s) performance when writing files from large runs. > > Interestingly, an alternative and conceptually simpler option is to > use MPI_FILE_WRITE_ORDERED, but the performance of that function on > Blue-Gene/P s…
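
For comparison, a minimal sketch of the MPI_FILE_WRITE_ORDERED alternative mentioned above (the buffer contents, counts, and the file name 'particles.dat' are illustrative, not taken from the thread): each rank writes its block through the shared file pointer in rank order, so no explicit offsets or derived datatypes are needed, at the cost of the ordering that the message above flags as a performance concern on Blue Gene/P.

program write_ordered_sketch
  use mpi
  implicit none
  integer :: ierr, myrank, fh, nlocal
  real, allocatable :: particles(:)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)

  nlocal = 1000 + myrank              ! unequal per-rank particle counts are fine here
  allocate(particles(nlocal))
  particles = real(myrank)

  call MPI_FILE_OPEN(MPI_COMM_WORLD, 'particles.dat', &
                     MPI_MODE_CREATE + MPI_MODE_WRONLY, MPI_INFO_NULL, fh, ierr)
  ! Ranks write one after another in rank order via the shared file pointer.
  call MPI_FILE_WRITE_ORDERED(fh, particles, nlocal, MPI_REAL, MPI_STATUS_IGNORE, ierr)
  call MPI_FILE_CLOSE(fh, ierr)
  call MPI_FINALIZE(ierr)
end program write_ordered_sketch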

Re: [OMPI users] parallel I/O on 64-bit indexed arrays

2011-07-27 Thread Troels Haugboelle
For the benefit of people running into similar problems and ending up reading this thread, we finally found a solution. One can use the MPI function MPI_TYPE_CREATE_HINDEXED to create an MPI datatype with a 32-bit local count and 64-bit offsets, which will work well enough for us for t…
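
A minimal sketch of the workaround described above, assuming a REAL particle buffer, one block per rank, and a file view built from MPI_TYPE_CREATE_HINDEXED (variable names, the file name 'particles.dat', and the offset calculation are illustrative, not the poster's actual code). The block length stays a default 32-bit INTEGER, but the byte displacements are INTEGER(KIND=MPI_ADDRESS_KIND), i.e. 64-bit, so the view can address a global vector larger than 2^31-1 elements:

program hindexed_view_sketch
  use mpi
  implicit none
  integer :: ierr, myrank, fh, filetype
  integer :: nlocal                                  ! local particle count, fits in 32 bits
  integer :: blocklens(1)
  integer(kind=MPI_ADDRESS_KIND) :: displs(1)        ! 64-bit byte displacement
  integer(kind=MPI_ADDRESS_KIND) :: my_start, nlocal_a
  integer(kind=MPI_OFFSET_KIND)  :: disp
  real, allocatable :: particles(:)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)

  nlocal = 1000 + myrank                             ! unequal per-rank counts are fine
  allocate(particles(nlocal))
  particles = real(myrank)

  ! 64-bit exclusive prefix sum of the local counts gives each rank its start
  ! index in the global vector, even when the total exceeds 2^31-1.
  nlocal_a = int(nlocal, MPI_ADDRESS_KIND)
  my_start = 0_MPI_ADDRESS_KIND
  call MPI_EXSCAN(nlocal_a, my_start, 1, MPI_AINT, MPI_SUM, MPI_COMM_WORLD, ierr)
  if (myrank == 0) my_start = 0_MPI_ADDRESS_KIND     ! EXSCAN leaves rank 0 undefined

  blocklens(1) = nlocal                              ! 32-bit block length
  displs(1)    = my_start * 4_MPI_ADDRESS_KIND       ! 64-bit byte offset (4-byte reals)

  call MPI_TYPE_CREATE_HINDEXED(1, blocklens, displs, MPI_REAL, filetype, ierr)
  call MPI_TYPE_COMMIT(filetype, ierr)

  call MPI_FILE_OPEN(MPI_COMM_WORLD, 'particles.dat', &
                     MPI_MODE_CREATE + MPI_MODE_WRONLY, MPI_INFO_NULL, fh, ierr)
  disp = 0_MPI_OFFSET_KIND
  call MPI_FILE_SET_VIEW(fh, disp, MPI_REAL, filetype, 'native', MPI_INFO_NULL, ierr)
  call MPI_FILE_WRITE_ALL(fh, particles, nlocal, MPI_REAL, MPI_STATUS_IGNORE, ierr)
  call MPI_FILE_CLOSE(fh, ierr)

  call MPI_TYPE_FREE(filetype, ierr)
  call MPI_FINALIZE(ierr)
end program hindexed_view_sketch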

Re: [OMPI users] parallel I/O on 64-bit indexed arrays

2011-06-07 Thread Jeff Squyres
On Jun 7, 2011, at 4:53 AM, Troels Haugboelle wrote: > In principle yes, but the problem is we have an unequal number of particles > on each node, so the length of each array is not guaranteed to be divisible > by 2, 4 or any other number. If I have understood the definition of > MPI_TYPE_CREAT…

Re: [OMPI users] parallel I/O on 64-bit indexed arrays

2011-06-07 Thread Troels Haugboelle
If I understand your question correctly, this is *exactly* one of the reasons that the MPI Forum has been arguing about the use of a new type, "MPI_Count", for certain parameters that can get very, very large. Yes, that would help, but unfortunately only in the future. Sidenote: I believe…

Re: [OMPI users] parallel I/O on 64-bit indexed arrays

2011-06-06 Thread Jeff Squyres
If I understand your question correctly, this is *exactly* one of the reasons that the MPI Forum has been arguing about the use of a new type, "MPI_Count", for certain parameters that can get very, very large. - Sidenote: I believe that a workaround for you is to create some new MPI datatyp…
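
The sidenote is cut off in the archive, but one common workaround of this kind (an assumption here, not necessarily what was actually proposed) is to bundle elements into a contiguous "chunk" datatype so that the 32-bit count argument counts chunks rather than individual elements. A minimal sketch:

program chunk_type_sketch
  use mpi
  implicit none
  integer, parameter :: i8 = selected_int_kind(18)
  integer, parameter :: chunk = 1024       ! elements per chunk (illustrative)
  integer(kind=i8)   :: nlocal             ! local element count, may exceed 2^31-1
  integer            :: ierr, chunktype, nchunks

  call MPI_INIT(ierr)
  ! One "chunk" stands for 1024 contiguous reals, so a 32-bit count of chunks
  ! can describe up to 1024 * (2^31-1) elements in a single MPI call.
  call MPI_TYPE_CONTIGUOUS(chunk, MPI_REAL, chunktype, ierr)
  call MPI_TYPE_COMMIT(chunktype, ierr)

  nlocal  = 3000000000_i8                  ! e.g. 3e9 particles on this rank (invented)
  nchunks = int(nlocal / chunk)            ! count passed to MPI_FILE_WRITE_ALL, MPI_SEND, ...
  ! Caveat raised in the follow-up reply above: this only describes all the data
  ! when nlocal is an exact multiple of the chunk size, which an unequal particle
  ! distribution does not guarantee.
  call MPI_TYPE_FREE(chunktype, ierr)
  call MPI_FINALIZE(ierr)
end program chunk_type_sketch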

[OMPI users] parallel I/O on 64-bit indexed arrays

2011-06-06 Thread Troels Haugboelle
Hello! The problem I face is not Open MPI specific, but I hope the MPI wizards on the list can help me nonetheless. I am running and developing a large-scale scientific code written in Fortran90. One type of object is global 1-D vectors, which contain data for particles in the appli…
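
The preview is truncated here, but the rest of the thread makes the constraint behind the question clear: MPI count and index arguments are default (32-bit) INTEGERs in the Fortran bindings, while the global particle vector can be much larger than 2^31-1 elements. A minimal arithmetic sketch with invented numbers:

program count_limit_sketch
  ! Shows why a single MPI call with a default INTEGER count cannot describe
  ! the whole global vector; the rank and particle counts are illustrative.
  implicit none
  integer, parameter :: i8 = selected_int_kind(18)
  integer(kind=i8) :: nglobal
  nglobal = 16384_i8 * 300000_i8           ! 16384 ranks * 3e5 particles ~ 4.9e9 elements
  print *, 'global elements      :', nglobal
  print *, 'max 32-bit MPI count :', huge(0)
  print *, 'fits in one MPI call :', nglobal <= int(huge(0), i8)
end program count_limit_sketch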