Depending on the datatype and its order in memory, the "Block,*" and "*,Block" distributions (which we used to call "slabs" in 3D) may be implemented by a simple scatter/gather call in MPI. The "Block,Block" distribution is a little more complex, but if you take advantage of MPI's derived datatypes, you may be able to reference an arbitrary 3D sub-space as a single data entity and then use gather/scatter with that.
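As a concrete illustration of that derived-datatype idea (not code from the thread), here is a minimal sketch of a (BLOCK,BLOCK,*) split using MPI_Type_create_subarray. The NX x NY x NZ extents, the PX x PY process grid, and the assumption that everything divides evenly are illustrative choices for the example:

/* Hypothetical sketch of a (BLOCK,BLOCK,*) decomposition using
 * MPI_Type_create_subarray.  Array sizes NX, NY, NZ and the PX x PY
 * process grid are illustrative; assumes size == PX*PY and that NX and
 * NY divide evenly.  Data is stored in C (row-major) order on the root. */
#include <mpi.h>
#include <stdlib.h>

#define NX 8
#define NY 8
#define NZ 4
#define PX 2
#define PY 2

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* assumed to equal PX*PY */

    const int bx = NX / PX, by = NY / PY;   /* block extents in x and y */
    double *local = malloc((size_t)bx * by * NZ * sizeof(double));

    if (rank == 0) {
        double *full = malloc((size_t)NX * NY * NZ * sizeof(double));
        for (int i = 0; i < NX * NY * NZ; i++) full[i] = (double)i;

        int sizes[3] = { NX, NY, NZ };
        for (int r = 0; r < size; r++) {
            int px = r / PY, py = r % PY;
            int subsizes[3] = { bx, by, NZ };
            int starts[3]   = { px * bx, py * by, 0 };

            if (r == 0) {
                /* Copy root's own block directly. */
                for (int i = 0; i < bx; i++)
                    for (int j = 0; j < by; j++)
                        for (int k = 0; k < NZ; k++)
                            local[(i * by + j) * NZ + k] =
                                full[((i + starts[0]) * NY + j + starts[1]) * NZ + k];
            } else {
                /* Describe rank r's 3D sub-block as a single datatype
                 * and send it in one call. */
                MPI_Datatype block;
                MPI_Type_create_subarray(3, sizes, subsizes, starts,
                                         MPI_ORDER_C, MPI_DOUBLE, &block);
                MPI_Type_commit(&block);
                MPI_Send(full, 1, block, r, 0, MPI_COMM_WORLD);
                MPI_Type_free(&block);
            }
        }
        free(full);
    } else {
        /* Each block arrives as a contiguous buffer of bx*by*NZ doubles. */
        MPI_Recv(local, bx * by * NZ, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    /* ... compute on local ... */

    free(local);
    MPI_Finalize();
    return 0;
}

A more scalable variant would resize the subarray types and hand them to MPI_Scatterv so the whole distribution happens in one collective call.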
I recommend that you look through some of the examples in "MPI - The Complete Reference (Vol. 1)" by Snir, et al. for the use of MPI_Gather() and MPI_Scatter(), as well as the section on user-defined datatypes. Section 5.2 of "Using MPI" by Gropp, Lusk and Skjellum has example code for an N-Body problem which you may find useful.

Hope this helps.

-bill

From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Alexandru Blidaru
Sent: Tuesday, July 20, 2010 10:54 AM
To: Open MPI Users
Subject: Re: [OMPI users] Partitioning problem set data

If there is an already existing implementation of the *,BLOCK or BLOCK,* methods that splits the array and sends the individual pieces to the proper nodes, can you point me to it please?

On Tue, Jul 20, 2010 at 9:52 AM, Alexandru Blidaru <alexs...@gmail.com> wrote:

Hi,

I have a 3D array which I need to split into n equal parts, so that each part would run on a different node. I found the picture in the attachment, from this website (https://computing.llnl.gov/tutorials/parallel_comp/#DesignPartitioning), on the different ways to partition data. I am interested in the block methods, as the cyclic methods wouldn't really work for me at all. Obviously the *,BLOCK and BLOCK,* methods would be really easy to implement for 3D arrays, assuming that the 2D picture is looking at the array from the top. My question is whether there are other, better ways to do it from a performance standpoint.

Thanks for your replies,
Alex
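For the *,BLOCK / BLOCK,* case asked about above: when the BLOCK dimension is the slowest-varying one of a row-major (C-ordered) array, each slab is contiguous in memory, so a single MPI_Scatter/MPI_Gather pair distributes and collects the pieces. A minimal sketch under those assumptions, with an NX x NY x NZ double array and NX divisible by the number of ranks (all names here are illustrative, not from the thread):

/* Minimal sketch of a (BLOCK,*,*) slab split -- illustrative names,
 * assumes NX is divisible by the number of ranks and that the array
 * is stored contiguously in C (row-major) order on the root. */
#include <mpi.h>
#include <stdlib.h>

#define NX 8
#define NY 4
#define NZ 4

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *full = NULL;
    if (rank == 0) {
        full = malloc((size_t)NX * NY * NZ * sizeof(double));
        for (int i = 0; i < NX * NY * NZ; i++) full[i] = (double)i;
    }

    int chunk = (NX / size) * NY * NZ;      /* contiguous doubles per rank */
    double *slab = malloc((size_t)chunk * sizeof(double));

    /* Because each slab is contiguous, one scatter call distributes it. */
    MPI_Scatter(full, chunk, MPI_DOUBLE,
                slab, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* ... compute on slab ... */

    /* Collect the pieces back in the same layout. */
    MPI_Gather(slab, chunk, MPI_DOUBLE,
               full, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    free(slab);
    if (rank == 0) free(full);
    MPI_Finalize();
    return 0;
}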