Hi Eugene

Your argument is correct.

I would cast the conclusion slightly differently, though:
The "pencil" decomposition scales better with the number of
processors than the "book" decomposition.

Your example shows this well for the extreme case
where the number of processes
is equal to the size of the array dimension that is being decomposed.
For, say, 4 processes, the difference is not so large.

Actually, "cell"/"chunk" XYZ 3D decompositions
probably scale even better than "books" and "pencils",
and one can extend the idea to N dimensions.
It is a general "surface-to-interior"/"area-to-volume"
ratio that governs this, I suppose.

In your example, "pencils"
wouldn't do well with 10,000 processes:
each 1x1x100 pencil would have a 4:1 ghost-to-real-cell ratio. :(
And 10,000 processes/processors is no longer out of reach today.
OTOH, 10,000 chunks of, say, 5x5x4 size, would
give a ghost-to-real-cell ratio of 13:10 (1.3).
Not stellar, but much better than the pencils' 4:1 (4.0) ratio.
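
In case the arithmetic is useful to anyone, here is a tiny sketch
(plain C, made-up helper name) of how these ghost-to-real ratios
come about.  It counts one ghost layer on each face of the
dimensions that are actually decomposed (face ghosts only,
no edge/corner cells, which is how I did the numbers above):

#include <stdio.h>

/* Ratio of ghost cells to real cells for an nx x ny x nz sub-domain,
   with one ghost layer on each face of each decomposed dimension. */
static double ghost_ratio(int nx, int ny, int nz,
                          int cut_x, int cut_y, int cut_z)
{
    double real  = (double)nx * ny * nz;
    double ghost = 0.0;
    if (cut_x) ghost += 2.0 * ny * nz;   /* two YZ faces */
    if (cut_y) ghost += 2.0 * nx * nz;   /* two XZ faces */
    if (cut_z) ghost += 2.0 * nx * ny;   /* two XY faces */
    return ghost / real;
}

int main(void)
{
    /* sub-domain shapes from the 100x100x100 example */
    printf("book   1x100x100 (100 procs)   : %.2f\n",
           ghost_ratio(1, 100, 100, 1, 0, 0));   /* 2.00 */
    printf("pencil 1x1x100   (10,000 procs): %.2f\n",
           ghost_ratio(1, 1, 100, 1, 1, 0));     /* 4.00 */
    printf("chunk  5x5x4     (10,000 procs): %.2f\n",
           ghost_ratio(5, 5, 4, 1, 1, 1));       /* 1.30 */
    return 0;
}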

However, the more dimensions that take part in the decomposition,
the more complex the code gets.
And here is where a simpler decomposition may become attractive,
despite some loss of efficiency/scaling.

The choice of decomposition may also depend a bit
on the algorithm employed,
along with the level of complexity one is willing to put into the code.
For instance, if you are solving the wave equation with the
pseudo-spectral method, the algorithm is somewhat simpler
if you use serial FFTs (rather than parallel ones)
and decompose the 3D domain in "books" rather than "pencils"
(there is a little sketch of the serial-FFT idea
at the end of this paragraph).
Atmospheric dynamics spectral codes are normally
decomposed in "books" (along latitudinal stripes),
for similar reasons, I would guess.
By contrast, for finite-difference solvers
the code complexity may be similar for "books" or "pencils",
so the "pencil" decomposition may be the obvious choice,
to get better scaling.
Ocean dynamics equations, at least in the codes I've seen,
normally use "pencil" decomposition, and are probably harder to
handle using 3D "chunk" decomposition (due to the asymmetry imposed by
gravity).
Hence pencil rules in the oceans,
but not necessarily in the atmosphere, where books are more popular.
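
Here is the serial-FFT sketch I mentioned above.
It is only an illustration, using FFTW's advanced interface;
the names and sizes are made up, and I assume a "book" (X)
decomposition where each process holds whole YZ planes of
complex data, so the Y and Z transforms are entirely local.
Only the X-direction transform would need a transpose or a
parallel FFT.

#include <fftw3.h>

/* Serial FFTs over one local NY x NZ plane (C row-major,
   complex data).  Z is the contiguous direction, Y is strided. */
void fft_yz_plane(fftw_complex *plane, int NY, int NZ)
{
    int n;

    /* NY transforms of length NZ, each contiguous, starts NZ apart */
    n = NZ;
    fftw_plan pz = fftw_plan_many_dft(1, &n, NY,
                                      plane, NULL, 1, NZ,
                                      plane, NULL, 1, NZ,
                                      FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(pz);
    fftw_destroy_plan(pz);

    /* NZ transforms of length NY, elements NZ apart, starts 1 apart */
    n = NY;
    fftw_plan py = fftw_plan_many_dft(1, &n, NZ,
                                      plane, NULL, NZ, 1,
                                      plane, NULL, NZ, 1,
                                      FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(py);
    fftw_destroy_plan(py);
}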

In any case, given that Derek's 3D C arrays can be naturally
decomposed in YZ "books" (X decomposition),
and that this decomposition may not require the use of
MPI derived types for halo/ghost/overlap exchange
(the contiguous YZ array sections could be passed
directly to the MPI functions),
this may be a reasonable choice for Derek.
(I assume that his algorithm is compatible with "book" decomposition,
but it may not be.)
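
Just to show what I mean by "no MPI types needed", here is a rough
sketch of the X ("book") halo exchange.  The array name, sizes, and
neighbor ranks (left/right, e.g. from MPI_Cart_shift) are all made up;
I'm assuming the array is declared as u[nx_local+2][NY][NZ], with X
as the slowest-varying index, so each YZ plane is one contiguous
block of NY*NZ doubles:

#include <mpi.h>

#define NY 100
#define NZ 100

/* Exchange the X-direction ghost planes of a "book"-decomposed array.
   Ghost planes live at i = 0 and i = nx_local+1; real planes are
   i = 1 .. nx_local.  Each YZ plane is contiguous, so plain
   MPI_DOUBLE buffers are enough -- no derived datatypes.          */
void exchange_x_halos(double u[][NY][NZ], int nx_local,
                      int left, int right, MPI_Comm comm)
{
    const int count = NY * NZ;

    /* send my last real plane to the right neighbor,
       receive my left ghost plane from the left neighbor */
    MPI_Sendrecv(&u[nx_local][0][0], count, MPI_DOUBLE, right, 0,
                 &u[0][0][0],        count, MPI_DOUBLE, left,  0,
                 comm, MPI_STATUS_IGNORE);

    /* send my first real plane to the left neighbor,
       receive my right ghost plane from the right neighbor */
    MPI_Sendrecv(&u[1][0][0],          count, MPI_DOUBLE, left,  1,
                 &u[nx_local+1][0][0], count, MPI_DOUBLE, right, 1,
                 comm, MPI_STATUS_IGNORE);
}

(At non-periodic boundaries, left/right would be MPI_PROC_NULL,
which MPI_Sendrecv handles gracefully.)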

For somebody who says he is new to MPI,
reducing code complexity may be important.
He should be aware of the price paid in scaling/performance,
of course.

Pencil decomposition is certainly not too hard to code in C,
but it will require a bit more effort than books, I suppose.
(Perhaps using MPI_Type_vector, with strides, etc.;
something like the sketch at the end of this paragraph.)
And if Derek has 10,000+ processors, a huge 3D domain size,
and is crazy about scaling/performance,
he may want to start thinking of decomposing the domain in chunks.  :)
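
Something along these lines is the kind of "extra effort" I had in
mind for the pencil case.  Again just a sketch, with made-up names
and sizes, assuming a local sub-array u[nxl+2][nyl+2][NZ] (X the
slowest-varying index), and that the X-direction exchange is done
with contiguous planes as in the "book" sketch above.  The
Y-direction ghost planes (XZ planes) are not contiguous, hence
MPI_Type_vector:

#include <mpi.h>

#define NZ 100

/* Exchange the Y-direction ghost planes of a "pencil"-decomposed
   array stored flat as u[(nxl+2)*(nyl+2)*NZ].  An XZ plane at
   fixed j is nxl+2 blocks of NZ doubles, separated by a stride
   of (nyl+2)*NZ doubles.                                        */
void exchange_y_halos(double *u, int nxl, int nyl,
                      int south, int north, MPI_Comm comm)
{
    MPI_Datatype xz_plane;
    MPI_Type_vector(nxl + 2,          /* number of blocks (all X planes)   */
                    NZ,               /* block length: one Z column        */
                    (nyl + 2) * NZ,   /* stride between blocks, in doubles */
                    MPI_DOUBLE, &xz_plane);
    MPI_Type_commit(&xz_plane);

    /* element u[0][j][0] sits at offset j*NZ in the flat array */

    /* send last real XZ plane (j = nyl) north, receive south ghost (j = 0) */
    MPI_Sendrecv(u + nyl * NZ,       1, xz_plane, north, 0,
                 u,                  1, xz_plane, south, 0,
                 comm, MPI_STATUS_IGNORE);

    /* send first real XZ plane (j = 1) south, receive north ghost (j = nyl+1) */
    MPI_Sendrecv(u + NZ,             1, xz_plane, south, 1,
                 u + (nyl + 1) * NZ, 1, xz_plane, north, 1,
                 comm, MPI_STATUS_IGNORE);

    MPI_Type_free(&xz_plane);
}

The X-direction exchange stays the same as in the book case, and a
"chunk" decomposition would add a third, Z-direction exchange, with
yet another derived type.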

I am not an MPI expert, just a user/programmer,
so I may be dead wrong in what I wrote.
Please, correct me if I am wrong.

Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------

Eugene Loh wrote:
Gus Correa wrote:

Also, I wonder why you want to decompose on both X and Y ("pencils"),
and not only X ("books"),
which may give you a smaller/simpler domain decomposition
and communication footprint.
Whether you can or cannot do this way depends on your
computation, which I don't know about.

I'm not sure I'm following the entire thread, but higher-dimensional decompositions, though more complicated, can improve the communication:computation ratio. For example, say you have a 100x100x100 grid to distribute over 100 processes. Even if you have only one ghost cell at each surface, a 1d decomposition would place a 1x100x100 "book" on each process with 2x100x100 ghost cells: a 2:1 ratio of ghost:real cells! That's a lot. In contrast, if you had 10x10x100 pencils, there would be (4*10+4)x100 ghosts. The ratio drops to 0.44. This is an extreme case, but it illustrates the point.

Indeed, maybe you could even drop to a 25x20x20 "box". Then the ghost:real ratio might be around 0.29 or so.