Gus Correa wrote:
> At each time step you exchange halo/ghost sections across
> neighbor subdomains, using MPI_Send/MPI_Recv,
> or MPI_Sendrecv.
> Even better if you use non-blocking calls:
> MPI_Isend/MPI_Irecv/MPI_Wait[all].
> Read about the advantages of non-blocking communication
> in the "MPI The Complete Reference, Vol 1" book that I suggested
> to you.

"Using MPI, 2nd Edition, by Gropp, et al, (the same people who wrote the
above book, I think), also has a good discussion of this.
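For concreteness, here is a rough sketch of the non-blocking halo exchange
in 1-D. The array layout, the one-cell halo width, and all the names here
are mine for illustration, not taken from either book:

/* Each rank owns u[1..n]; u[0] and u[n+1] are the ghost cells.
 * 'left' and 'right' are neighbor ranks, MPI_PROC_NULL at the
 * ends of the domain so the same code works at the boundaries. */
#include <mpi.h>

void exchange_halo(double *u, int n, int left, int right, MPI_Comm comm)
{
    MPI_Request req[4];

    /* post receives into the ghost cells first */
    MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  0, comm, &req[0]);
    MPI_Irecv(&u[n + 1], 1, MPI_DOUBLE, right, 1, comm, &req[1]);

    /* send the boundary cells to the neighbors */
    MPI_Isend(&u[1], 1, MPI_DOUBLE, left,  1, comm, &req[2]);
    MPI_Isend(&u[n], 1, MPI_DOUBLE, right, 0, comm, &req[3]);

    /* work on the interior (independent of the halo) could go
     * here, overlapping computation with communication */

    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
}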
> 
> You can do the bookkeeping of "which subdomain/process_rank is my
> left neighbor?" etc., yourself, if you create domain neighbor
> tables when the program initializes.
> Alternatively, and more elegantly, you can use the MPI
> Cartesian topology functions to take care of this for you.

Also described in "Using MPI", 2nd Ed.
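Again only as a sketch, letting the MPI Cartesian topology routines do the
neighbor bookkeeping might look like this (2-D, non-periodic grid; the
variable names are illustrative):

#include <mpi.h>

int main(int argc, char **argv)
{
    int dims[2] = {0, 0}, periods[2] = {0, 0};
    int nprocs, left, right, down, up;
    MPI_Comm cart;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* let MPI pick a balanced process grid */
    MPI_Dims_create(nprocs, 2, dims);

    /* non-periodic grid, allow rank reordering */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

    /* neighbor ranks in each direction; MPI_PROC_NULL at the edges */
    MPI_Cart_shift(cart, 0, 1, &left, &right);
    MPI_Cart_shift(cart, 1, 1, &down, &up);

    /* ... halo exchange with these neighbor ranks, as above ... */

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}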

-- 
Prentice
