[OMPI users] MPI Cartesian grid: accumulate a scalar value through the procs of a given axis of the grid

2018-05-02 Thread Pierre Gubernatis
Hello all...

I am using a *Cartesian grid* of processors that represents a spatial
domain (a cubic geometric domain split into several smaller cubes), and I
have communicators to address the procs, for example a communicator along
each of the 3 axes I, J, K, or along a plane IK, JK, IJ, etc.

*I need to accumulate a scalar value (SCAL) across the procs that belong to
a given axis* (let's say the K axis, defined by I = J = 0).

Precisely, the origin proc 0-0-0 has a given value of SCAL (say SCAL000).
I need to update the 'following' proc (0-0-1) by doing SCAL = SCAL +
SCAL000, and I need to *propagate* this update along the K axis. At the
end, the last proc of the axis should hold the total sum of SCAL over the
axis (and, of course, at a given rank k along the axis, SCAL should equal
the sum of SCAL over ranks 0, 1, ..., k).

Do you see a way to do this? I have tried many things (for example
MPI_SENDRECV while looping over the procs of the axis), but I get deadlocks
that show I am not handling this correctly.
Thank you in any case.
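
For context, a minimal sketch of the kind of setup described above (not
from the original post): a 3-D Cartesian communicator built with
MPI_Cart_create and a K-axis sub-communicator extracted with MPI_Cart_sub.
All variable names here are illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int dims[3] = {0, 0, 0};     /* let MPI choose the I, J, K extents */
    int periods[3] = {0, 0, 0};  /* non-periodic spatial domain */
    MPI_Dims_create(nprocs, 3, dims);

    /* 3-D Cartesian communicator over the whole grid */
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &cart);

    /* keep only the K direction: ranks sharing the same (I, J)
       coordinates end up together in comm_k */
    int remain[3] = {0, 0, 1};
    MPI_Comm comm_k;
    MPI_Cart_sub(cart, remain, &comm_k);

    int krank, ksize;
    MPI_Comm_rank(comm_k, &krank);
    MPI_Comm_size(comm_k, &ksize);
    printf("rank %d of %d along my K column\n", krank, ksize);

    MPI_Comm_free(&comm_k);
    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}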

Re: [OMPI users] MPI Cartesian grid: accumulate a scalar value through the procs of a given axis of the grid

2018-05-14 Thread Pierre Gubernatis
Thank you all for your answers (I was away until now).

Actually, my question wasn't well posed. I stated it more clearly in this
post, which also has the answer:

https://stackoverflow.com/questions/50130688/mpi-cartesian-grid-cumulate-a-scalar-value-through-the-procs-of-a-given-axis-o?noredirect=1#comment87286983_50130688

Thanks again.



2018-05-02 13:56 GMT+02:00 Peter Kjellström :

> On Wed, 2 May 2018 11:15:09 +0200
> Pierre Gubernatis  wrote:
>
> Why did you try SENDRECV? As far as I understand your description above,
> data only flows in one direction (along K).
>
> There is no MPI collective to support the kind of reduction you describe,
> but it should not be hard to do using normal SEND and RECV. Something
> like (simplified pseudo code):
>
> if (not_first_along_K)
>  MPI_RECV(SCAL_tmp, previous)
>  SCAL += SCAL_tmp
>
> if (not_last_along_K)
>  MPI_SEND(SCAL, next)
>
> /Peter K
>
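
Filled out with real MPI calls, that chain could look like the sketch
below. It assumes the 3-D Cartesian communicator is called cart, that
SCAL is a double, and that K is direction index 2; MPI_Cart_shift supplies
the previous/next neighbour and returns MPI_PROC_NULL at the two ends of
the axis.

#include <mpi.h>

/* Sketch only: each rank receives the partial sum from its predecessor
   along K, adds its own contribution, and forwards the result.  The last
   rank of the axis ends up with the total. */
double chain_sum_along_k(MPI_Comm cart, double scal)
{
    int prev, next;
    MPI_Cart_shift(cart, 2, 1, &prev, &next);

    if (prev != MPI_PROC_NULL) {          /* not the first proc along K */
        double scal_tmp;
        MPI_Recv(&scal_tmp, 1, MPI_DOUBLE, prev, 0, cart,
                 MPI_STATUS_IGNORE);
        scal += scal_tmp;                 /* running sum up to this K index */
    }
    if (next != MPI_PROC_NULL) {          /* not the last proc along K */
        MPI_Send(&scal, 1, MPI_DOUBLE, next, 0, cart);
    }
    return scal;                          /* sum of SCAL over K = 0..k here */
}

Since communication with MPI_PROC_NULL is a no-op, the two if tests could
even be dropped; they are kept here to mirror the pseudo code above.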

Re: [OMPI users] MPI Cartesian grid: accumulate a scalar value through the procs of a given axis of the grid

2018-05-17 Thread Pierre Gubernatis
Yes, you are right... I didn't know MPI_Scan and I finally jumped in, thanks.
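
For anyone finding this thread later, a minimal sketch of that MPI_Scan
approach, assuming comm_k is the K-axis sub-communicator obtained with
MPI_Cart_sub and SCAL is a double (names are illustrative):

#include <mpi.h>

/* Inclusive prefix sum over the K-axis communicator: after the call, the
   rank at K index k holds the sum of SCAL over ranks 0..k, and the last
   rank of the axis holds the total. */
double scan_along_k(MPI_Comm comm_k, double scal)
{
    double partial_sum;
    MPI_Scan(&scal, &partial_sum, 1, MPI_DOUBLE, MPI_SUM, comm_k);
    return partial_sum;
}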

On Mon, May 14, 2018 at 20:11, Nathan Hjelm  wrote:

> Still looks to me like MPI_Scan is what you want. You just need three
> additional communicators (one for each direction). With a recursive-doubling
> MPI_Scan implementation it is O(log n) in time, compared to O(n) for the
> serial chain.
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users