Hit send before I finished. If each proc along the axis needs the running
partial sum (i.e. proc j gets the sum of SCAL[i] for i = 0..j) then MPI_Scan
will do that; MPI_Exscan gives the exclusive variant (sum over i = 0..j-1).
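
A minimal sketch of that call, assuming axis_comm is a communicator
containing just the ranks of one K axis, ordered by K (variable names
are placeholders):

  double SCAL;      /* this rank's local contribution */
  double SCAL_sum;
  /* inclusive prefix sum: rank j receives SCAL[0] + ... + SCAL[j] */
  MPI_Scan(&SCAL, &SCAL_sum, 1, MPI_DOUBLE, MPI_SUM, axis_comm);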

> On May 2, 2018, at 6:29 AM, Nathan Hjelm <hje...@me.com> wrote:
> 
> MPI_Reduce would do this. I would use MPI_Comm_split to make an axis comm 
> then use reduce with the root being the last rank in the axis comm. 
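> 
> A rough sketch of that approach, assuming cart_comm is the 3-d Cartesian
> communicator, dims holds its dimensions, and K is dimension 2 (all names
> are placeholders, untested):
> 
>   int rank, coords[3], axis_size;
>   double SCAL, total;
>   MPI_Comm axis_comm;
>   MPI_Comm_rank(cart_comm, &rank);
>   MPI_Cart_coords(cart_comm, rank, 3, coords);
>   /* same (I,J) -> same color -> same axis comm; key = K keeps K order */
>   MPI_Comm_split(cart_comm, coords[0] * dims[1] + coords[1], coords[2],
>                  &axis_comm);
>   MPI_Comm_size(axis_comm, &axis_size);
>   /* the full sum lands on the last rank along the axis */
>   MPI_Reduce(&SCAL, &total, 1, MPI_DOUBLE, MPI_SUM, axis_size - 1,
>              axis_comm);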
> 
>> On May 2, 2018, at 6:11 AM, John Hearns via users <users@lists.open-mpi.org> 
>> wrote:
>> 
>> Also my inner voice is shouting that there must be an easy way to express 
>> this in Julia
>> https://discourse.julialang.org/t/apply-reduction-along-specific-axes/3301/16
>> 
>> OK, these are not the same stepwise cumulative operations that you want, but 
>> the idea is close.
>> 
>> 
>> ps. Note to self - stop listening to the voices.
>> 
>> 
>>> On 2 May 2018 at 14:08, John Hearns <hear...@googlemail.com> wrote:
>>> Peter, how large are your models, i.e. how many cells in each direction?
>>> Something inside of me is shouting that if the models are small enough then 
>>> MPI is not the way here.
>>> Assuming use of a Xeon processor, there should be some AVX instructions 
>>> which can do this.
>>> 
>>> This is rather out of date, but is it helpful?   
>>> https://www.quora.com/Is-there-an-SIMD-architecture-that-supports-horizontal-cumulative-sum-Prefix-sum-as-a-single-instruction
>>> 
>>> https://software.intel.com/sites/landingpage/IntrinsicsGuide/
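>>> 
>>> For what it's worth, the classic SSE trick for a horizontal cumulative
>>> sum of 4 floats is two shift-and-add steps (a sketch only; an AVX
>>> version is similar but needs an extra cross-lane step):
>>> 
>>>   #include <emmintrin.h>  /* SSE2 */
>>> 
>>>   /* inclusive prefix sum: {a,b,c,d} -> {a, a+b, a+b+c, a+b+c+d} */
>>>   static inline __m128 prefix_sum_ps(__m128 x)
>>>   {
>>>       /* shift up one lane (4 bytes) and accumulate */
>>>       x = _mm_add_ps(x, _mm_castsi128_ps(
>>>               _mm_slli_si128(_mm_castps_si128(x), 4)));
>>>       /* shift up two lanes (8 bytes) and accumulate */
>>>       x = _mm_add_ps(x, _mm_castsi128_ps(
>>>               _mm_slli_si128(_mm_castps_si128(x), 8)));
>>>       return x;
>>>   }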
>>> 
>>> 
>>>> On 2 May 2018 at 13:56, Peter Kjellström <c...@nsc.liu.se> wrote:
>>>> On Wed, 2 May 2018 11:15:09 +0200
>>>> Pierre Gubernatis <pierre.guberna...@gmail.com> wrote:
>>>> 
>>>> > Hello all...
>>>> > 
>>>> > I am using a *Cartesian grid* of processors which represents a spatial
>>>> > domain (a cubic geometrical domain split into several smaller
>>>> > cubes...), and I have communicators to address the procs, for
>>>> > example a comm along each of the 3 axes I, J, K, or along a plane
>>>> > IK, JK, IJ, etc.
>>>> > 
>>>> > *I need to cumulate a scalar value (SCAL) through the procs which
>>>> > belong to a given axis* (let's say the K axis, defined by I=J=0).
>>>> > 
>>>> > Precisely, the origin proc 0-0-0 has a given value for SCAL (say
>>>> > SCAL000). I need to update the 'following' proc (0-0-1) by doing SCAL
>>>> > = SCAL + SCAL000, and I need to *propagate* this update along the K
>>>> > axis. At the end, the last proc of the axis should have the total sum
>>>> > of SCAL over the axis (and of course, at a given rank k along the
>>>> > axis, the SCAL value = the sum of SCAL over ranks 0, 1, ..., k).
>>>> > 
>>>> > Please, do you see a way to do this? I have tried many things (with
>>>> > MPI_SENDRECV and by looping over the procs of the axis), but I get
>>>> > deadlocks that prove I don't handle this correctly...
>>>> > Thank you in any case.
>>>> 
>>>> Why did you try SENDRECV? As far as I understand your description
>>>> above, data only flows in one direction (along K)?
>>>> 
>>>> There is no MPI collective to support the kind of reduction you
>>>> describe, but it should not be hard to do using normal SEND and RECV.
>>>> Something like this (a simplified C sketch; SCAL is a double and comm
>>>> is the communicator):
>>>> 
>>>> if (not_first_along_K) {
>>>>     MPI_Recv(&SCAL_tmp, 1, MPI_DOUBLE, previous, 0, comm,
>>>>              MPI_STATUS_IGNORE);
>>>>     SCAL += SCAL_tmp;    /* fold in the running sum from below */
>>>> }
>>>> 
>>>> if (not_last_along_K) {
>>>>     MPI_Send(&SCAL, 1, MPI_DOUBLE, next, 0, comm);
>>>> }
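>>>> 
>>>> To get previous and next on the Cartesian communicator, MPI_Cart_shift
>>>> returns both neighbours in one call (a sketch; K assumed to be
>>>> dimension 2):
>>>> 
>>>>   int previous, next;
>>>>   /* ranks at either end get MPI_PROC_NULL, and send/recv to
>>>>      MPI_PROC_NULL is a no-op, so the guards above become optional */
>>>>   MPI_Cart_shift(comm, 2, 1, &previous, &next);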
>>>> 
>>>> /Peter K