Peter is correct. We need to find out what K is.
But we may never find out: https://en.wikipedia.org/wiki/The_Trial

It would be fun if we could get some real-world dimensions and some
real-world numbers here. Also, what range of values are these?

On 2 May 2018 at 15:21, Peter Kjellström <c...@nsc.liu.se> wrote:

> On Wed, 2 May 2018 08:39:30 -0400
> Charles Antonelli <c...@umich.edu> wrote:
>
> > This seems to be crying out for MPI_Reduce.
>
> No, the described reduction cannot be implemented with MPI_Reduce (note
> the need for partial sums along the axis).
>
> > Also in the previous solution given, I think you should do the
> > MPI_Sends first. Doing the MPI_Receives first forces serialization.
>
> It needs that ordering. The first thing that happens is that the first
> rank skips the recv and sends its SCAL to the 2nd process, which has
> just posted its recv.
>
> Each process needs to complete the recv to know what to send (unless
> you split it out into many more sends which is possible).
>
> What the best solution is depends on whether this part is performance
> critical and on how large K is.
>
> /Peter K
>
> > Regards,
> > Charles
> ...
> > > Something like (simplified pseudo code):
> > >
> > > if (not_first_along_K)
> > >     MPI_RECV(SCAL_tmp, previous)
> > >     SCAL += SCAL_tmp
> > >
> > > if (not_last_along_K)
> > >     MPI_SEND(SCAL, next)
> > >
> > > /Peter K
> > > _______________________________________________
> > > users mailing list
> > > users@lists.open-mpi.org
> > > https://lists.open-mpi.org/mailman/listinfo/users
> > >
>
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>
