MPI_Scan/MPI_Exscan are easy to forget but really useful.
-Nathan
Peter is correct. We need to find out what K is.
But we may never find out: https://en.wikipedia.org/wiki/The_Trial
It would be fun if we could get some real-world dimensions here and some
real-world numbers.
Also, what is the range of these numbers?
On Wed, 2 May 2018 08:39:30 -0400
Charles Antonelli wrote:
> This seems to be crying out for MPI_Reduce.
No, the described reduction cannot be implemented with MPI_Reduce (note
the need for partial sums along the axis).
On Wed, 02 May 2018 06:32:16 -0600
Nathan Hjelm wrote:
> Hit send before I finished. If each proc along the axis needs the
> partial sum (i.e. proc j gets the sum for i = 0 -> j-1 of SCAL[i])
> then MPI_Scan will do that.
I must confess that I had forgotten about MPI_Scan when I replied to
the OP.
This seems to be crying out for MPI_Reduce.
Also in the previous solution given, I think you should do the MPI_Sends
first. Doing the MPI_Receives first forces serialization.
Regards,
Charles
Hit send before I finished. If each proc along the axis needs the partial sum
(i.e. proc j gets the sum for i = 0 -> j-1 of SCAL[i]) then MPI_Scan will do that.
MPI_Reduce would do this. I would use MPI_Comm_split to make an axis comm then
use reduce with the root being the last rank in the axis comm.
Pierre, I may not be able to help you directly. But I had better stop
listening to the voices.
Mail me off list please.
This might do the trick using Julia
http://juliadb.org/latest/api/aggregation.html
Also my inner voice is shouting that there must be an easy way to express
this in Julia
https://discourse.julialang.org/t/apply-reduction-along-specific-axes/3301/16
OK, these are not the same stepwise cumulative operations that you want,
but the idea is close.
ps. Note to self - stop listening
Peter, how large are your models, i.e. how many cells in each direction?
Something inside of me is shouting that if the models are small enough then
MPI is not the way here.
Assuming use of a Xeon processor, there should be some AVX instructions
which can do this.
On Wed, 2 May 2018 11:15:09 +0200
Pierre Gubernatis wrote:
Hello all...
I am using a *cartesian grid* of processors which represents a spatial
domain (a cubic geometrical domain split into several smaller cubes...),
and I have communicators to address the procs, as for example a comm along
each of the 3 axes I, J, K, or along a plane IK, JK, IJ, etc.