> Re: MPI_Ssend(): this indeed fixes bug3; the process at rank 0 has
> reasonable memory usage and the execution proceeds normally.
>
> Re scalable: one second. I know well that bug3 is not scalable, and when
> to use MPI_Isend. The point is that programmers want to count on the MPI
> spec as written, as Ri [...]
On Mon Feb 4, 2008 14:23:13, Sacerdoti, Federico wrote:
> To keep this out of the weeds, I have attached a program called "bug3"
> that illustrates this problem on openmpi 1.2.5 using the openib BTL. In
> bug3, the process with rank 0 uses all available memory buffering
> "unexpected" messages from it [...]
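The bug3 attachment itself is not reproduced in this thread, so the following is only a reconstruction of the pattern it describes (message sizes, counts, and the tag are assumptions): every non-zero rank floods rank 0 with standard sends, and everything that arrives before rank 0 posts a matching receive lands in the "unexpected" message queue.

```c
/* Assumed reconstruction of the bug3 pattern (the real attachment is
 * not shown in the thread): N-to-1 flood of standard sends at rank 0. */
#include <mpi.h>
#include <string.h>

#define MSG_SIZE 1024    /* assumed message size   */
#define NUM_MSGS 100000  /* assumed count per sender */

int main(int argc, char **argv)
{
    int rank, size;
    char buf[MSG_SIZE];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    memset(buf, 0, sizeof buf);

    if (rank == 0) {
        /* Rank 0 eventually receives everything, but every message that
         * arrives before its MPI_Recv is matched must be buffered by the
         * library as "unexpected" -- with enough senders, that buffering
         * consumes all available memory. */
        for (long i = 0; i < (long)NUM_MSGS * (size - 1); i++)
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else {
        for (int i = 0; i < NUM_MSGS; i++)
            /* MPI_Send may return as soon as the library has buffered the
             * data. Swapping in MPI_Ssend -- the fix confirmed at the top
             * of the thread -- blocks each sender until rank 0 has actually
             * matched a receive, so the unexpected queue stays bounded. */
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

Running this under `mpirun` with many ranks against a slow rank 0 is what exhausts memory; the one-line change from MPI_Send to MPI_Ssend is the fix discussed above.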
> > I'm looking at a network where the number of endpoints is large enough that
> > everybody can't have a credit to start with, and the "offender" isn't any
> > single process, but rather a combination of processes doing N-to-1 where N
> > is sufficiently large. I can't just tell one process to s [...]
> > Not to muddy the point, but if there's enough ambiguity in the Standard
> > for people to ignore the progress rule, then I think (hope) there's enough
> > ambiguity for people to ignore the sender throttling issue too ;)
>
> I understand your position, and I used to agree until I was forced to [...]
>
> I am well aware of the scaling problems related to the standard
> send requirements in MPI. It is a very difficult issue.
>
> However, here is what the standard says: MPI 1.2, page 32 lines 29-37
>
> [...]
I'm well aware of those words. They are highlighted (in pink, no less) on
page 50.
>
> No, I assumed it based on comparisons between doing and not doing small
> msg rdma at various scales, from a paper Galen pointed out to me.
> http://www.cs.unm.edu/~treport/tr/05-10/Infiniband.pdf
>
Actually, I wasn't so much concerned with how you jumped to your conclusion.
I just wanted t [...]