On Mon, Feb 04, 2008 at 04:23:13PM -0500, Sacerdoti, Federico wrote:
> Bug3 is a test-case derived from a real, scalable application (desmond
> for molecular dynamics) that several experienced MPI developers have
> worked on. Note the MPI_Send calls of processes N>0 are *blocking*; the
> openmpi si
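
[For anyone who has not seen the test case, the pattern under discussion is roughly the one sketched below. This is only an illustration; the message counts, sleep, and names are invented and not taken from the real bug3 source.]

  /* Sketch of a bug3-like producer/consumer pattern: every rank > 0
   * floods rank 0 with small, eager-eligible blocking sends while
   * rank 0 drains them more slowly.  Illustrative only. */
  #include <mpi.h>
  #include <unistd.h>

  #define NMSGS 1000000

  int main(int argc, char **argv)
  {
      int rank, size, i, buf = 0;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      if (rank == 0) {
          /* slow consumer */
          long n, total = (long)NMSGS * (size - 1);
          for (n = 0; n < total; n++) {
              MPI_Recv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              if (n % 1000 == 0)
                  usleep(1000);            /* pretend to do work */
          }
      } else {
          /* fast producers: blocking sends that complete eagerly */
          for (i = 0; i < NMSGS; i++)
              MPI_Send(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
      }
      MPI_Finalize();
      return 0;
  }
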
Hi,
I've been working on MPI piggyback technique as a part of my PhD work.
Although MPI does not provide native support, there are several different
solutions to transmit piggyback data over every MPI communication. You may
find a brief overview in papers [1, 2]. This includes copying the origi
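
[One of the approaches surveyed there -- copying the original payload plus the piggyback data into a larger temporary buffer -- can be sketched as follows. The wrapper name PB_Send and the fixed extra int are made up for illustration, and the sketch handles contiguous datatypes only.]

  /* Send-side piggybacking by buffer copying (contiguous datatypes
   * only).  A matching PB_Recv would strip the extra int back off. */
  #include <mpi.h>
  #include <stdlib.h>
  #include <string.h>

  static int PB_Send(const void *buf, int count, MPI_Datatype dtype,
                     int dest, int tag, MPI_Comm comm, int piggyback)
  {
      int tsize, rc;
      char *tmp;

      MPI_Type_size(dtype, &tsize);
      tmp = malloc(sizeof(int) + (size_t)count * tsize);

      memcpy(tmp, &piggyback, sizeof(int));                  /* extra data */
      memcpy(tmp + sizeof(int), buf, (size_t)count * tsize); /* user data  */

      rc = MPI_Send(tmp, (int)sizeof(int) + count * tsize, MPI_BYTE,
                    dest, tag, comm);
      free(tmp);
      return rc;
  }
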
Hi Gleb
There is no misunderstanding of the MPI standard or the definition of
blocking in the bug3 example. Both bug 3 and the example I provided are
valid MPI.
As you say, blocking means the send buffer can be reused when the MPI_Send
returns. This is exactly what bug3 is counting on.
MPI is a r
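
[The guarantee being relied on is simply the one below -- a sketch, not code from bug3 or from Richard's example.]

  #include <mpi.h>

  /* Blocking semantics: once MPI_Send returns, the caller may reuse
   * the send buffer immediately, whether the library delivered the
   * message eagerly or is still buffering it internally. */
  void send_twice(int dest, MPI_Comm comm)
  {
      int buf = 42;
      MPI_Send(&buf, 1, MPI_INT, dest, 0, comm);
      buf = 43;                  /* legal: the buffer is ours again */
      MPI_Send(&buf, 1, MPI_INT, dest, 0, comm);
  }
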
On Tue, Feb 05, 2008 at 08:07:59AM -0500, Richard Treumann wrote:
> There is no misunderstanding of the MPI standard or the definition of
> blocking in the bug3 example. Both bug 3 and the example I provided are
> valid MPI.
>
> As you say, blocking means the send buffer can be reused when the MP
Oleg,
Interesting work. You mentioned late in your email that you believe
that adding support for piggybacking to the MPI standard would be the
best solution. As you may know, the MPI Forum has reconvened and there
is a working group for Fault Tolerance. This working group is
discussing a
Oleg,
Is there an implementation in Open MPI of your techniques ? Can we put
our greedy nasty pawns on it ?
Thanks for the link, Josh.
Aurelien
On 5 Feb 2008, at 08:39, Josh Hursey wrote:
Oleg,
Interesting work. You mentioned late in your email that you believe
that adding support for p
Wow this sparked a much more heated discussion than I was expecting. I
was just commenting that the behaviour the original author (Federico
Sacerdoti) mentioned would explain something I observed in one of my
early trials of OpenMPI. But anyway, because it seems that quite a few
people were interes
So with an Isend your program becomes valid MPI and a very nice
illustration of why the MPI standard cannot limit envelopes (or send/recv
descriptors) and why at some point the number of descriptors can blow the
limits. It also illustrates how the management of eager messages remains
workable. (Not
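
[The nonblocking variant being alluded to would look roughly like the sketch below -- not the actual example from earlier in the thread. Each outstanding request corresponds to a send descriptor the library must track until it is retired.]

  #include <mpi.h>
  #include <stdlib.h>

  /* With nonblocking sends the application holds the requests itself,
   * which makes explicit how many send descriptors are outstanding
   * before MPI_Waitall retires them. */
  void flood_isend(int dest, int nmsgs, MPI_Comm comm)
  {
      int i;
      int *bufs = malloc(nmsgs * sizeof(int));
      MPI_Request *reqs = malloc(nmsgs * sizeof(MPI_Request));

      for (i = 0; i < nmsgs; i++) {
          bufs[i] = i;
          MPI_Isend(&bufs[i], 1, MPI_INT, dest, 0, comm, &reqs[i]);
      }
      MPI_Waitall(nmsgs, reqs, MPI_STATUSES_IGNORE);

      free(reqs);
      free(bufs);
  }
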
Thank you Josh, that's interesting. I'll have a look.
--Oleg
On Feb 5, 2008 2:39 PM, Josh Hursey wrote:
> Oleg,
>
> Interesting work. You mentioned late in your email that you believe
> that adding support for piggybacking to the MPI standard would be the
> best solution. As you may know, the MP
Hi Jody,
Just to make sure I understand. Your desktop is plankton, and you want
to run a job on both plankton and nano, and have xterms show up on nano.
It looks like you are already doing this, but to make sure, the way I
would use xhost is:
plankton$ xhost +nano_00
plankton$ mpirun -np 4 -
Hi Tim
> Your desktop is plankton, and you want
> to run a job on both plankton and nano, and have xterms show up on nano.
Not on nano, but on plankton, but I think this was just a typo :)
> It looks like you are already doing this, but to make sure, the way I
> would use xhost is:
> plankton$ xh
Re: MPI_Ssend(). This indeed fixes bug3, the process at rank 0 has
reasonable memory usage and the execution proceeds normally.
Re scalable: One second. I know well bug3 is not scalable, and when to
use MPI_Isend. The point is programmers want to count on the MPI spec as
written, as Richard pointe
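
[For anyone following along, the change amounts to switching the producer loop from standard-mode to synchronous-mode sends, sketched below against the bug3-like pattern shown earlier in this thread, not the real source.]

  #include <mpi.h>

  /* Producer loop with synchronous sends: MPI_Ssend cannot complete
   * until the receiver has started the matching receive, so fast
   * senders are throttled to the consumer's pace instead of filling
   * eager buffers at rank 0. */
  void produce_ssend(int nmsgs, MPI_Comm comm)
  {
      int i, buf = 0;
      for (i = 0; i < nmsgs; i++)
          MPI_Ssend(&buf, 1, MPI_INT, 0 /* consumer rank */, 0, comm);
  }
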
Jody,
jody wrote:
Hi Tim
Your desktop is plankton, and you want
to run a job on both plankton and nano, and have xterms show up on nano.
Not on nano, but on plankton, but I think this was just a typo :)
Correct.
It looks like you are already doing this, but to make sure, the way I
would us
> Re: MPI_Ssend(). This indeed fixes bug3, the process at rank 0 has
> reasonable memory usage and the execution proceeds normally.
>
> Re scalable: One second. I know well bug3 is not scalable, and when to
> use MPI_Isend. The point is programmers want to count on the MPI spec as
> written, as Ri
Ron's comments are probably dead on for an application like bug3.
If bug3 is long running and libmpi is doing eager protocol buffer
management as I contend the standard requires then the producers will not
get far ahead of the consumer before they are forced to synchronous send
under the covers a