On Mon, Jun 8, 2009 at 11:07 PM, Lars Andersson wrote:
> I'd say that your own workaround here is to intersperse MPI_TEST calls
> periodically. This will trigger OMPI's pipelined protocol for large
> messages, and should allow partial bursts of progress while you're
> off doing your local computation.
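Something like the following is how I read that suggestion -- a rough
sketch only, where do_some_work() is a placeholder for one bounded
chunk of the local computation:

#include <mpi.h>

extern void do_some_work(void);   /* placeholder: one slice of local work */

/* Post the nonblocking receive, then keep calling into the MPI
   library between work slices so the pipelined protocol can make
   progress. */
void recv_while_working(void *buf, int count, int src, MPI_Comm comm)
{
    MPI_Request req;
    int done = 0;

    MPI_Irecv(buf, count, MPI_BYTE, src, 0, comm, &req);
    while (!done) {
        do_some_work();
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);  /* lets OMPI progress */
    }
    /* buf has been fully received at this point */
}

(The granularity of do_some_work() then determines how often the
progress engine gets to run.)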
>> I've been trying to get overlapping computation and data transfer to
>> work, without much success so far.
>
> If this is so important to you, why do you insist on using Ethernet
> and not a more HPC-oriented interconnect, which can make progress in
> the background?
We have a medium sized cluster of 2-8 core x86-64 machines connected
with ordinary 1Gbit ethernet.
Hi all,
I've been trying to get overlapping computation and data transfer to
work, without much success so far. What I'm trying to achieve is:
NODE 1:
* Post nonblocking send (30MB data)
NODE 2:
1) Post nonblocking receive
2) do local work, while data is being received
3) complete the receive, then use the data (see the sketch below)
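In code, the pattern I'm after looks roughly like this (a sketch; the
tag, the buffer handling and do_local_work() are placeholders):

#include <mpi.h>

extern void do_local_work(void);         /* placeholder for step 2 */

enum { MSG_BYTES = 30 * 1024 * 1024 };   /* the 30MB payload */

/* NODE 1: post the nonblocking send, do its own work, then complete it. */
void node1(char *buf, MPI_Comm comm)
{
    MPI_Request req;
    MPI_Isend(buf, MSG_BYTES, MPI_BYTE, 1 /* dest */, 0 /* tag */, comm, &req);
    /* ... node 1's own work ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}

/* NODE 2: steps 1-3 above. The hope is that the transfer proceeds
   during do_local_work(); so far the data only seems to move once
   we're inside MPI_Wait(). */
void node2(char *buf, MPI_Comm comm)
{
    MPI_Request req;
    MPI_Irecv(buf, MSG_BYTES, MPI_BYTE, 0 /* src */, 0 /* tag */, comm, &req); /* 1) */
    do_local_work();                                                           /* 2) */
    MPI_Wait(&req, MPI_STATUS_IGNORE);               /* 3) complete, then use buf */
}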
On Thu, Jun 4, 2009 at 2:54 PM, Lars Andersson wrote:
> Hi Gus,
>
> Thanks for the suggestion. I've been thinking along those lines, but
> it seems to have drawbacks. Consider the following MPI conversation:
>
> Time NODE 1 NODE 2
> 0
Is there a better way around this? Am I missing something?
/Lars
On Thu, Jun 4, 2009 at 2:34 PM, Gus wrote:
> Hi Lars
>
> I wonder if you could always use blocking message passing on the
> preliminary send/receive pair that transmits the message size/header,
> then use non-blocking calls for the actual data transfer.
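A sketch of that scheme as I understand it (function names and tags
are mine, error handling elided):

#include <mpi.h>
#include <stdlib.h>

enum { TAG_HDR = 0, TAG_DATA = 1 };

/* Sender: blocking send of the size header, then a nonblocking send
   of the payload itself. */
void send_msg(char *data, int nbytes, int dest, MPI_Comm comm, MPI_Request *req)
{
    MPI_Send(&nbytes, 1, MPI_INT, dest, TAG_HDR, comm);
    MPI_Isend(data, nbytes, MPI_BYTE, dest, TAG_DATA, comm, req);
}

/* Receiver: the blocking header exchange tells us how big a buffer
   to post; the payload then arrives nonblocking so we can do local
   work in the meantime. */
char *recv_msg(int src, MPI_Comm comm, MPI_Request *req, int *nbytes)
{
    MPI_Recv(nbytes, 1, MPI_INT, src, TAG_HDR, comm, MPI_STATUS_IGNORE);
    char *buf = malloc(*nbytes);
    MPI_Irecv(buf, *nbytes, MPI_BYTE, src, TAG_DATA, comm, req);
    return buf;  /* caller works, then MPI_Wait(req, ...) before using buf */
}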
We're using a cluster of 2-8 core x86-64 machines running Linux and
connected using ordinary 1Gbit ethernet.
Best regards,
Lars Andersson
Hi,
I'm in the process of moving our application from LAM/MPI to
OpenMPI, mainly because OpenMPI makes it easier for a user to run
multiple jobs (MPI universes) simultaneously. This is useful if a user
wants to run smaller experiments without disturbing a large experiment
running in the background.