Hi Gus,
Thanks for the suggestion. I've been thinking along those lines, but
it seems to have drawbacks. Consider the following MPI conversation:
Time   NODE 1           NODE 2
0      local work       local work
1      post n-b recv
On Thu, Jun 4, 2009 at 2:54 PM, Lars Andersson wrote:
> Hi Gus,
>
> Thanks for the suggestion. I've been thinking along those lines, but
> it seems to have drawbacks. Consider the following MPI conversation:
>
> Time   NODE 1           NODE 2
> 0      local work
On Thu, 2009-06-04 at 14:54 +1000, Lars Andersson wrote:
> Hi Gus,
>
> Thanks for the suggestion. I've been thinking along those lines, but
> it seems to have drawbacks. Consider the following MPI conversation:
>
> Time   NODE 1           NODE 2
> 0      local work
Hi,
Thanks for your reply. Yup, I am engaging in such research.
In that case, I think I will just stick to 1.2.8, and besides sending the
SIGTERM signal to kill the process, I will also kill the orted service when
the spawned processes have died or exited.
Just to find out more about the
> On Thu, 2009-06-04 at 14:54 +1000, Lars Andersson wrote:
>> Hi Gus,
>>
>> Thanks for the suggestion. I've been thinking along those lines, but
>> it seems to have drawbacks. Consider the following MPI conversation:
>>
>> Time   NODE 1           NODE 2
>> 0      local work       local work
Hi Jeff Squyres,
We have Dell powerconnect 2724 Gigabit switches to connect the nodes in our
cluster.
As you said, maybe the speed of the PCI bus is a bottleneck.
How can I check this in practice?
What is your suggestion for the problem?
Thank you!
Axida
From: Jef
Hi all,
I've been trying to get overlapping computation and data transfer to
work, without much success so far. What I'm trying to achieve is:
NODE 1:
* Post nonblocking send (30MB data)
NODE 2:
1) Post nonblocking receive
2) do local work, while data is being received
3) comple
Hi there,
I'm encountering several issues at runtime in the following environment:
- a multi-threaded (two threads) code (built with gcc -pthread) where:
. the main thread is the only one that does MPI stuff
. a control thread periodically records the current machine state
(doing malloc)
- OpenMPI-
On Jun 4, 2009, at 2:16 AM, Tee Wen Kai wrote:
Just to find out more about the consequences for exiting MPI
processes without calling MPI_Finalize, will it cause memory leak or
other fatal problem?
If you're exiting the process, you won't cause any kind of problems --
the OS will clean up
Tee Wen Kai -
You asked "Just to find out more about the consequences for exiting MPI
processes without calling MPI_Finalize, will it cause memory leak or other
fatal problem?"
Be aware that Jeff has offered you an OpenMPI implementation oriented
answer rather than an MPI standard oriented answe
Thank you, Mr Jeff Squyres.
I followed your suggestion and disabled vt.
OpenMPI-1.3.2 compiled successfully, and now it works like a charm.
Regards.
Wan Ruslan Yusoff
On Tue, Jun 2, 2009 at 12:55 AM, Jeff Squyres wrote:
> This looks like your compiler seg faulted. I think you should contact your
>
> Date: Thu, 4 Jun 2009 11:14:16 +1000
> From: Lars Andersson
> Subject: [OMPI users] Receiving MPI messages of unknown size
> To: us...@open-mpi.org
>
> When using blocking message passing, I have simply solved the problem
> by first sending a small, fixed size header containing the size of
> re
A few points in addition to what has already been said:
1. You can always post a receive for size N when the actual message is
<=N. You can use this fact to pre-post a receive with size N, where N
is large enough for the header and a medium-sized message. If you
have a short message, it'l
Dear all,
I still have problems with installing and using openmpi:
1°) In fact I just want to install openmpi on my machine (single i7 920)
to be able to develop parallel codes (using eclipse/photran/PTP) that I
will execute on a cluster later (using SGE batch queue system).
I therefore wonder wh
On Thu, 4 Jun 2009, Lars Andersson wrote:
I've been trying to get overlapping computation and data transfer to
work, without much success so far.
If this is so important to you, why do you insist on using Ethernet
and not a more HPC-oriented interconnect which can make progress in
the backgr
If it helps, note that Open MPI already includes hooks (and just added
some more) to support this area of research. Note that Open MPI does -
not- kill your job when a process dies or leaves without calling
MPI_Finalize. What it actually does is call an Error Manager (denoted
as "errmgr") in
>> I've been trying to get overlapping computation and data transfer to
>> work, without much success so far.
>
> If this is so important to you, why do you insist on using Ethernet
> and not a more HPC-oriented interconnect which can make progress in
> the background ?
We have a medium sized clus
Is there any compelling reason you're not using the wrappers
mpif77/mpif90?
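For reference, the wrapper compilers pull in Open MPI's include paths and link flags automatically, so there is usually no need to pass them by hand (source file names below are illustrative):

```shell
mpicc  -O2 -o hello  hello.c     # C
mpif90 -O2 -o hellof hello.f90   # Fortran 90
mpicc --showme                   # print the underlying compiler command line
```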
On Thu, 2009-06-04 at 18:01 +0200, DEVEL Michel wrote:
> Dear all,
>
> I still have problems with installing and using openmpi:
>
> 1°) In fact I just want to install openmpi on my machine (single i7 920)
> to be able t
On Jun 3, 2009, at 1:07 PM, DEVEL Michel wrote:
I did not check your mails before because I was busy trying the gcc/
gfortran way.
I have other problems:
- for static linking I am missing plenty of ibv_* routines. I saw on
the net that they should be in a libibverbs library, but I cannot
fi
On Jun 4, 2009, at 12:01 PM, DEVEL Michel wrote:
1°) In fact I just want to install openmpi on my machine (single i7
920)
to be able to develop parallel codes (using eclipse/photran/PTP)
that I
will execute on a cluster later (using SGE batch queue system).
I therefore wonder what kind of co