This component is not supposed to be included in a Cygwin build. The
name containing "windows" indicates it is for native Windows
compilation, not for Cygwin or SUA. Last time I checked, I managed to
compile everything statically. Unfortunately, I never tried to do a
dynamic build ... I'll
Things have not changed with Leopard. :-(
On Apr 23, 2008, at 6:26 PM, Alberto Giannetti wrote:
Note: I'm running Tiger (Darwin 8.11.1). Things might have changed
with Leopard.
On Apr 23, 2008, at 5:30 PM, Jeff Squyres wrote:
On Apr 23, 2008, at 3:01 PM, Alberto Giannetti wrote:
I would like to run one of my MPI processes on a single core on my
iMac Intel Core Duo system.
Hi,
New to Open MPI, but I have used MPI before.
I am trying to compile Open MPI under Cygwin on Windows XP. From what
I have read, this should work?
Initially I hit a problem with the 1.2.6 standard download, in that a
time-related header file was incorrect, and the mailing list pointed me
to th
No oversubscription. I did not recompile OMPI or install from RPM.
On Apr 23, 2008, at 3:49 PM, Danesh Daroui wrote:
Do you really mean that Open MPI uses a busy loop to handle incoming
calls? That seems incorrect, since spinning is a very bad and
inefficient technique for this purpose.
Note: I'm running Tiger (Darwin 8.11.1). Things might have changed
with Leopard.
On Apr 23, 2008, at 5:30 PM, Jeff Squyres wrote:
On Apr 23, 2008, at 3:01 PM, Alberto Giannetti wrote:
I would like to run one of my MPI processes on a single core on my
iMac Intel Core Duo system. I'm using release 1.2.4 on Darwin 8.11.1.
On Wed, Apr 23, 2008 at 11:38:41PM +0200, Ingo Josopait wrote:
> I can think of several advantages that using blocking or signals to
> reduce the cpu load would have:
>
> - Reduced energy consumption
Not necessarily. Any time the program ends up running longer, the
cluster is up and running (and
Contributions are always welcome. :-)
http://www.open-mpi.org/community/contribute/
To be less glib: Open MPI represents the union of the interests of its
members. So far, we've *talked* internally about adding a spin-then-
block mechanism, but there's a non-trivial amount of work to ma
I can think of several advantages that using blocking or signals to
reduce the cpu load would have:
- Reduced energy consumption
- Running additional background programs could be done far more efficiently
- It would be much simpler to examine the load balance.
It may depend on the type of program
On Apr 23, 2008, at 3:01 PM, Alberto Giannetti wrote:
I would like to run one of my MPI processes on a single core on my
iMac Intel Core Duo system. I'm using release 1.2.4 on Darwin 8.11.1.
It looks like processor affinity is not supported for this kind of
configuration:
I'm afraid that OS X
Josh Hursey wrote:
On Apr 23, 2008, at 4:04 PM, Sharon Brunett wrote:
Hello,
I'm using openmpi-1.3a1r18241 on a 2-node configuration and having
trouble with ompi-restart. I can successfully ompi-checkpoint
and ompi-restart a 1-way MPI code.
When I try a 2-way job running across 2 nodes, I get
On Apr 23, 2008, at 3:49 PM, Danesh Daroui wrote:
Do you really mean that Open MPI uses a busy loop to handle incoming
calls? That seems incorrect, since spinning is a very bad and
inefficient technique for this purpose.
It depends on what you're optimizing for. :-) We're optimizi
On Apr 23, 2008, at 4:04 PM, Sharon Brunett wrote:
Hello,
I'm using openmpi-1.3a1r18241 on a 2-node configuration and having
trouble with ompi-restart. I can successfully ompi-checkpoint
and ompi-restart a 1-way MPI code.
When I try a 2-way job running across 2 nodes, I get
bash-2.05b$ ompi-restart -verbose ompi_global_snapshot_926.ckpt
Hello,
I'm using openmpi-1.3a1r18241 on a 2-node configuration and having
trouble with ompi-restart. I can successfully ompi-checkpoint and
ompi-restart a 1-way MPI code.
When I try a 2-way job running across 2 nodes, I get
bash-2.05b$ ompi-restart -verbose ompi_global_snapshot_926.ckpt
[
Do you really mean that Open MPI uses a busy loop to handle incoming
calls? That seems incorrect, since spinning is a very bad and
inefficient technique for this purpose. Why don't you use blocking
and/or signals instead of that? I think the priority of this task is
very high because p
I would like to run one of my MPI processes on a single core on my
iMac Intel Core Duo system. I'm using release 1.2.4 on Darwin 8.11.1.
It looks like processor affinity is not supported for this kind of
configuration:
$ ompi_info|grep affinity
MCA maffinity: first_use (MCA v1.
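For context: a sketch (not Open MPI's actual implementation) of the
kind of OS-level binding call that the paffinity framework relies on.
On Linux this is sched_setaffinity(); Darwin does not offer an
equivalent hard-binding API, which is presumably why no paffinity
component is available there. Core 0 below is an arbitrary example.

#define _GNU_SOURCE
#include <sched.h>   /* sched_setaffinity, cpu_set_t -- Linux only */
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                    /* bind to core 0 (example) */

    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");      /* fails where unsupported */
        return 1;
    }
    printf("bound to core 0\n");
    return 0;
}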
OMPI doesn't use SYSV shared memory; it uses mmapped files.
ompi_info will tell you all about the components installed. If you
see a BTL component named "sm", then shared memory support is
installed. I do not believe that we conditionally install sm on Linux
or OS X systems -- it should al
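A small illustration of why "ipcs -m" comes up empty: ipcs only lists
System V segments created with shmget(), while a file-backed
mmap(MAP_SHARED) region never appears there. The sketch below is only
that illustration -- the file name is made up, and Open MPI manages
its own backing files under its session directory.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 4096;
    int fd = open("/tmp/example_backing_file", O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, len) != 0) {
        perror("open/ftruncate");
        return 1;
    }

    char *seg = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (seg == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Any process that maps the same file sees this data, yet nothing
     * is registered with the System V IPC subsystem ("ipcs -m"). */
    strcpy(seg, "shared via a plain file, not a SYSV segment");

    munmap(seg, len);
    close(fd);
    return 0;
}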
I am running the test program on Darwin 8.11.1, 1.83 GHz Intel dual
core. My Open MPI install is 1.2.4.
I can't see any allocated shared memory segment on my system
(ipcs -m), although the receiver opens a couple of TCP sockets in
listening mode. It looks like my implementation does not use shared
memory.
Please see another ongoing thread on this list about this exact topic:
http://www.open-mpi.org/community/lists/users/2008/04/5457.php
It unfortunately has a subject of "(no subject)", so it's not obvious
that this is what the thread is about.
On Apr 23, 2008, at 12:14 PM, Ingo Josopait wrote:
Because on-node communication typically uses shared memory, we
currently have to poll. Additionally, when using mixed on/off-node
communication, we have to alternate between polling shared memory and
polling the network.
Additionally, we actively poll because it's the best way to lower
Thanks Torje. I wonder what the benefit is of looping on the incoming
message-queue socket rather than using system I/O signals, like
read() or select().
On Apr 23, 2008, at 12:10 PM, Torje Henriksen wrote:
Hi Alberto,
The blocked processes are in fact spin-waiting. While they don't have
anything better to do (waiting for that message), they will check
their incoming message-queues in a loop.
I noticed that the CPU usage of an MPI program is always at 100
percent, even if the tasks are doing nothing but waiting for new data
to arrive. Is there an option to change this behavior, so that the
tasks sleep until new data arrives?
Why is this the default behavior, anyway? Is it really so costly
Hi Alberto,
The blocked processes are in fact spin-waiting. While they don't have
anything better to do (waiting for that message), they will check
their incoming message-queues in a loop.
So the MPI_Recv() operation is blocking, but that doesn't mean that
the processes are blocked by the OS.
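For anyone who wants to trade a little latency for an idle CPU today,
a common application-level workaround (this is not Open MPI's internal
mechanism) is to post a nonblocking receive and sleep between polls; a
minimal sketch, with the 1 ms interval being an arbitrary choice, is
below. Open MPI's mpi_yield_when_idle MCA parameter is related but
different: it makes the progress loop yield the processor, which helps
on oversubscribed nodes, but it does not turn the wait into a true
block.

#include <mpi.h>
#include <time.h>

/* Receive like MPI_Recv, but give the CPU back between polls. */
void lazy_recv(void *buf, int count, MPI_Datatype type, int src,
               int tag, MPI_Comm comm, MPI_Status *status)
{
    MPI_Request req;
    int done = 0;
    struct timespec nap = { 0, 1000000 };   /* 1 ms between polls */

    MPI_Irecv(buf, count, type, src, tag, comm, &req);
    while (!done) {
        MPI_Test(&req, &done, status);      /* drives progress, returns */
        if (!done) {
            nanosleep(&nap, NULL);          /* sleep instead of spinning */
        }
    }
}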
I have a simple MPI program that sends data to processor rank 0. The
communication works well, but when I run the program on more than 2
processors (-np 4) the extra receivers waiting for data run at >90%
CPU load. I understand MPI_Recv() is a blocking operation, but why
does it consume so much CPU?
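To make the setup above concrete, here is a hypothetical
reconstruction (not the original poster's code) of a program with that
shape: rank 1 sends one message to rank 0, and any extra ranks sit in
MPI_Recv with no matching sender. Run with "mpirun -np 4 ./a.out" and
top shows the waiting ranks near 100% CPU, because the blocking
receive is implemented as a spin-wait.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 42;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        /* Rank 0 gets its message quickly; with -np 4, ranks 2 and 3
         * wait here forever, spinning at full CPU. */
        MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                 MPI_COMM_WORLD, &status);
        printf("rank %d received %d\n", rank, value);
    }

    MPI_Finalize();
    return 0;
}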