Re: [OMPI users] Problem compiling open MPI on cygwin on windows

2008-04-23 Thread George Bosilca
This component is not supposed to be included in a Cygwin build. The name containing "windows" indicates it is for native Windows compilation, not for Cygwin or SUA. Last time I checked, I managed to compile everything statically. Unfortunately, I never tried to do a dynamic build ... I'll

Re: [OMPI users] Processor affinitiy

2008-04-23 Thread Jeff Squyres
Things have not changed with Leopard. :-( On Apr 23, 2008, at 6:26 PM, Alberto Giannetti wrote: Note: I'm running Tiger (Darwin 8.11.1). Things might have changed with Leopard. On Apr 23, 2008, at 5:30 PM, Jeff Squyres wrote: On Apr 23, 2008, at 3:01 PM, Alberto Giannetti wrote: I would li

[OMPI users] Problem compiling open MPI on cygwin on windows

2008-04-23 Thread Michael
Hi, I'm new to Open MPI, but have used MPI before. I am trying to compile Open MPI on Cygwin on Windows XP. From what I have read, this should work? Initially I hit a problem with the 1.2.6 standard download in that a time-related header file was incorrect, and the mailing list pointed me to th

Re: [OMPI users] (no subject)

2008-04-23 Thread Alberto Giannetti
No oversubscription. I did not recompile OMPI or install from RPM. On Apr 23, 2008, at 3:49 PM, Danesh Daroui wrote: Do you really mean that Open MPI uses a busy loop in order to handle incoming calls? That seems incorrect, since spinning is a very bad and inefficient technique for this pur

Re: [OMPI users] Processor affinitiy

2008-04-23 Thread Alberto Giannetti
Note: I'm running Tiger (Darwin 8.11.1). Things might have changed with Leopard. On Apr 23, 2008, at 5:30 PM, Jeff Squyres wrote: On Apr 23, 2008, at 3:01 PM, Alberto Giannetti wrote: I would like to bind one of my MPI processes to a single core on my iMac Intel Core Duo system. I'm using re

[OMPI users] Busy waiting [was Re: (no subject)]

2008-04-23 Thread Barry Rountree
On Wed, Apr 23, 2008 at 11:38:41PM +0200, Ingo Josopait wrote: > I can think of several advantages that using blocking or signals to > reduce the CPU load would have: > > - Reduced energy consumption Not necessarily. Any time the program ends up running longer, the cluster is up and running (and

Re: [OMPI users] (no subject)

2008-04-23 Thread Jeff Squyres
Contributions are always welcome. :-) http://www.open-mpi.org/community/contribute/ To be less glib: Open MPI represents the union of the interests of its members. So far, we've *talked* internally about adding a spin-then-block mechanism, but there's a non-trivial amount of work to ma
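
A minimal sketch of the spin-then-block idea mentioned above. This is a user-level illustration built only from standard MPI and POSIX calls, not Open MPI's internal progress engine; the spin count and sleep interval are arbitrary placeholders:

    #include <mpi.h>
    #include <time.h>

    /* Spin on MPI_Iprobe for a bounded number of iterations; if nothing
     * arrives, keep probing but sleep ~1 ms between attempts. This trades
     * a little wakeup latency for a much lower CPU load while idle. */
    static void recv_spin_then_yield(void *buf, int count, MPI_Datatype type,
                                     int src, int tag, MPI_Comm comm)
    {
        int flag = 0;
        for (long spins = 0; spins < 1000000 && !flag; spins++)
            MPI_Iprobe(src, tag, comm, &flag, MPI_STATUS_IGNORE);
        while (!flag) {
            struct timespec ts = { 0, 1000000 }; /* 1 ms */
            nanosleep(&ts, NULL);
            MPI_Iprobe(src, tag, comm, &flag, MPI_STATUS_IGNORE);
        }
        MPI_Recv(buf, count, type, src, tag, comm, MPI_STATUS_IGNORE);
    }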

Re: [OMPI users] (no subject)

2008-04-23 Thread Ingo Josopait
I can think of several advantages that using blocking or signals to reduce the CPU load would have: - Reduced energy consumption - Running additional background programs could be done far more efficiently - It would be much simpler to examine the load balance. It may depend on the type of program

Re: [OMPI users] Processor affinitiy

2008-04-23 Thread Jeff Squyres
On Apr 23, 2008, at 3:01 PM, Alberto Giannetti wrote: I would like to bind one of my MPI processes to a single core on my iMac Intel Core Duo system. I'm using release 1.2.4 on Darwin 8.11.1. It looks like processor affinity is not supported for this kind of configuration: I'm afraid that OS X

Re: [OMPI users] openmpi-1.3a1r18241 ompi-restart issue

2008-04-23 Thread Sharon Brunett
Josh Hursey wrote: On Apr 23, 2008, at 4:04 PM, Sharon Brunett wrote: Hello, I'm using openmpi-1.3a1r18241 on a 2-node configuration and having trouble with ompi-restart. I can successfully ompi-checkpoint and ompi-restart a 1-way MPI code. When I try a 2-way job running across 2 nod

Re: [OMPI users] (no subject)

2008-04-23 Thread Jeff Squyres
On Apr 23, 2008, at 3:49 PM, Danesh Daroui wrote: Do you really mean that Open MPI uses a busy loop in order to handle incoming calls? That seems incorrect, since spinning is a very bad and inefficient technique for this purpose. It depends on what you're optimizing for. :-) We're optimizi

Re: [OMPI users] openmpi-1.3a1r18241 ompi-restart issue

2008-04-23 Thread Josh Hursey
On Apr 23, 2008, at 4:04 PM, Sharon Brunett wrote: Hello, I'm using openmpi-1.3a1r18241 on a 2-node configuration and having trouble with ompi-restart. I can successfully ompi-checkpoint and ompi-restart a 1-way MPI code. When I try a 2-way job running across 2 nodes, I get bash-2.0

[OMPI users] openmpi-1.3a1r18241 ompi-restart issue

2008-04-23 Thread Sharon Brunett
Hello, I'm using openmpi-1.3a1r18241 on a 2-node configuration and having trouble with ompi-restart. I can successfully ompi-checkpoint and ompi-restart a 1-way MPI code. When I try a 2-way job running across 2 nodes, I get bash-2.05b$ ompi-restart -verbose ompi_global_snapshot_926.ckpt [

Re: [OMPI users] (no subject)

2008-04-23 Thread Danesh Daroui
Do you really mean that Open MPI uses a busy loop in order to handle incoming calls? That seems incorrect, since spinning is a very bad and inefficient technique for this purpose. Why don't you use blocking and/or signals instead? I think the priority of this task is very high, because p

[OMPI users] Processor affinitiy

2008-04-23 Thread Alberto Giannetti
I would like to bind one of my MPI processes to a single core on my iMac Intel Core Duo system. I'm using release 1.2.4 on Darwin 8.11.1. It looks like processor affinity is not supported for this kind of configuration: $ ompi_info|grep affinity MCA maffinity: first_use (MCA v1.
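
For contrast, on Linux a process can pin itself with sched_setaffinity(), the kind of kernel interface Open MPI's paffinity components sit on top of; Darwin 8.x exposes no equivalent, which is why no paffinity component appears in the ompi_info output above. A Linux-only sketch, with a made-up rank-to-core mapping:

    #define _GNU_SOURCE
    #include <sched.h>

    /* Linux-only: bind the calling process to a single core chosen from
     * its MPI rank. The modulo mapping is illustrative, not a policy. */
    static int bind_self_to_core(int rank, int ncores)
    {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(rank % ncores, &mask);
        return sched_setaffinity(0, sizeof(mask), &mask); /* 0 = self */
    }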

Re: [OMPI users] (no subject)

2008-04-23 Thread Jeff Squyres
OMPI doesn't use SYSV shared memory; it uses mmapped files. ompi_info will tell you all about the components installed. If you see a BTL component named "sm", then shared memory support is installed. I do not believe that we conditionally install sm on Linux or OS X systems -- it should al
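
A rough sketch of the mmapped-file style of shared memory described above, and of why ipcs -m shows nothing for it: a file-backed mapping never registers with the SysV IPC tables. The path and length here are placeholders:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Processes that mmap() the same file with MAP_SHARED share those
     * pages, but nothing is entered in the SysV segment table, so
     * ipcs -m stays empty. Path and length are placeholders. */
    void *map_shared_file(const char *path, size_t len)
    {
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0) return NULL;
        if (ftruncate(fd, (off_t)len) != 0) { close(fd); return NULL; }
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd); /* the mapping remains valid after close */
        return (p == MAP_FAILED) ? NULL : p;
    }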

Re: [OMPI users] (no subject)

2008-04-23 Thread Alberto Giannetti
I am running the test program on Darwin 8.11.1, 1.83 GHz Intel dual core. My Open MPI install is 1.2.4. I can't see any allocated shared memory segments on my system (ipcs -m), although the receiver opens a couple of TCP sockets in listening mode. It looks like my implementation does not use s

Re: [OMPI users] idle calls?

2008-04-23 Thread Jeff Squyres
Please see another ongoing thread on this list about this exact topic: http://www.open-mpi.org/community/lists/users/2008/04/5457.php It unfortunately has a subject of "(no subject)", so it's not obvious that this is what the thread is about. On Apr 23, 2008, at 12:14 PM, Ingo Josopait

Re: [OMPI users] (no subject)

2008-04-23 Thread Jeff Squyres
Because on-node communication typically uses shared memory, we currently have to poll. Additionally, when using mixed on/off-node communication, we have to alternate between polling shared memory and polling the network. Additionally, we actively poll because it's the best way to lower
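
A conceptual sketch of that alternation, not Open MPI's actual code; the two poll functions are hypothetical stand-ins for the real transport checks:

    /* Hypothetical stand-ins: a real implementation would check mmapped
     * on-node queues and network sockets or completion queues here. */
    static int poll_shmem_queue(void) { return 0; }
    static int poll_network(void)     { return 0; }

    /* One pass of a progress loop. Blocking in a kernel call on the
     * network would starve the shared-memory path (and vice versa), so
     * the caller spins on this until some transport reports an event,
     * which is why an idle rank sits at 100% CPU. */
    static int progress_once(void)
    {
        return poll_shmem_queue() + poll_network();
    }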

Re: [OMPI users] (no subject)

2008-04-23 Thread Alberto Giannetti
Thanks Torje. I wonder what the benefit is of looping on the incoming message-queue socket rather than using blocking system I/O calls, like read() or select(). On Apr 23, 2008, at 12:10 PM, Torje Henriksen wrote: Hi Alberto, The blocked processes are in fact spin-waiting. While they don't have an

[OMPI users] idle calls?

2008-04-23 Thread Ingo Josopait
I noticed that the CPU usage of an MPI program is always at 100 percent, even if the tasks are doing nothing but waiting for new data to arrive. Is there an option to change this behavior, so that the tasks sleep until new data arrive? Why is this the default behavior, anyway? Is it really so costly

Re: [OMPI users] (no subject)

2008-04-23 Thread Torje Henriksen
Hi Alberto, The blocked processes are in fact spin-waiting. While they don't have anything better to do (waiting for that message), they will check their incoming message queues in a loop. So the MPI_Recv() operation is blocking, but that doesn't mean that the processes are blocked by the O

[OMPI users] (no subject)

2008-04-23 Thread Alberto Giannetti
I have a simple MPI program that sends data to rank 0. The communication works well, but when I run the program on more than 2 processes (-np 4), the extra receivers waiting for data run at > 90% CPU load. I understand MPI_Recv() is a blocking operation, but why does it consume so mu
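
A minimal reconstruction of the scenario described (not the original program): rank 1 sends one message to rank 0; with -np 4, ranks 2 and 3 block in MPI_Recv with nothing inbound, and the polling shows up as near-100% CPU in top until the job is killed by hand:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, data = 42;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 1) {
            MPI_Send(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else {
            /* Rank 0 receives and returns; ranks 2 and 3 wait forever,
             * spinning inside MPI_Recv. Watch them in top(1). */
            MPI_Recv(&data, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank %d received %d\n", rank, data);
        }
        MPI_Finalize();
        return 0;
    }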