On Dec 4, 2009, at 7:46 AM, Nicolas Bock wrote:

> Hello list,
> 
> when I run the attached example, which spawns a "slave" process with 
> MPI_Comm_spawn(), I see the following:
> 
> nbock    19911  0.0  0.0  53980  2288 pts/0    S+   07:42   0:00 
> /usr/local/openmpi-1.3.4-gcc-4.4.2/bin/mpirun -np 3 ./master
> nbock    19912 92.1  0.0 158964  3868 pts/0    R+   07:42   0:23 ./master
> nbock    19913  0.0  0.0 158960  3812 pts/0    S+   07:42   0:00 ./master
> nbock    19914  0.0  0.0 158960  3800 pts/0    S+   07:42   0:00 ./master
> nbock    19929 91.1  0.0 158964  3896 pts/0    R+   07:42   0:20 ./slave arg1 
> arg2
> nbock    19930 95.8  0.0 158964  3900 pts/0    R+   07:42   0:22 ./slave arg1 
> arg2
> nbock    19931 94.7  0.0 158964  3896 pts/0    R+   07:42   0:21 ./slave arg1 
> arg2
> 
> The third column is the CPU usage according to top. I notice 3 master 
> processes, which I attribute to the fact that MPI_Comm_spawn really fork()s 
> and then spawns, but that's my uneducated guess.

Ummm....if you look at your command line

/usr/local/openmpi-1.3.4-gcc-4.4.2/bin/mpirun -np 3 ./master

you will see that you asked for 3 copies of ./master to be run

:-)

> What I don't understand is why PID 19912 is using any CPU resources at all. 
> It's supposed to be waiting at the MPI_Barrier() for the slaves to finish. 
> What is PID 19912 doing?

It is polling at the barrier. By default this is done aggressively for 
performance, so a rank blocked in MPI_Barrier() still spins on the CPU. You 
can tell it to be less aggressive via the yield_when_idle MCA parameter.
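For example, a minimal sketch of setting that parameter on the mpirun command line (using the same ./master invocation as in the ps output above):

```shell
# Ask Open MPI to yield the processor while busy-waiting (e.g. in a barrier),
# trading some latency for lower CPU usage of idle ranks:
mpirun --mca mpi_yield_when_idle 1 -np 3 ./master
```

The same parameter can also be set through the environment before launching, e.g. `export OMPI_MCA_mpi_yield_when_idle=1`.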

> 
> Some more information:
> 
> $ uname -a
> Linux mujo 2.6.31-gentoo-r6 #2 SMP PREEMPT Fri Dec 4 07:08:07 MST 2009 x86_64 
> Intel(R) Core(TM)2 Quad CPU Q8200 @ 2.33GHz GenuineIntel GNU/Linux
> 
> openmpi version 1.3.4
> gcc version 4.4.2
> 
> nick
> 
> <master.c><slave.c>_______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
