Hi Sean

> [Sean] I'm working through the strace output to follow the progression on the
> head node. It looks like mpirun consults '/bpfs/self' and determines that the
> request is to be run on the local machine so it fork/execs 'orted' which then
> runs 'hostname'. 'mpirun' didn't consult '/bpfs' or utilize 'rsh' after the
> determination to run on the local machine was made. When the 'hostname'
> command completes, 'orted' receives the SIGCHLD signal, performs some work and
> then both 'mpirun' and 'orted' go into what appears to be a poll() waiting for
> events.

This is the core of the problem - I confess to being blown away that mpirun
is fork/exec'ing that local orted. I will have to go through the code and
try to figure that one out - we have never seen that behavior. There should
be no way at all for that to happen.
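
For anyone following along with the trace, what Sean describes is just the
generic POSIX fork/exec + SIGCHLD + poll() launch pattern - roughly the sketch
below. This is purely illustrative (it is not the Open MPI source, and the
names and structure are simplified), but it shows how a parent can happily
reap the child on SIGCHLD and still sit in poll() forever afterward:

    /* Illustrative sketch only - NOT the Open MPI source; names and structure
       are simplified. It mirrors the sequence in the strace: fork/exec a
       child, reap it on SIGCHLD, then keep sitting in poll() waiting for
       events. */
    #include <poll.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigchld = 0;

    static void on_sigchld(int sig)
    {
        (void)sig;
        got_sigchld = 1;            /* do the real work outside the handler */
    }

    int main(void)
    {
        signal(SIGCHLD, on_sigchld);

        pid_t child = fork();
        if (child == 0) {
            /* child: stands in for the orted running the user's command */
            execlp("hostname", "hostname", (char *)NULL);
            _exit(127);
        }

        for (;;) {
            /* parent: the real code polls its OOB sockets here; polling
               stdin just keeps the sketch self-contained */
            struct pollfd pfd = { .fd = STDIN_FILENO, .events = POLLIN };
            poll(&pfd, 1, 1000 /* ms */);

            if (got_sigchld) {
                int status;
                waitpid(child, &status, WNOHANG);
                printf("child reaped, exit status %d\n", WEXITSTATUS(status));
                got_sigchld = 0;
            }
            /* ...then it loops straight back into poll(), waiting for a
               "job complete" event that, in the failing case, never arrives */
        }
    }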

The problem is that, if the code fork/exec's that local orted, then the
bproc components have no idea it exists. Hence, the system doesn't know it
should shut down when complete because (a) there is still a lingering orted
out there, and (b) the dominant component (bproc, in this case) has no
earthly idea where it is or how to tell it to go away.
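
Schematically, the bookkeeping gap looks something like the sketch below.
Again, this is not the actual code - every name in it is invented for
illustration - but it captures the architecture: the launcher component keeps
its own list of the daemons it started, and job cleanup only walks that list,
so a daemon launched behind its back never gets told to go away:

    /* Sketch of the bookkeeping gap - not the Open MPI source; all names
       (bproc_daemons, bproc_register, bproc_terminate_job) are invented. */
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>

    typedef struct daemon {
        pid_t pid;
        struct daemon *next;
    } daemon_t;

    /* daemons the bproc launcher component started itself */
    static daemon_t *bproc_daemons = NULL;

    static void bproc_register(pid_t pid)
    {
        daemon_t *d = malloc(sizeof(*d));
        d->pid = pid;
        d->next = bproc_daemons;
        bproc_daemons = d;
    }

    static void bproc_terminate_job(void)
    {
        /* only daemons on the component's own list get the terminate order;
           an orted fork/exec'd directly by mpirun never appears here, so it
           lingers and the job never looks complete */
        for (daemon_t *d = bproc_daemons; d != NULL; d = d->next) {
            printf("terminating daemon pid %d\n", (int)d->pid);
            kill(d->pid, SIGTERM);
        }
    }

    int main(void)
    {
        bproc_register(12345);   /* placeholder pid: daemon launched via bproc */
        /* a locally fork/exec'd orted would simply be missing from the list */
        bproc_terminate_job();   /* ...so it is never signalled to exit */
        return 0;
    }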

FWIW, this problem will vanish in 1.3 due to a major change in the way we
handle orteds. However, the fact that we could fork/exec an orted under
bproc at all is something we definitely will have to fix.

Sorry for the problem. I'll have to see if there is a fix for 1.2 - it may
require too much code change and have to wait for 1.3. I'll advise as soon
as I figure this one out.

Ralph

> 
> 
> Hope that helps at least a little.
> 
> [Sean] I appreciate the help. We are running processes on the head node
> because the head node is the only node which can access external resources
> (storage devices).
> 
> 
> Ralph
> 
> On 6/11/07 1:04 PM, "Kelley, Sean" <sean.kel...@solers.com> wrote:
> 
>> I forgot to add that we are using 'bproc'. Launching processes on the compute
>> nodes using bproc works well; I'm not sure whether bproc is involved when
>> processes are launched on the local node.
>> 
>> Sean
>> 
>> 
>> From: users-boun...@open-mpi.org on behalf of Kelley, Sean
>> Sent: Mon 6/11/2007 2:07 PM
>> To: us...@open-mpi.org
>> Subject: [OMPI users] mpirun hanging when processes started on head node
>> 
>> Hi,
>>       We are running the OFED 1.2rc4 distribution containing openmpi-1.2.2 on
>> a Red Hat EL4 U4 system with Scyld Clusterware 4.1. The hardware configuration
>> consists of a Dell 2950 as the head node and 3 Dell 1950 blades as compute
>> nodes, using Cisco TopSpin InfiniBand HCAs and switches for the interconnect.
>> 
>>       When we use 'mpirun' from the OFED/Open MPI distribution to start
>> processes on the compute nodes, everything works correctly. However, when we
>> try to start processes on the head node, the processes appear to run
>> correctly but 'mpirun' hangs and does not terminate until killed. The
>> attached 'run1.tgz' file contains detailed information from running the
>> following command:
>> 
>>      mpirun --hostfile hostfile1 --np 1 --byslot --debug-daemons -d hostname
>> 
>> where 'hostfile1' contains the following:
>> 
>> -1 slots=2 max_slots=2
>> 
>> The 'run.log' file is the output of the above command. The 'strace.out.0' file
>> is the result of running 'strace -f' on the mpirun process (and on the
>> 'hostname' child process, since mpirun simply forks local processes). The
>> child process (pid 23415 in this case) runs to completion and exits
>> successfully. The parent process (mpirun) doesn't appear to recognize that the
>> child has completed and hangs until killed (with a ^C).
>> 
>> Additionally, when we run a set of processes that spans the head node and the
>> compute nodes, the processes on the head node complete successfully, but the
>> processes on the compute nodes do not appear to start. mpirun again appears
>> to hang.
>> 
>> Do I have a configuration error, or have I run into a bug? Thank you in
>> advance for your assistance or suggestions.
>> 
>> Sean
>> 
>> ------
>> Sean M. Kelley
>> sean.kel...@solers.com
>> 
>> 


