My apologies, Manal - I had a slight error on the command line I gave you.
It should be:

mpirun -np XX xterm -e gdb <myprog>

When the xterm windows pop up, you will need to enter each of them and type

run <myargs>

to start the program. If you want gdb to search a specific directory for source
files, you can pass the "-d <mydir>" argument in the first command line:

mpirun -np XX xterm -e gdb -d <mydir> <myprog>

You would still need to issue the "run" command in each xterm window.
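
For example, here is a minimal sketch of the whole sequence, assuming 4
processes, a program called ./myprog that takes a single input file, and
source files in /home/me/src (all hypothetical names; substitute your own):

mpirun -np 4 xterm -e gdb -d /home/me/src ./myprog

Then, at the (gdb) prompt in each xterm that appears:

run input.dat

(input.dat standing in for your real arguments). Once a rank is running or
stopped at a breakpoint, the standard gdb commands "info threads",
"thread <N>", and "bt" let you list, switch between, and backtrace the
threads of that rank.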

Sorry for the error.
Ralph


On 11/10/06 7:48 PM, "Ralph Castain" <r...@lanl.gov> wrote:

> Hi Manal
> 
> No problem at all - happy to be of some help. I believe the command line you
> want is:
> 
> mpirun -np XX xterm -e gdb <myprog> <myargs>
> 
> That will kick off XX copies of xterm, each running gdb on your program inside it.
> We use that command ourselves quite often to help debug the system. Gdb should
> let you switch between threads on each application.
> 
> Hope that is of help
> Ralph
> 
> 
> 
> On 11/10/06 7:23 PM, "Manal Helal" <manalor...@gmail.com> wrote:
> 
>> Hi Ralph
>>  
>> Sorry about this. I understood that -d should make the output directory the
>> xterm, but my expectation was to have a separate xterm for each running
>> process that I can debug! Am I completely off-track?
>>  
>> Where can I find more information about debugging multi-process, multi-threaded
>> programs using gdb? I have the -np processes created by mpirun, and each
>> process has a number of threads running in parallel independently (some
>> semaphores are used). Will I end up having a different xterm for each
>> process (and hopefully for each thread within it as well)?
>>  
>> I am totally lost in this debugging scenario and need some basic help about
>> what to expect.
>>  
>> thank you for your reply,
>>  
>> Best Regards, 
>> Manal
>>  
>> Date: Thu, 09 Nov 2006 21:58:57 -0700
>> From: Ralph Castain <r...@lanl.gov>
>> Subject: Re: [OMPI users] debugging problem
>> To: Open MPI Users <us...@open-mpi.org>
>> Message-ID: <c1795521.3d5%...@lanl.gov>
>> Content-Type: text/plain;       charset="US-ASCII"
>> 
>> Hi Manal
>> 
>> The output you are seeing is caused by the "-d" flag you put in the mpirun
>> command line - it shows normal operation.
>> 
>> Could you tell us something more about why you believe there was an error?
>> 
>> Ralph
>> 
>> 
>> 
>> On 11/9/06 9:34 PM, "Manal Helal" <manalor...@gmail.com> wrote:
>> 
>>> > Hi
>>> >
>>> > I am trying to run the following command:
>>> >
>>> >   mpirun -np XX -d xterm -e gdb <myprog> <myargs>
>>> >
>>> >
>>> > and I am receiving these errors:
>>> >
>>> > *****************
>>> >   [leo01:02141] [0,0,0] setting up session dir with
>>> > [leo01:02141]   universe default-universe
>>> > [leo01:02141]   user mhelal
>>> > [leo01:02141]   host leo01
>>> > [leo01:02141]   jobid 0
>>> > [leo01:02141]   procid 0
>>> > [leo01:02141] procdir:
>>> > /tmp/openmpi-sessions-mhelal@leo01_0/default-universe/0/0
>>> > [leo01:02141] jobdir:
>>> > /tmp/openmpi-sessions-mhelal@leo01_0/default-universe/0
>>> > [leo01:02141] unidir:
>>> > /tmp/openmpi-sessions-mhelal@leo01_0/default-universe
>>> > [leo01:02141] top: openmpi-sessions-mhelal@leo01_0
>>> > [leo01:02141] tmp: /tmp
>>> > [leo01:02141] [0,0,0] contact_file
>>> > /tmp/openmpi-sessions-mhelal@leo01_0/default-universe/universe-setup.txt
>>> > [leo01:02141] [0,0,0] wrote setup file
>>> > [leo01:02141] pls:rsh: local csh: 0, local bash: 1
>>> > [leo01:02141] pls:rsh: assuming same remote shell as local shell
>>> > [leo01:02141] pls:rsh: remote csh: 0, remote bash: 1
>>> > [leo01:02141] pls:rsh: final template argv:
>>> > [leo01:02141] pls:rsh:     /usr/bin/ssh <template> orted --debug
>>> > --bootproxy 1 --name <template> --num_procs 2 --vpid_start 0 --nodename
>>> > <template> --universe mhelal@leo01:default-universe --nsreplica
>>> > "0.0.0;tcp://129.94.242.77:40738" --gprreplica
>>> > "0.0.0;tcp://129.94.242.77:40738" --mpi-call-yield 0
>>> > [leo01:02141] pls:rsh: launching on node localhost
>>> > [leo01:02141] pls:rsh: oversubscribed -- setting mpi_yield_when_idle to 1
>>> > (1 4)
>>> > [leo01:02141] pls:rsh: localhost is a LOCAL node
>>> > [leo01:02141] pls:rsh: changing to directory /import/eno/1/mhelal
>>> > [leo01:02141] pls:rsh: executing: orted --debug --bootproxy 1 --name 0.0.1
>>> > --num_procs 2 --vpid_start 0 --nodename localhost --universe
>>> > mhelal@leo01:default-universe --nsreplica
>>> > "0.0.0;tcp://129.94.242.77:40738" --gprreplica
>>> > "0.0.0;tcp://129.94.242.77:40738" --mpi-call-yield 1
>>> > [leo01:02143] [0,0,1] setting up session dir with
>>> > [leo01:02143]   universe default-universe
>>> > [leo01:02143]   user mhelal
>>> > [leo01:02143]   host localhost
>>> > [leo01:02143]   jobid 0
>>> > [leo01:02143]   procid 1
>>> > [leo01:02143] procdir:
>>> > /tmp/openmpi-sessions-mhelal@localhost_0/default-universe/0/1
>>> > [leo01:02143] jobdir:
>>> > /tmp/openmpi-sessions-mhelal@localhost_0/default-universe/0
>>> > [leo01:02143] unidir:
>>> > /tmp/openmpi-sessions-mhelal@localhost_0/default-universe
>>> > [leo01:02143] top: openmpi-sessions-mhelal@localhost_0
>>> > [leo01:02143] tmp: /tmp
>>> > [leo01:02143] sess_dir_finalize: proc session dir not empty - leaving
>>> > [leo01:02143] sess_dir_finalize: proc session dir not empty - leaving
>>> > [leo01:02143] sess_dir_finalize: proc session dir not empty - leaving
>>> > [leo01:02143] sess_dir_finalize: proc session dir not empty - leaving
>>> > [leo01:02143] orted: job_state_callback(jobid = 1, state =
>>> > ORTE_PROC_STATE_TERMINATED)
>>> > [leo01:02143] sess_dir_finalize: job session dir not empty - leaving
>>> > [leo01:02143] sess_dir_finalize: found proc session dir empty - deleting
>>> > [leo01:02143] sess_dir_finalize: found job session dir empty - deleting
>>> > [leo01:02143] sess_dir_finalize: found univ session dir empty - deleting
>>> > [leo01:02143] sess_dir_finalize: found top session dir empty - deleting
>>> >
>>> > ****************
>>> >
>>> > Will you please have a look and advise, if possible, where I could change
>>> > these paths? When I checked, the paths were not there at all.
>>> >
>>> > Best Regards,
>>> >
>>> > Manal
>>> > _______________________________________________
>>> > users mailing list
>>> > us...@open-mpi.org
>>> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>> 
>> 
> 

