For anyone following this thread:

I have completed the IOF options discussed below. Specifically, I have added the following:

* a new "timestamp-output" option that timestamp's each line of output

* a new "output-filename" option that redirects each proc's output to a separate rank-named file.

* a new "xterm" option that redirects the output of the specified ranks to a separate xterm window.

You can obtain a copy of the updated code at:

http://www.open-mpi.org/nightly/trunk/openmpi-1.4a1r20392.tar.gz

If you install this, do a "man mpirun" to see a detailed explanation of how to use these options.
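
For example (a rough sketch only - the man page is the authoritative
reference for the exact spellings and arguments, and "my_app" is just a
placeholder):

# prefix every line of output with a timestamp
mpirun -np 4 --timestamp-output ./my_app

# send each rank's output to its own file, named from "out" plus the rank
mpirun -np 4 --output-filename out ./my_app

# open a separate xterm for the listed ranks; "-1" means all ranks
mpirun -np 4 --xterm 0,2 ./my_app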

Any feedback you can provide on them would be most appreciated! If they look okay and prove useful, I'll try to get them included in an upcoming release as soon as possible.

Thanks
Ralph


On Jan 26, 2009, at 1:39 PM, jody wrote:

That's cool then - I have written a shell script
which automatically does the xhost stuff for all
nodes in my hostfile :)

On Mon, Jan 26, 2009 at 9:25 PM, Ralph Castain <r...@lanl.gov> wrote:

On Jan 26, 2009, at 1:20 PM, jody wrote:

Hi Brian


I would rather not have mpirun doing an xhost command - I think that is
beyond our comfort zone. Frankly, if someone wants to do this, it is up to
them to have things properly set up on their machine - as a rule, we don't
mess with your machine's configuration. Makes sys admins upset :-)

So what you mean is that the user must do the xhost before using the
xterm feature?
If not, how else can I have xterms from another machine display locally?

That is correct. I don't think that is -too- odious a requirement - I'm just not comfortable modifying access controls from within OMPI since xhost
persists after OMPI is done with the job.



However, I can check to ensure that the DISPLAY value is locally set and
automatically export it for you (so you don't have to do the -x DISPLAY
option). What I have done is provide a param whereby you can tell us what
command to use to generate the new screen, with it defaulting to "xterm -e".
I also allow you to specify which ranks you want displayed this way - you
can specify "all" by giving it a "-1".

Cool!

Will hopefully have this done today or tomorrow. It will be in the OMPI
trunk repo for now. I'll send out a note pointing to it so people can check
all these options out - I would really appreciate the help to ensure things
are working across as many platforms as possible before we put it in the
official release!

I'll be happy to test these new features!

Jody

Hi
I have written some shell scripts which make it easy to send each
process's output to its own xterm, for normal execution (run_sh.sh),
gdb (run_gdb.sh), and valgrind (run_vg.sh).

In order for the xterms to be shown on your machine,
you have to set the DISPLAY variable on every host
(if this is not done by ssh)
export DISPLAY=myhost:0.0

on myhost you may have to allow access:
do
xhost +<host-name>
for each machine in your hostfile.

Then start
mpirun -np 12 -x DISPLAY run_gdb.sh myApp arg1 arg2 arg3

I've attached these little scripts to this mail.
Feel free to use them.
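
The wrappers are all variations on one idea - roughly this shape
(a sketch, not the attached scripts verbatim):

#!/bin/sh
# run_gdb.sh (sketch): each MPI process opens its own xterm and runs the
# real application under gdb; "$@" is the application plus its arguments.
# The title assumes your Open MPI exports OMPI_COMM_WORLD_RANK; drop it
# otherwise.
exec xterm -T "rank ${OMPI_COMM_WORLD_RANK:-?}" -e gdb --args "$@"

run_vg.sh would be the same idea with "valgrind" in place of "gdb --args",
and run_sh.sh would drop the debugger entirely.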

I've started working on my "complicated" way, i.e.
wrappers redirecting output via sockets to a server.
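
In its simplest form that idea is just a wrapper that tags its own output
and pipes it to a collector - purely a sketch, with made-up host and port:

#!/bin/sh
# sketch: tag each line with this process's rank and ship stdout/stderr
# over TCP to a collector; "collector-host" and port 4242 are placeholders
"$@" 2>&1 | sed "s/^/[rank ${OMPI_COMM_WORLD_RANK:-?}] /" | nc collector-host 4242

On the collector side something like "nc -l 4242" (or "nc -l -p 4242",
depending on your netcat) will show everything; a real server that lets
you switch between ranks would take nc's place there.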

Jody

On Sun, Jan 25, 2009 at 1:20 PM, Ralph Castain <r...@lanl.gov> wrote:

For those of you following this thread:

I have been impressed by the various methods used to grab the output from
processes. Since this is clearly something of interest to a broad audience,
I would like to try and make this easier to do by adding some options to
mpirun. Coming in 1.3.1 will be --tag-output, which will automatically tag
each line of output with the rank of the process - this was already in the
works, but obviously doesn't meet the needs expressed here.

I have done some prelim work on a couple of options based on this thread:

1. spawn a screen and redirect process output to it, with the ability to
request separate screens for each specified rank. Obviously, specifying all
ranks would be the equivalent of replacing "my_app" on the mpirun cmd line
with "xterm my_app". However, there are cases where you only need to see
the output from a subset of the ranks, and that is the intent of this
option.

2. redirect output of specified processes to files using the provided
filename appended with ".rank". You can do this for all ranks, or a
specified subset of them.

3. timestamp output

Is there anything else people would like to see?

It is also possible to write a dedicated app such as Jody described, but
that is outside my purview for now due to priorities. However, I can
provide technical advice to such an effort, so feel free to ask.

Ralph


On Jan 23, 2009, at 12:19 PM, Gijsbert Wiesenekker wrote:

jody wrote:

Hi
I have a small cluster consisting of 9 computers (8x2 CPUs, 1x4 CPUs).
I would like to be able to observe the output of the processes
separately during an mpirun.

What I currently do is to apply mpirun to a shell script which opens an
xterm for each process, which then starts the actual application.

This works, but is a bit complicated, e.g. finding the window you're
interested in among 19 others.

So I was wondering: is there a possibility to capture the processes'
outputs separately, so I can make an application in which I can switch
between the different processor outputs?
I could imagine that could be done by wrapper applications which redirect
the output over a TCP socket to a server application.

But perhaps there is an easier way, or something like this already exists?

Thank You
Jody


For C I use a printf wrapper function that writes the output to a logfile.
I derive the name of the logfile from the mpi_id. It prefixes the lines
with a time-stamp, so you also get some basic profile information. I can
send you the source code if you like.

Regards,
Gijsbert






<run_gdb.sh> <run_vg.sh> <run_sh.sh>




