On 26 Jul 2011, at 19:59, Jack Bryan wrote:
> Any help is appreciated.
Your best option is to distill this down to a short example program which shows
what's happening versus what you think should be happening.
Ashley.
--
Ashley Pittman, Bath, UK.
Padb - A parallel job inspection tool for cluster computing
http://padb.pittman.org.uk
ry the internal
data structures it uses were corrupt.
> In valgrind,
>
> there are some invalid read and write errors, but no errors about this
> free(): invalid next size .
You need to fix the invalid write errors; the above error is almost certainly a
symptom of these.
Ashley.
proof-read it before you go public; I have many thousands of hours of EC2 time
to my name and have spent much of it configuring and testing MPI libraries
within them, to allow me to test my debugger which sits on top of them.
Ashley.
For EC2 this would mean all in the same region.
As you correctly notice, not all of your hosts are on the same network, which
means that they won't all be able to contact each other over the network;
without this Open MPI is not going to be able to work.
Ashley.
o do it permanently from
the next boot, obviously you should check with your network administrator
before doing this.
Ashley.
ell as for
> my own record. Why would you say I shouldn't be doing so?
>
> Regards,
>
> Tena
>
>
> On 2/13/11 1:29 PM, "Ashley Pittman" wrote:
>
>> On 12 Feb 2011, at 14:06, Ralph Castain wrote:
>>
>>> Have you searched the email archi
r volume with other
people.
Ashley.
PS: I would recommend reading up on sudo and su; "sudo su" is not a command you
should be typing.
en using the
--prefix option to mpirun or configuring OpenMPI with the
--enable-mpirun-prefix-by-default option.
See:
http://www.open-mpi.org/faq/?category=running#run-prereqs
http://www.open-mpi.org/faq/?category=running#mpirun-prefix
cast and the other collective operations are just that, "collective" and
have to be called from all ranks in a communicator with the same parameters and
in the same order.
Ashley.
can't see any difference between the two.
Ashley.
On 10 Dec 2010, at 18:25, Ralph Castain wrote:
>
> So if you wanted to get your own local rank, you would call:
>
> my_local_rank = orte_ess.proc_get_local_rank(ORTE_PROC_MY_NAME);
n and what you can expect it to do.
http://www.open-mpi.org/faq/?category=running#force-aggressive-degraded
Ashley.
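As a hedged sketch of what the linked FAQ entry describes: degraded (yielding) mode can be forced at launch with the mpi_yield_when_idle MCA parameter, so idle ranks yield the CPU instead of spinning aggressively. The application name here is a placeholder.

```shell
# Force "degraded" mode so idle ranks call yield instead of busy-waiting;
# ./my_app is illustrative, not from the original post.
mpirun --mca mpi_yield_when_idle 1 -np 8 ./my_app
```

This is a launch-command fragment; it only makes sense on a machine with Open MPI installed.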
show you the same
> > > > type of information, it won't allow you to point-and-click through the
> > > > source or single step through the code but it is lightweight and will
> > > > show you the information which you need to know.
> > > >
k through the problems but only if you
work with me and provide details of the integration, in particular I've sent
you a version which has a small patch and some debug printfs added, if you
could send me the output from this I'd be able to tell you if it was likely to
work and how to go
3.2 installed.
padb -Ormgr=pbs -Q
Or - find the node where the PBS script is being executed, check that the
ompi-ps command is returning the jobid and then run
padb -Ormgr=orte -Q
Ashley,
any help run "padb -axt" for the stack traces and send the output to this
list.
The web-site is in my signature or there is a new beta release out this week at
http://padb.googlecode.com/files/padb-3.2-beta1.tar.gz
Ashley.
ftware, if this isn't the case for your systems then the easiest way might be
to compile open mpi from source (on the older of the two machines would be
best) and to install it to a common directory on both machines.
Ashley.
he worker nodes.
Ashley.
do the marshalling of data, and all ranks would be started simultaneously;
you'll find this easier than having one single-rank job spawn more ranks as
required.
Ashley,
nodes under any programming paradigm.
However if you mean "execution threads", or in MPI parlance "ranks", then yes:
under Open MPI each "rank" will be a separate process on one of the nodes in the
host list. As Jody says, look at MPI_Comm_spawn for this.
Ashley,
s there is but I'm not familiar enough with OMPI to be able to tell you; I'm
sure somebody can though. If my suspicion above is correct, I doubt that
knowing what this value is would help you at all in terms of application
performance.
Ashley.
was a receive queue of tens of thousands of messages; in this case each and
every receive performs slowly unless the descriptor is near the front of the
queue, so the concern is not purely about memory usage at individual processes,
although that can also be a factor.
Ashley,
ething I've long advocated and have implemented and played around with in the
past; however it's not yet available to users today, though I believe it will
be shortly, and as you'll have read my belief is that it's going to be a very
useful addition to the MPI offering.
Ashley,
in if used properly: in the good case it never causes any process to block, so
the cost is only that of the CPU cycles the code itself takes; in the bad case,
where it has to delay a rank, this tends to have a positive impact on
performance.
> Would it be application/communicator
waiting for it on N+25 and immediately starting
another one, again waiting for it 25 steps later.
Ashley.
ronous barrier could help; in theory it should have the effect of being able to
keep processes in sync without any additional overhead in the case that they
are already well synchronised.
Ashley,
easiest might be to have a single rank receive all messages and keep them in a
queue, and then use MPI_Ssend() to forward messages to your "consumer" ranks.
Substitute ranks for threads in the above text as you feel is appropriate.
Ashley,
ale as well as the orte integration (pdsh runs out of file
descriptors eventually) but is more generic and might get you to somewhere that
works. If your job spans more than 32 nodes you may need to set the FANOUT
variable for pdsh to work.
Ashley,
out (connecting)
> Unexpected EOF from Inner stderr (connecting)
> Unexpected exit from parallel command (state=connecting)
> Bad exit code from parallel command (exit_code=131)
and is open-source
Ashley (padb developer)
e handle and making it a fully fledged release, so you should try this to see
if it makes a difference to your problem. The website for padb (containing
links to its own mailing lists) is in my signature.
Ashley (the padb developer)
o a better
algorithm gets picked by the library.
Ashley.
narrow in on problems quickly.
Also, uniquely to MPI, it's possible to see the "message queues" for ranks
within an MPI application, which can help with programming.
Ashley.
t the very end when an rpm-tmp
> file is executed, but that file has disappeared so I don't really know what
> it does. I thought it might be apparent in the spec file, but it's certainly
> not apparent to me! Any help or advice would be appreciated.
ware and the same OS. All floating point operations by the MPI library are
expected to be deterministic, but changing the process layout or MPI settings
can affect this, and of course anything the application does can introduce
differences as well.
Ashley.
es.html#mpi-queue
> Thanks, this would be really useful for jobs that only hang randomly or after
> very long runtimes.
You're right, for example it's used to good effect in the open-mpi automated
testing as well as at numerous other sites from the large to the small.
Ashley.
to the cause I've no idea; I've only seen it once or twice in the last six
months, and not on installations I've installed myself. I've never been able to
find out the underlying cause, or why some machines report this error and some
don't.
Ashley,
the debug info but rather use fixed sizes and offsets:
http://code.google.com/p/padb/source/detail?r=355
Verify the type information if present:
http://code.google.com/p/padb/source/detail?r=386
> However,
> some users prefer the classic launch with -tv, and this seems to be failing
> wi
On 11 Jan 2010, at 06:20, Jed Brown wrote:
> On Sun, 10 Jan 2010 19:29:18 +0000, Ashley Pittman
> wrote:
>> It'll show you parallel stack traces but won't let you single step for
>> example.
>
> Two lightweight options if you want stepping, breakpoints, wat
believe Eclipse has some support for parallel programs; I've not used it,
however, so can't comment on its features.
Ashley.
nodes.
No prior configuration or starting of daemons is required. No effort is
made to prevent multiple jobs from starting on the same nodes and no
effort is made to maintain a "queue" of jobs waiting for nodes to become
free. Each job is independent, and runs where you tell it to
im
You'll need to use the SVN version of padb for this; the "orte-job-step"
option tells it to attach to the first spawned job. Use orte-ps to see
the list of job steps.
Ashley,
houldn't this be MYARGS=$@? It'll change the way quoted args are
forwarded to the parallel job.
Ashley,
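The quoting difference referred to above can be sketched with a tiny shell experiment; the function name and argument values here are illustrative, not from the original script.

```shell
#!/bin/sh
# count prints how many arguments it receives.
count() { echo "$#"; }

# Simulate a wrapper invoked with one quoted argument and one plain one.
set -- "one two" three

count $@      # unquoted: word-splitting breaks the first arg, prints 3
count "$@"    # quoted: argument boundaries preserved, prints 2
```

This is why the choice of expansion changes how quoted arguments reach the parallel job: only "$@" re-creates the original argument list.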
and see these queues. That I know of, there are three tools which use this:
TotalView, DDT, or my own tool, padb. TotalView and DDT are both full-featured
graphical debuggers and commercial products; padb is an open-source text-based
tool.
Ashley,
consume 100% of idle CPU time, but when other programs want to use the CPU the
MPI processes will not hog it; rather, they let the other processes use as much
CPU time as they want and just spin when the CPU would otherwise be idle. This
is something I use daily and it greatly increases the responsiveness of
k about binding and affinity won't help either, process
binding is about squeezing the last 15% of performance out of a system
and making performance reproducible, it has no bearing on correctness or
scalability. If you're not running on a dedicated machine which with
firefox running I guess
On Wed, 2009-12-02 at 13:11 -0500, Brock Palen wrote:
> On Dec 1, 2009, at 11:15 AM, Ashley Pittman wrote:
> > On Tue, 2009-12-01 at 10:46 -0500, Brock Palen wrote:
> >> The attached code, is an example where openmpi/1.3.2 will lock up, if
> >> ran on 48 cores, of IB (
Hopefully this information would be useful.
http://padb.pittman.org.uk/full-report.html
Ashley Pittman.
es for messages from the OOM killer?
Ashley,
ing hangs in a parallel job take a look at the tool
linked to below (padb), it should be able to give you a parallel stack
trace and the message queues for the job.
http://padb.pittman.org.uk/full-report.html
Ashley,
zeof(double) so I wouldn't rule out this theory.
Also, you are mallocing at least 4 GB per process, and quite possibly a large
amount for buffering in the MPI library as well; it could be that you are
simply running out of memory.
Ashley,
ble feature for any software and particularly a library IMHO.
https://svn.open-mpi.org/trac/ompi/ticket/1720
Ashley,
rarely a good
solution.
Ashley.
On Wed, 2009-10-07 at 18:42 +0300, Roman Cheplyaka wrote:
> As a slight modification, you can write a wrapper script
>
> #!/bin/sh
> my_exe < inputs.txt
>
> and pass it to mpirun.
and
that executable is then executed locally.
> Is the implication correct or is there some way around.
Typically some kind of shared filesystem would be used; NFS, for
example.
Ashley,
ou need
which should be fast in all cases.
Ashley,
ording to the manpage.
> posix_memalign is the one to use.
> > > https://svn.open-mpi.org/trac/ompi/changeset/21744
ld also suggest that as you are seeing random hangs and crashes
running your code under Valgrind might be advantageous.
Ashley Pittman.
On Sun, 2009-09-27 at 02:05 +0800, guosong wrote:
> Yes, I know there should be a bug. But I do not know where and why.
> The strange thing was sometimes
he program did start and has really hung then you can get more
in-depth information about it using padb which is linked to in my
signature.
Ashley,
fairly
trivially however.
The problem being embarrassingly parallel is of no consequence, beyond
the fact that if it were you wouldn't need either MPI or MapReduce.
Ashley.
al limitation of classical computing and
one that people have learned to live with.
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
Ashley,
probably need
to check with the local admins for a definitive answer.
Ashley,
ommand is running will tell you the hostname where every rank is running; or,
if you want more information (load, CPU usage, etc.), you can use padb, the
link for which is in my signature.
Ashley,
On Wed, 2009-07-08 at 15:43 -0400, Michael Di Domenico wrote:
> On Wed, Jul 8, 2009 at 3:33 PM, Ashley Pittman wrote:
> >> When i run tping i get:
> >> ELAN_EXCEOPTIOn @ --: 6 (Initialization error)
> >> elan_init: Can't get capability from environment
> >
On Wed, 2009-07-08 at 15:09 -0400, Michael Di Domenico wrote:
> On Wed, Jul 8, 2009 at 12:33 PM, Ashley Pittman wrote:
> > Is the machine configured correctly to allow non OpenMPI QsNet programs
> > to run, for example tping?
> >
> > Which resource manager are you runn
the list its a good place
> to start
Is the machine configured correctly to allow non OpenMPI QsNet programs
to run, for example tping?
Which resource manager are you running? I think slurm compiled for RMS
is essential.
Ashley,
code faster if you run the patches.
Ashley,
ot able to remove it. I was just trying to outrun it by setting the
> > $PATH variable to point first at my local installation.
> >
> >
> > Catalin
require it but I don't have access to hardware either currently.
Ashley,
As the error is from MPI_Init(), you can safely ignore it from an end-user
perspective.
Ashley.
t() so it's possible for memory to be allocated on the wrong quad; the
discussion was about moving the binding to the orte process, as I recall?
From my testing of process affinity you tend to get much more consistent
results with it on and much more unpredictable results with it off; I'd
question whether it's working properly if you are seeing an 88-93% range in
the results.
Ashley Pittman.
On Tue, 2009-05-19 at 14:01 -0400, Noam Bernstein wrote:
I'm glad you got to the bottom of it.
> With one of them, apparently, CP2K will silently go on if
> the
> file is missing, but then lock up in an MPI call (maybe it leaves
> some
> variables uninitialized, and then uses them in the call
On Tue, 2009-05-19 at 11:01 -0400, Noam Bernstein wrote:
> I'd suspect the filesystem too, except that it's hung up in an MPI
> call. As I said
> before, the whole thing is bizarre. It doesn't matter where the
> executable is,
> just what CWD is (i.e. I can do mpirun /scratch/exec or mpirun
On Mon, 2009-05-18 at 17:05 -0400, Noam Bernstein wrote:
> The code is complicated, the input files are big and lead to long
> computation
> times, so I don't think I'll be able to make a simple test case.
> Instead
> I attached to the hanging processes (all 8 of them) with gdb
> during the h
rs.
I've always used the compile options to specify max message size and rep
count, the -msglen option is not one I've seen before.
Ashley Pittman.
On Wed, 2009-04-22 at 12:40 +0530, vkm wrote:
> The same amount of memory required for recvbuf. So at the least each
> node should have 36GB of memory.
>
> Am I calculating right ? Please correct.
Your calculation looks correct; the conclusion is slightly wrong,
however. The application buffers
ollecting "all done" messages depending on whether the message
> indicates a graph operation or signals "all done".
Exactly; that way you have a defined number of messages which can be
calculated locally for each process, and hence there is no need to use
Probe and you can get rid of the MPI_Barrier call.
Ashley Pittman.
On 23 Mar 2009, at 23:36, Shaun Jackman wrote:
loop {
    MPI_Ibsend (for every edge of every leaf node)
    MPI_Barrier
    MPI_Iprobe/MPI_Recv (until no messages pending)
    MPI_Allreduce (number of nodes removed)
} until (no nodes removed by any node)
Previously, I attempted to use a single MPI_Allreduce w
On 23 Mar 2009, at 21:11, Ralph Castain wrote:
Just one point to emphasize - Eugene said it, but many times people
don't fully grasp the implication.
On an MPI_Allreduce, the algorithm requires that all processes -enter-
the call before anyone can exit.
It does -not- require that they all e
ier will still bring
it back in step.
Another interesting challenge is to benchmark MPI_Barrier, it's not as
easy as you might think...
Ashley Pittman.
On 12 Feb 2009, at 15:53, Reuben D. Budiardja wrote:
Hello,
I am having a problem where, if a program is compiled with OpenMPI,
Valgrind doesn't work correctly, i.e. it does not show the memory leaks
like it's supposed to. The same test program compiled with regular
"gfortran" and run under Valgri
On 11 Feb 2009, at 14:13, Prentice Bisbal wrote:
Douglas Guptill wrote:
Thanks. I did end up building for all the compilers under separate
trees. It looks like the --exec-prefix option is only of use if your
compiling 32-bit and 64-bit versions using the same compiler.
This is what I decided
ather, AllReduce and AlltoAll also have an implicit barrier by
virtue of the dataflow required, all processes need input from all other
processes before they can return.
Ashley Pittman.
On Mon, 2009-01-19 at 12:50 +0530, gaurav gupta wrote:
> Hello,
>
> I want to know that which task is running on which node. Is there any
> way to know this.
From where? From the command line outside of a running job, then the new
open-ps command in v1.3 will give you this information. In 1.2
On Sat, 2008-10-18 at 00:16 +0900, Raymond Wan wrote:
>
> Is there a package that I neglected to install? I did an "aptitude
> search openmpi" and installed everything listed... :-) Or perhaps I
> haven't removed all trace of mpich?
According to packages.debian.org there isn't an openmpi paca
On Wed, 2008-10-08 at 09:46 -0400, Jeff Squyres wrote:
> - Have you tried compiling Open MPI with something other than GCC?
> Just this week, we've gotten some reports from an OMPI member that
> they are sometimes seeing *huge* performance differences with OMPI
> compiled with GCC vs. any ot
On Sat, 2008-08-16 at 08:03 -0400, Jeff Squyres wrote:
> - large all to all operations are very stressful on the network, even
> if you have very low latency / high bandwidth networking such as DDR IB
>
> - if you only have 1 IB HCA in a machine with 8 cores, the problem
> becomes even more di
One tip is to use the --log-file=valgrind.out.%q{OMPI_MCA_ns_nds_vpid} option
to valgrind, which will name the output file according to rank. In the 1.3
series the variable has changed from OMPI_MCA_ns_nds_vpid to
OMPI_COMM_WORLD_RANK.
Ashley.
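Put together, the tip above looks something like the following; the binary name and process count are placeholders, and the variable name shown is the 1.2-series one (use OMPI_COMM_WORLD_RANK with 1.3).

```shell
# One valgrind log per rank: %q{VAR} is expanded by valgrind from the
# environment of each launched process, so rank N writes valgrind.out.N.
mpirun -np 4 valgrind --log-file=valgrind.out.%q{OMPI_MCA_ns_nds_vpid} ./my_app
```

This is a launch-command fragment and needs a working Open MPI installation to run.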
On Tue, 2008-08-05 at 17:51 +0200, George Bosilca w
On Wed, 2008-07-30 at 10:45 -0700, Scott Beardsley wrote:
> I'm attempting to move to OpenMPI from another MPICH-derived
> implementation. I compiled openmpi 1.2.6 using the following configure:
>
> ./configure --build=x86_64-redhat-linux-gnu
> --host=x86_64-redhat-linux-gnu --target=x86_64-redh
On Sun, 2008-07-13 at 09:16 -0400, Jeff Squyres wrote:
> On Jul 13, 2008, at 9:11 AM, Tom Riddle wrote:
>
> > Does anyone know if this feature has been incorporated yet? I did a
> > ./configure --help but do not see the enable-ptmalloc2-internal
> > option.
> >
> > - The ptmalloc2 memory manager
() functions in
the libopen-pal library is preventing valgrind from intercepting these
functions in glibc and hence dramatically reducing the benefit which
valgrind brings.
Ashley Pittman.
the node is given local_rank=0
>
> If there are others that would be useful, now is definitely the time to
> speak up!
The only other one I'd like to see is some kind of global identifier for
the job but as far as I can see I don't believe that openmpi has such a
concept.
Ashley Pittman.
On Fri, 2008-07-11 at 07:59 -0600, Ralph H Castain wrote:
> Not until next week's meeting, but I would guess we would simply prepend the
> rank. The issue will be how often to tag the output since we write it in
> fragments to avoid blocking - so do we tag the fragment, look for newlines
> and tag
On Fri, 2008-07-11 at 07:42 -0600, Ralph H Castain wrote:
>
>
> On 7/11/08 7:32 AM, "Ashley Pittman"
> wrote:
>
> > On Fri, 2008-07-11 at 07:20 -0600, Ralph H Castain wrote:
> >> This variable is only for internal use and has no applicability to a user.
On Fri, 2008-07-11 at 07:20 -0600, Ralph H Castain wrote:
> This variable is only for internal use and has no applicability to a user.
> Basically, it is used by the local daemon to tell an application process its
> rank when launched.
>
> Note that it disappears in v1.3...so I wouldn't recommend
l.
> ==17839== Using LibVEX rev 1658, a library for dynamic binary
> translation.
> ==17839== Copyright (C) 2004-2006, and GNU GPL'd, by OpenWorks LLP.
> ==17839== Using valgrind-3.2.1, a dynamic binary instrumentation
> framework.
> ==17839== Copyright (C) 2000-2006, and GNU GPL'd, by Julian Seward et
> al.
> ==17839== For more details, rerun with: -v
Ashley Pittman.
rised
mpirun doesn't have an option for this actually, it's a fairly common
thing to want.
Ashley Pittman.
#!/bin/sh
"$@" | sed "s/^/\[rk:$OMPI_MCA_ns_nds_vpid,sz:$OMPI_MCA_ns_nds_num_procs\]/"
On Tue, 2008-06-24 at 11:06 -0400, Mark Dobossy wrote:
> Lately I have be
_PATH is set correctly
in the shell which is launching the program.
Ashley Pittman.
sccomp@demo4-sles-10-1-fe:~/benchmarks/IMB_3.0/src> mpirun -H comp00,comp01
./IMB-MPI1
/opt/openmpi-1.2.6/intel/bin/orted: error while loading shared libraries:
libimf.so: cannot open shared object file: No suc
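A hedged sketch of the usual fix for an error like the one above: make sure the remote orted can find both the Open MPI installation and the Intel compiler runtime (which provides libimf.so). The paths below are assumptions based on the error message, not known-correct values for this system.

```shell
# Make the Intel runtime visible on the remote nodes (path illustrative),
# and tell orted where the Open MPI installation lives via --prefix.
export LD_LIBRARY_PATH=/opt/intel/fce/lib:$LD_LIBRARY_PATH
mpirun --prefix /opt/openmpi-1.2.6/intel -H comp00,comp01 ./IMB-MPI1
```

Configuring Open MPI with --enable-mpirun-prefix-by-default avoids needing --prefix on every launch.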
refix option to mpirun.
Or do you mean static linking of the tools? I could go for that if
there is a configure option for it.
Ashley Pittman.
On Mon, 2008-06-09 at 08:27 -0700, Doug Reeder wrote:
> Ashley,
>
> It could work but I think you would be better off to try and
> statical
onent with icc?
Yours,
Ashley Pittman,
I notice on the download page all file sizes are listed as 0 KB; this is
presumably an error somewhere.
http://www.open-mpi.org/software/ompi/v1.2/
Ashley,