start daemons on other nodes, or is there a different
mechanism altogether?
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
de 24/7 support to random users around the
world (we will have our own deadlines and internal deliverables that
need to be met, for example -- and those will inevitably sometimes take
precedence over answering support e-mails). But we will do our best to
provide access, engage third party developer
e; although the MPI layer is shaping up quite
well, the run-time environment is taking a bit longer than expected --
we won't have had a chance to wring it through rigorous testing by the
end of the month.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
the set of "similar" systems out
there is pretty small: every cluster is different. Every one. There
are very, very few cookie-cutter clusters out there that can truly be
called "identical" to other clusters. As such, even expecting serial
binaries to be portable is quite a stretch.
To be blunt: an MPI ABI and/or an uber-mpirun will not solve any of
these other issues.
My $0.02: source code portability is enough. This was actually quite
wise of the MPI Forum; specifying mpi.h and/or making an ABI was never
part of the plan. Any valid MPI application can be recompiled for
other target systems. Indeed, properly engineered parallel
applications may only need to recompile small portions of their code
base to use a different MPI implementation. And with a little effort,
apps can be made to be MPI-independent (which is a lot less work than
getting all MPI implementations to agree to an ABI / uber-mpirun).
Sure, it would be great to not have to recompile apps, but given the
current state of technology, the sheer number of MPI implementations
that would have to agree to make an MPI ABI useful, and the fundamental
differences in goals between the different MPI implementations, it's
hard to justify all the work that would be required for this effort --
just to avoid a simple thing like recompiling.
Thanks for your time in reading this.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
This seems like a perfect project for a bright Master's student.
Anyone care to open up a SourceForge project for it? :-)
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
Fortran;
no internals need change in any implementation.
That is an extremely inaccurate statement. :-)
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
want to use an ABI will use it. Those who
do *not* want an ABI do not have to have it forced upon them.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
mplementations need to change
- no software engineering issues or practices for existing MPI
implementations need to change
- people who want it will use it (and those who don't, won't)
Are you trying to jump start MPI-3?
-
Sidenote: I'll try to keep up with this conve
tter chance of going forward.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
I'm kidding, of course. Open MPI's first releases will have all of
MPI-2 except the one-sided functionality (this was the plan from the
beginning, actually -- there was enough else to do and not much demand
for the one-sided stuff to justify putting it in the first versions).
What featu
n MPI.
"Show us the code!"
I have a long, public track record of high-quality open source
software, and am firmly committed to making Open MPI be in the same
category.
We will show you the code soon, I promise. We've come too far to *not*
do so! :-)
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
more transparent development
cycle...there's not even a devel or commit mailing list.
Looking forward to the release.
Philip
Please adopt a release-early, release-often strategy.
Could not agree more!
"Show us the code!"
-scott
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
y reason why this
can't happen for later releases. :-)
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
On Jun 15, 2005, at 6:07 PM, Jeff Squyres wrote:
bin/ompi_info presents an opportunity to help all us shlubs that
have to do gnu build systems.
BTW, I forgot to mention -- try running "ompi_info -all" and/or
"ompi_info -all -parsable". It is explicitly aimed at those wh
e following
reasons:
- this is the way that I've seen most Autoconf-enabled build systems
work
- if people want to use absolute names for compilers, they can
- those who don't want absolute names aren't forced to (there's many an
installation out there that only has the C bindings and doesn't give a
whit about C++ or Fortran)
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
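As a rough sketch of the build-system integration being discussed (assuming your mpicc supports the standard Open MPI --showme options; check mpicc --help on your installation), a GNU-style configure step could simply ask the wrapper what it would do:

  # Show the compile-time flags the wrapper would add
  mpicc --showme:compile

  # Show the link-time flags and libraries the wrapper would add
  mpicc --showme:link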
's. As
other mails today have indicated, OMPI has fully functional wrapper
compilers (mpicc, mpiCC, mpif77, mpif90) and an ompi_info command
(analogous to, but greatly superseding LAM's laminfo command).
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
Most people do not even upgrade to the latest
release, let alone dive in and test out alpha code.
Thanks,
Matt
Just a brief response on two points (lest the 'insiders' think
there are no sympathetic outsiders...).
On Wed, Jun 15, 2005 at 01:09:27PM -0400
I've attached our current LICENSE file for reference. It may change
slightly in the future to simplify the licensing because there are
several different copyright holders.
Hope this helps clarify even more issues! :-)
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
LICENSE
Description: Binary data
ons. Throwing a random
compiler (or worse, the primitive size-changing Fortran switches)
into a compiler wrapper's path is just asking for trouble.
We need good config info to diagnose this kind of user idiocy
efficiently.
Yep -- that's exactly the purpose of ompi_info (we've been requesting
the full output of laminfo along with bug reports for the past few
years, and it's been incredibly helpful). Your point about adding the
absolute compiler path is a good step in the right direction for this
functionality.
Any other suggestions?
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
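One concrete suggestion along those lines (a small sketch using only the ompi_info options already mentioned in this thread; the output filenames are placeholders): capture the full dump when filing a report, e.g.

  # Human-readable dump of all components and parameters
  ompi_info -all > ompi_info.txt

  # Script-friendly version of the same information
  ompi_info -all -parsable > ompi_info_parsable.txt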
tc. There are more esoteric configure flags that allow
building some components as DSOs and others statically linked into
libmpi, but most people want entirely one way or the other, so I won't
provide the [uninteresting] details here.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
one's cluster/development systems seem
to be "one off" from standard Linux distros. Do you want a binary
RPM(s)? If so, for which distros? (this is one area where vendors
tend to have dramatically different views than academics/researchers)
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
nt doc location/appendix
as soon as possible if it isn't there already.
Details like this matter a lot to a few of us,
and many of us haven't completely drunk the 3000 gallons of twisted
logic that are the autotool conventions.
thanks,
ben
On Thu, Jun 16, 2005 at 08:44:48PM -0400, Jeff Squ
over the MPI itself). You might want to google around and see
what you come up with.
Subject to a little bit rot, they may actually "just work" if they used
the wrapper compilers to compile themselves, etc...?
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
ER*4 would be a poor choice for MPI opaque objects in calling on
some MPI implementations.
As long as your code can handle the transfer from your type back and
forth to an INTEGER, you should be ok.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
loopback)
and "mvapi" BTLs, and no others.
Try this and see if you get better results.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
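For example (a sketch; the process count and a.out are placeholders, and mvapi assumes InfiniBand hardware using the mvapi stack), restricting the run to exactly those two BTLs looks like:

  # Use only the mvapi BTL plus "self" for send-to-self traffic
  mpirun --mca btl self,mvapi -np 4 ./a.out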
"), but it's a
goal to have a single, top-level --verbose switch that is effectively
an alias for most/all of them.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
ry to run PMB.
Joachim
--
Joachim Worringen - NEC C&C research lab St.Augustin
fon +49-2241-9252.20 - fax .99 - http://www.ccrl-nece.de
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
ritten with MPI-2 I/O functions. Is this a file that
Overflow itself generates?
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
time = 0.010952
Am I specifying this wrong?
I'll take a look.
Thanks,
Tim
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
_ALLOC_MEM / MPI_FREE_MEM (don't quote me on that, though).
But then again, there doesn't seem to be a good reason to say "out of
memory" when in MPI_FREE_MEM. :-) So if we could replicate the
problem, that would probably be the most helpful -- how would we obtain
this sof
e
./buildpackage.sh: line 259: /disk.out: Permission denied
*** Failed building pax file. Aborting!
*** Check /disk.out for information
The output file disk.out apparently does not exist; I could not find
any file called disk.out on my machine.
-Ken
tek8169 ethernet cards. Can anyone tell me why the
performance with Open MPI is so low compared to mpich2-1.02p1?
There should clearly not be such a wide disparity in performance here;
we don't see this kind of difference in our own internal testing.
Can you send the output of "ompi_i
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
each process to TotalView, which then
launches its own debugger processes on those nodes and attaches to the
processes.
You probably get a "stopped" message when you try to bg orterun because
the shell thinks that it is waiting for input from stdin, because we
didn't close it.
Does that help?
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
> the shell thinks that it is waiting for input from stdin, because we
> > didn't close it.
> Actually this shouldn't matter. Many programs don't close stdin but
> nothing prevents them from running in background until they try to
> read input. The same "Hello world" application runs well with MPICH
> "mpirun -np 3 a.out &"
>
> Best regards,
> Konstantin.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
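A common shell-level workaround for the "stopped" behavior described above (a sketch, not specific to Open MPI: it just keeps the backgrounded job from ever waiting on the terminal for stdin):

  # Detach stdin so the shell never suspends the job on a terminal read
  mpirun -np 3 ./a.out < /dev/null &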
at the very end of
MPI_INIT, in ompi/runtime/ompi_mpi_init.c).
> > if you're actually integrated in as a component, then you could get the
> > information directly (i.e., via API)...? The possibilities here are
> > open.
> This also sounds interesting.
This is t
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
erly?
Thanks,
Hugh
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
hat needs to be
fixed before 1.0 (it's not a problem with the BTL design, since IB and
Myrinet performance is quite good -- just a problem/bug in the TCP
implementation of the BTL). That much performance degradation is
clearly unacceptable.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
kes, but unfortunately are within the
central innards of the whole implementation, making it look like the
entire implementation was wonky).
Hopefully now you'll be able to get a bit further in the tests...?
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
ory leaks using
Valgrind (or similar)?
Yes. Unfortunately, we've still got some work to do there, too. :-\
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
--
{+} Jeff Squyres
{+} The Open MPI
On Oct 28, 2005, at 3:08 PM, Jeff Squyres wrote:
1. I'm concerned about the MPI_Reduce error -- that one shouldn't be
happening at all. We have table lookups for the MPI_Op/MPI_Datatype
combinations that are supposed to work; the fact that you're getting
this error means that
"MPI Tuning:TCP" section:
http://www.open-mpi.org/faq/?category=tcp
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
their problems, etc.
So -- bring it on!
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
other? (I'm guessing that you can because OMPI was
able to start processes) Can you ping one machine from the other?
Most importantly -- can you open arbitrary TCP ports between the two
machines? (i.e., not just well-known ports like 22 [ssh], etc.)
--
{+} Jeff Squyres
{+} The Open M
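A quick way to test the arbitrary-port question (a sketch assuming netcat is installed on both machines; port 5000 and the host name are arbitrary placeholders):

  # On machine A: listen on an unprivileged port
  #   (some netcat variants need "nc -l -p 5000" instead)
  nc -l 5000

  # On machine B: try to connect to machine A on that port
  nc machineA 5000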
t to be sure)
4. Can you try this again with the latest v1.0 snapshot?
(http://www.open-mpi.org/nightly/v1.0/)
Thanks!
On Oct 27, 2005, at 10:19 AM, Jeff Squyres wrote:
On Oct 19, 2005, at 12:04 AM, Allan Menezes wrote:
We've done linpack runs recently w/ Infiniband, which result in
Mike --
We've been unable to reproduce this problem, but Tim just noticed
that we had a patch on the trunk from several days ago that we forgot
to apply to the v1.0 branch (Tim just applied it now).
Could you give the nightly v1.0 tarball a whirl tomorrow morning? It
should contain the p
there (dig through their list archives; this kind of stuff is covered
frequently).
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
no runaway processes still executing.
Any ideas why this is happening??
Bernie Borenstein
The Boeing Company
--
{+} Jeff Squyres
{+} The Open MPI Project
ay (see
http://sc05.supercomputing.org/schedule/event_detail.php?evid=5240).
Please feel free to stop by and join in the discussion.
We're also giving several short talks about Open MPI at the Indiana
University booth:
"Introduction and Overview of Open MPI"
Jeff Squyres (Indiana Univ
"Half of what I say is meaningless; but I say it so that the other
half may reach you"
Kahlil Gibran
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
wrote:
Thanks. Everything builds OK now. This was after pulling the
version from yesterday (1.0rc5r8038) and adding your file. At
least some of the PETSc tests work OK now.
Thanks,
Charles
On Nov 8, 2005, at 3:03 PM, users-requ...@open-mpi.org wrote:
Message: 3
Date: Tue, 8 Nov 2005 15:03:24 -
caldomain:20659] sess_dir_finalize: top session dir not
empty -
leaving
[humphrey@zelda01 humphrey]$
end non-hanging invocation --
Any thoughts?
-- Marty
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Jeff Squyres
Se
u
Research Assistant
School of Computer Science & Software Engineering
Monash University, Caulfield Campus
Ph: 61 3 9903 1964
--
{+} Jeff Squyres
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
On Nov 10, 2005, at 9:05 AM, Clement Chu wrote:
I have tried the latest version (rc5 8053), but the error is still
here.
Jeff Squyres wrote:
We've actually made quite a few bug fixes since RC4 (RC5 is not
available yet). Would you mind trying with a nightly snapshot
tarball?
(there were
n bindings:
http://www.open-mpi.org/papers/euro-pvmmpi-2005-fortran
On Nov 10, 2005, at 8:17 AM, Jeff Squyres wrote:
Great Leaping Lizards, Batman!
Unbelievably, the MPI_Reduce interfaces were left out. I'm going to
do a complete F90 audit right now to ensure that no other interfaces
3 0x08049b4e in main (argc=4, argv=0xbf983d54) at main.c:13
(gdb)
Jeff Squyres wrote:
I'm sorry -- I wasn't entirely clear:
1. Are you using a 1.0 nightly tarball or a 1.1 nightly tarball? We
have made a bunch of fixes to the 1.1 tree (i.e., the Subversion
trunk), but have not fully
(MCA v1.0, API v1.0, Component v1.0)
MCA sds: seed (MCA v1.0, API v1.0, Component v1.0)
MCA sds: singleton (MCA v1.0, API v1.0, Component v1.0)
MCA sds: slurm (MCA v1.0, API v1.0, Component v1.0)
[clement@kfc TestMPI]$
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
At long last, 1.0rc5 is available for download. It fixes all known
issues reported here on the mailing list. We still have a few minor
issues to work out, but things appear to generally be working now.
Please try to break it:
http://www.open-mpi.org/software/v1.0/
--
{+} Jeff
(MPI_Init( &argc, &argv ));
(gdb) up
#6 0x08049362 in MAIN__ () at Halo.f:19
19      CALL MPI_INIT(MPIERR)
Current language: auto; currently fortran
(gdb)
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Jeff Squyres
Sent: Wedne
omplete once... somehow.)
HPL and HPCC also would exit, producing the same errors.
If there's anything else I may have left out, I'll see what I can do.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
ytes. It is not homogeneous (BSquared) but a good test setup.
If you have any advice, please tell me and I could try it out.
Thank you and good luck!
Allan
On Oct 27, 2005, at 10:19 AM, Jeff Squyres wrote:
> On Oct 19, 2005, at 12:04 AM, Allan Menezes wrote:
>
>
>> We
One other thing I noticed... You specify -mca btl openib. Try
specifying -mca btl openib,self. The self component is used for
"send to self" operations. This could be the cause of your failures.
Brian
On Nov 13, 2005, at 3:02 PM, Jeff Squyres wrote:
Troy --
Were you perchance
/v1.0/
Thanks!
On Nov 13, 2005, at 7:53 PM, Jeff Squyres wrote:
I can't believe I missed that, sorry. :-(
None of the btl's are capable of doing loopback communication except
"self." Hence, you really can't run "--mca btl foo" if your app ever
sends to itself
mpi-1.0rc6/ib/etc --disable-shared
--enable-static --with-bproc
--with-mvapi=/opt/IB/ibgd-1.8.0/driver/infinihost
Any advice appreciated!
Daryl
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
sues
are pretty much the same for Open MPI.
But it's a good test bed and I have no problems installing Oscar 4.2
on it.
See my later post Hpl and TCP today, where I tried ob1 without mca pml
teg and so on and got good performance with 15 nodes and Open MPI
rc6.
Thank you very much
rrors.
TEG is the prior generation point-to-point PML component; it uses the
PTL components. OB1 is the next generation point-to-point component;
it uses BTL components.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
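To make the distinction concrete (a sketch; a.out and the process count are placeholders), the PML can be selected explicitly on the mpirun command line:

  # Use the newer ob1 PML, which drives the BTL components
  mpirun --mca pml ob1 -np 2 ./a.out

  # Or force the older teg PML, which drives the PTL components
  mpirun --mca pml teg -np 2 ./a.out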
#4 0x00d59ab8 in PMPI_Init (argc=0xbf993bf0, argv=0xbf993bf4) at
pinit.c:71
#5 0x08048904 in main (argc=1, argv=0xbf993c74) at cpi.c:20
(gdb)
I attached the mpi program. I do hope you can help me. Many thanks.
Clement
Jeff Squyres wrote:
One minor thing that I notice in your ompi_info
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
now simply a synonym
for --debug (i.e., it won't go away because of the precedent set in the
1.0.x series).
On Oct 20, 2005, at 4:26 PM, Jeff Squyres wrote:
You and Chris G. raise a good point -- another parallel debugger vendor
has contacted me about the same issue (their debugger doe
behave properly, failing over to a usable interface and continuing
execution.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
005, at 11:42 AM, Anthony Chan wrote:
Thanks for fixing this. Will there be a patched release of Open MPI
that will contain this fix anytime soon? (In the meantime, I could
do an anonymous read on the svn repository.)
A.Chan
On Tue, 22 Nov 2005, Jeff Squyres wrote:
Whoops! We forgot to instan
MCA btl: parameter "btl_tcp_min_send_size" (current value: "65536")
MCA btl: parameter "btl_tcp_max_send_size" (current value: "131072")
MCA btl: parameter "btl_tcp_min_rdma_size" (current value: "1310
This
is on the to-do list, but it didn't happen for 1.0.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
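Any of those MCA parameters can also be overridden per run (a sketch; the values shown simply repeat the defaults listed above, and a.out is a placeholder):

  # Override the TCP BTL send-size parameters for a single run
  mpirun --mca btl_tcp_min_send_size 65536 \
         --mca btl_tcp_max_send_size 131072 -np 2 ./a.out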
.1 release candidates:
http://www.open-mpi.org/software/v1.0/
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
then *both* static and
shared libraries are built. If both are built, all the OMPI tools
(like orted) will link against the dynamic libraries -- this appears to
be a libtool default (i.e., our Makefile.am's link against
/libmpi.la and friends).
--
{+} Jeff Squyres
{+} The Open M
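In configure terms, the two cases being contrasted are roughly (a sketch; the installation prefix is a placeholder):

  # Static-only build: the tools have no choice but to link the static libmpi
  ./configure --prefix=/opt/openmpi --enable-static --disable-shared

  # Default build: shared libraries are produced and the tools link dynamically
  ./configure --prefix=/opt/openmpi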
er warning activated) and this is really annoying.
Excellent catch. We didn't explicitly test for this case with -Wall
for compiling C++ codes. I'll put this in tonight as well (be sure to
let me know if I miss any).
Many thanks!
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
with this off-list to get into the
nitty-gritty of the OVERFLOW code and see what's going on. We'll
post back with the final resolution.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
5 17:16 /dev/infiniband/uverbs0
Daryl
--
* Correspondence *
up to the hard limit, so all the user could do with ulimit is
move the per-user limit back down.
brian
On Dec 6, 2005, at 3:35 PM, Jeff Squyres wrote:
With Tim's response to this -- I'm curious (so that we get correct
information on the FAQ) -- is the /etc/security/limits.conf method
[] array_of_mpi_info;
+
return newcomm;
}
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
Yes, it is. It is from mpirun telling you that 33 processes -- in
addition to the error message that it must have shown above --
aborted. So I'm guessing that 34 total processes aborted.
Are you getting corefiles for these processes? (might need to check
the limit of your coredumpsize)
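Checking and raising the corefile limit is shell-specific (a sketch; "unlimited" is just an example setting, and a system-wide hard limit may still cap it):

  # bash/sh: show, then raise, the core file size limit
  ulimit -c
  ulimit -c unlimited

  # csh/tcsh equivalent
  limit coredumpsize unlimited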
ible for
mpirun to search automatically for libmpi.so &
friends so as to avoid the redundant --prefix to mpirun?
Thanks,
A.Chan
--
{+} Jeff Squyres
{+}
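For reference, the two usual approaches being contrasted in that question (a sketch; /opt/openmpi and a.out are placeholders):

  # Tell mpirun explicitly where Open MPI lives on the remote nodes
  mpirun --prefix /opt/openmpi -np 4 ./a.out

  # Or arrange for remote shells to find it themselves (e.g., in the
  # shell startup files on each node)
  export PATH=/opt/openmpi/bin:$PATH
  export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH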
ctually a lie (there is only 1 processor -- not 4), and can
cause extremely bad performance.
-
Hope that clears up the issue. Sorry about that!
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
may actually be parsed as exactly that, but I wasn't
entirely sure)
Yes, that is what I meant. The change should make things easier for
typical MPI users.
Ok, I've added it to the to-do list for the v1.1 series (we're really
only doing bug fixes to the v1.0 series).
--
{+} Jeff
nclude specific information about this issue:
http://www.open-mpi.org/faq/?category=building#build-rte
Torque plans to ship shared libraries someday (at which point this
issue becomes moot), but the exact timing is currently unknown.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
C
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
Computing
Hsinchu, Taiwan, ROC
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
ot exist in the 1.0.x series.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
another SuSE10/AMD 64 computer. Something must be missing.
Best regards
Jyh-Shyong Ho, Ph.D.
Research Scientist
National Center for High Performance Computing
Hsinchu, Taiwan, ROC
Jeff Squyres wrote:
What concerns me, though, is that Open MPI shouldn't have tried to
compile support for
make[1]: Leaving directory `/cygdrive/c/home/devbin/obj/opal'
make: *** [all-recursive] Error 1
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
ry
makefile is empty. I've enclosed the log and screen output.
Thanks,
Kraig
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/