On Jun 14, 2005, at 4:34 PM, Benjamin Allan wrote:
It would be nice if the c++ compiler wrapper were
installed under mpicxx, mpiCC, and mpic++ instead of
just the latter 2.
Yeah, we can do that, no problem. It won't be in the soon-to-ship
beta, but will be in the 1.0 and the SVN trunk.
A
On Jun 15, 2005, at 12:23 PM, Bogdan Costescu wrote:
On Tue, 14 Jun 2005, Brian Barrett wrote:
It would be nice if the c++ compiler wrapper were
installed under mpicxx, mpiCC, and mpic++ instead of
just the latter 2.
Yeah, we can do that, no problem.
Sorry for the silly question, but is
ue. As for our
future release plans beyond the beta, you might want to take a look
at the mailing list archives from last month.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
mmitted tonight. Should be in
tonight's nightly build.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
omic primitives available for
mips64-unknown-linux-gnu
Can you send config.log to the list? It has oodles of information
that will help figure out what is going on.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Sep 1, 2005, at 8:11 PM, Borenstein, Bernard S wrote:
I tried a recent tarball of open-mpi and it worked well on my 64
bit linux cluster running the Nasa Overflow 1.8ab
cfd code. I’m looking forward to the 1.0 release of the product.
The only strange thing I noticed is that the output
e properly.
Thanks!
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
mips64.diff
Description: Binary data
On Sep 11, 2005, at 8:38 AM, Greg Lindahl wrote:
On Sat, Sep 10, 2005 at 04:45:48PM -0500, Brian Barrett wrote:
I think that this is a fairly easy fix - Irix identifies any MIPS
chip as a mips-* from config.guess, but Linux apparently makes a
distinction between mips and mips64.
That
Gah - shame on me. I let some IRIX-specific stuff slip through.
Lemme see if I can find an IRIX box and clean that up. The problems
you listed below are not MIPS 32 / MIPS 64 issues, but the use of
some nice IRIX-specific macros. By the way, to clarify, the assembly
has been tested on a
't run into any problems.
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Sep 28, 2005, at 3:46 PM, Borenstein, Bernard S wrote:
I posted an issue with the Nasa Overflow 1.8 code and have traced
it further to a program failure in the malloc
areas of the code (data in these areas gets corrupted). Overflow
is mostly fortran, but since it is an old program,
it
linker. Fixing the CFLAGS (it may actually be FFLAGS, but I think
it's the CFLAGS) should fix the problem.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
]: *** [install-recursive] Error 1
make: *** [install-recursive] Error 1
There really isn't enough information here to be helpful. Can you
include the output of all of "make" and "make install"? It's likely
the error occurred much earlier in the build process.
Thanks,
Brian
I've committed a change to fix this problem (and some problems
with the operation of the XGrid starter). Tonight's nightly builds
and 1.0rc3 should have the fixes.
Thanks again,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
C
compiler is found as gcc. Can you try making sure $CC isn't set to
cc in your environment or in the build script and see if that helps?
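For example, a quick check before configuring (the prefix is just a
placeholder):

  echo $CC            # if this prints "cc", that's the culprit
  unset CC            # or: export CC=gcc
  ./configure --prefix=/opt/openmpi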
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
ersion of
IRIX was this on?
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
. As for not
being able to find HPL.dat, I'm not sure why that would be a problem
- are you sure the file exists in the same directory as the xhpl
binary (on all nodes)?
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
y
tarballs that will be available tomorrow morning. Release candidates
and betas will be available at the URL below:
http://www.open-mpi.org/software/
Hope this helps,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
solution for 1.0
will be to remove the UNIQ PML, which will solve the problem (along
with some other problems you're not even aware exist...). Should be
in the next rc.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
the data ready
for transmission.
I'm not aware of any packages that make using the STL and MPI easier,
but it's possible I've missed them.
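For illustration, a minimal sketch of the usual by-hand approach
(the buffer size and two-rank layout are just placeholders; run with
at least two processes):

  #include <mpi.h>
  #include <vector>

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      // std::vector's storage is contiguous, so its buffer can be
      // handed straight to MPI -- no extra packing step needed
      std::vector<double> v(100);
      if (rank == 0) {
          MPI_Send(&v[0], (int) v.size(), MPI_DOUBLE, 1, 0,
                   MPI_COMM_WORLD);
      } else if (rank == 1) {
          MPI_Recv(&v[0], (int) v.size(), MPI_DOUBLE, 0, 0,
                   MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      }

      MPI_Finalize();
      return 0;
  }

Non-contiguous containers (std::list, std::map, etc.) would need an
explicit copy into a contiguous buffer first.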
Brian
--
Brian Barrett
Graduate Student, Open Systems Lab, Indiana University
http://www.osl.iu.edu/~brbarret/
MPI Project
{+} http://www.open-mpi.org/
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
Daryl -
I'm unable to replicate your problem. I was testing on a Fedora Core
3 system with Clustermatic 5. Is it possible that you have a random
DSO from a previous build in your installation path? How are you
running mpirun -- maybe I'm just not hitting the same code path you
are...
.0 due to time constraints. I have added this to the
list of known issues and we will be investigating it as time permits.
For now, my only suggestion is to use LinuxThreads instead of NPTL on
your cluster.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Nov 17, 2005, at 9:20 AM, Brian Barrett wrote:
I'm unable to replicate your problem. I was testing on a Fedora Core
3 system with Clustermatic 5. Is it possible that you have a random
DSO from a previous build in your installation path? How are you
running mpirun -- maybe I'm just not hitting the same code path you
are...
Wl,-rpath,pathB"
will do essentially the same thing and get you past the OMPI_UNIQ
bug. I believe (again, could be wrong) that most compilers will
parse that into the correct number of options to pass to the linker.
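For example, these two settings should end up as the same arguments
once the compiler splits them for the linker:

  --with-wrapper-ldflags="-Wl,-rpath -Wl,pathA -Wl,-rpath -Wl,pathB"
  --with-wrapper-ldflags="-Wl,-rpath,pathA -Wl,-rpath,pathB"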
Hope this helps,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Nov 18, 2005, at 9:37 AM, Brian Barrett wrote:
On Nov 18, 2005, at 2:54 AM, Dries Kimpe wrote:
I have a question about the --with-wrapper-ldflags option;
I need to pass 2 different rpaths to the wrapper compilers,
so I tried
--with-wrapper-ldflags="-Wl,-rpath -Wl,pathA -Wl,-rpath -Wl,pathB"
ca btl_base_exclude mvapi
or
-mca btl ^mvapi
They are essentially equivalent. The first will load the mvapi
component, but never schedule any fragments on it. The second will
just not load the mvapi component. Sometimes we actually anticipate
user requests - not often, but sometimes ;)
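For example (assuming a 4-process job and an executable named
./a.out, both placeholders):

  mpirun -np 4 -mca btl ^mvapi ./a.out
  mpirun -np 4 -mca btl_base_exclude mvapi ./a.out

Either way, traffic flows over the remaining BTLs (e.g. tcp and sm).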
ure, but due to time constraints it will likely not
be in the 1.0.1 release. It should be in the 1.0.2 release, although
I can't give you a time table as to when we will have a 1.0.2 release.
Thanks for the report!
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org
process to select which
packages to install).
Hope this helps,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
s properly?
http://www.open-mpi.org/~brbarret/download/
openmpi-1.1a1r8384.tar.gz
If that works for you, we'll push the change into Open MPI 1.0.1
(it's a very small change).
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
Generally, the user defaults to the soft limit and can use ulimit to
set values up to the hard limit. Tim's suggestion bumps the soft
limit up to the hard limit, so all the user could do with ulimit is
move the per-user limit back down.
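To illustrate with the open file descriptor limit (the same pattern
applies to the other per-process limits):

  ulimit -H -n        # print the hard limit
  ulimit -S -n        # print the soft limit (the effective default)
  ulimit -S -n 4096   # raise the soft limit, up to the hard limit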
Brian
On Dec 6, 2005, at 3:35 PM, Jeff Squyres wrote
coise Roch tried the following version as you suggested
http://www.open-mpi.org/~brbarret/download/openmpi-1.1a1r8384.tar.gz
Things go a little further but the make still fails.
Please find the logs attached.
Pierre.
Brian Barrett wrote:
On Dec 5, 2005, at 4:05 PM, Pierre Valiron wrote:
We do not currently
have a time table for releasing this work, but will announce when we
are ready for users to start testing our work.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
.
I have discovered today the 1.0.1 version on the open-mpi web page
and Francoise Roch tested it. The make goes a little further but
still fails. Please find the logs in attachment.
Good luck !
Pierre.
Brian Barrett wrote:
Thanks for the update. I've fixed the next bug in subv
bout 1MB of overhead that is
not there when components are linked into libmpi.{a,so} directly.
You can enable static libraries for Open MPI (which will cause the
build system to link components directly into libmpi.a) with the
configure options --enable-static --disable-shared.
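For example (the prefix is a placeholder):

  ./configure --prefix=/opt/openmpi --enable-static --disable-shared
  make all install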
Brian
o, if you could include the complete output
of "ompi_info", that would be much appreciated.
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
It's probably possible to use Bonjour / Zero Config for resource
discovery for Open MPI, but it really only helps in resource
discovery -- scheduling and allocating resources would still have to
be done. We do, however, support the use of Apple's XGrid system for
job startup.
Brian
our problem would be
the Torque team releasing their libraries as both shared and static
libraries. It doesn't appear their build system supports this
presently, which is most unfortunate as it prevents us from building
TM support as a DSO...
Hope this helps,
Brian
--
Brian Barrett
path
(and not one from an older version of Open MPI or LAM or MPICH or
something?).
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
t looking.
If you have some test code you could share, I'd love to see it - it
would help in duplicating your results and finding a solution...
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
MPI from the trunk, as it will try to print a stack
trace when errors like the one above occur. But I would start with
trying the gdb method. Of course, if you have TotalView or another
parallel debugger, that would be even easier.
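In case it helps, the usual by-hand recipe (the application path and
PID below are placeholders) is roughly:

  gdb /path/to/your_app 12345    # attach to the stuck rank by PID
  (gdb) bt                       # print the stack trace
  (gdb) detach
  (gdb) quit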
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
be expected to do so, resulting
in very bad things.
Is there a reason that you want to disable the C++ bindings after
installation? They should be absolutely harmless if you aren't using
them. If that isn't the case, then we need to fix whatever is
causing your problems.
B
On Jan 17, 2006, at 3:17 AM, Yves Reymen wrote:
Brian Barrett wrote:
On Jan 16, 2006, at 11:32 AM, Yves Reymen wrote:
Recently openmpi v1.0.1 was installed on our cluster. It contains
all
parameters of ompi_config.h within a #ifndef OMPI_CONFIG_H. I am
wondering how it is possible to give
eful to us, if we could take a look (at least,
on the OMPI build that fails). Again, doing this with a build of
Open MPI that contains debugging symbols would greatly increase the
usefulness to us.
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
le but doesn't work, an error will
result. The default value is 'ssh : rsh', meaning that ssh
will be
used unless it isn't installed, in which case rsh will be used.
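For example, to force rsh even when ssh is present (assuming the
1.0-era parameter name pls_rsh_agent):

  mpirun -mca pls_rsh_agent rsh -np 2 ./a.out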
Please let me know if you have more questions.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
de Open MPI doesn't depend on this value -
the only place it is used is figuring out how far to go with
generating the function declarations for the Fortran 90 module file.
Hope this helps,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
already, but I've attached the
patch that was applied to the v1.0 branch to become part of the next
release.
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
ompi_cxx.diff
Description: Binary data
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
b site, which may take the pressure off of you to generate some docs.
Thanks. I don't think it gets me off the hook with my boss ;), but
the more resources, the better for the Mac community.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
signon authentication. This is something I'm
hoping to have fixed for Open MPI 1.1, if I can find a properly
configured cluster to test on.
Hope this made some sense...
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
fix should be in the nightly builds in the next
couple of days, and will be part of the upcoming 1.0.2 release.
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
plication. If you are using the wrapper compilers,
can you run "mpicc -showme" and send the results to me? If you
aren't using the wrapper compilers, try adding the following to your
link flags:
-Wl,-u,_munmap -Wl,-multiply_defined,suppress
that should do the right magic to m
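For example, linking by hand without the wrappers (paths are
placeholders; the -l list should match what "mpicc -showme" prints
for your install):

  gcc -o app app.o -L/opt/openmpi/lib -lmpi -lorte -lopal \
      -Wl,-u,_munmap -Wl,-multiply_defined,suppress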
ewall to do this
(unfortunately, you can not configure the firewall using the System
Preferences GUI to do this).
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Jun 11, 2007, at 9:27 AM, Brock Palen wrote:
With openmpi-1.2.0
I ran: ompi_info --param btl tcp
and i see reference to:
MCA btl: parameter "btl_tcp_min_rdma_size" (current value: "131072")
MCA btl: parameter "btl_tcp_max_rdma_size" (current value:
"2147483647")
Can TCP support RDMA
On Jun 7, 2007, at 9:04 PM, Code Master wrote:
nction `_int_malloc':
: multiple definition of `_int_malloc'
/usr/lib/libopen-pal.a(lt1-malloc.o)(.text+0x18a0):openmpi-1.2.2/
opal/mca/memory/ptmalloc2/malloc.c:3954: first defined here
/usr/bin/ld: Warning: size of symbol `_int_malloc' changed fr
On Jul 4, 2007, at 8:21 PM, Graham Jenkins wrote:
I'm using the openmpi-1.1.1-5.el5.x86_64 RPM on a Scientific Linux 5
cluster, with no installed HCAs. And a simple MPI job submitted to
that
cluster runs OK .. except that it issues messages for each node
like the
one shown below. Is there
On Jul 10, 2007, at 11:40 AM, Scott Atchley wrote:
On Jul 10, 2007, at 1:14 PM, Christopher D. Maestas wrote:
Has anyone seen the following message with Open MPI:
---
warning:regcache incompatible with malloc
---
---
We don't see this message with mpich-mx-1.2.7..4
MX has an internal reg
What Ralph said is generally true. If your application completed,
this is nothing to worry about. It means that an error occurred on
the socket between mpirun and some other process. However, combined
with the travor0 errors in the log files, it could mean that your
IPoIB network is acting
Which version of Open MPI are you using?
Thanks,
Brian
On Jul 11, 2007, at 3:32 AM, Tim Cornwell wrote:
I have a problem running openmpi under OS 10.4.10. My program runs
fine under debian x86_64 on an opteron but under OS X on a number
of Mac Book and Mac Book Pros, I get the following
: /usr/local
Configured architecture: i386-apple-darwin8.10.1
Hi Brian,
1.2.3 downloaded and built from source.
Tim
On 12/07/2007, at 12:50 AM, Brian Barrett wrote:
Which version of Open MPI are you using?
Thanks,
Brian
On Jul 11, 2007, at 3:32 AM, Tim Cornwell wrote:
I have a problem running ope
On Jul 15, 2007, at 10:05 PM, Isaac Huang wrote:
Hello, I read from the FAQ that current Open MPI releases don't
support end-to-end data reliability. But I still have some confusion
that can't be resolved by googling or reading the FAQ:
1. I read from "MPI - The Complete Reference" that "MPI prov
Jody -
I usually update the ROMIO package before each major release (1.0,
1.1, 1.2, etc.) and then only within a major release series when a
bug is found that requires an update. This seems to be one of those
times ;). Just to make sure we're all on the same page, which
version of Open
I wouldn't worry about it. 1.2.3 has no ROMIO fixes over 1.2.2.
Brian
On Jul 16, 2007, at 9:42 AM, jody wrote:
Brian,
I am using OpenMPI 1.2.2, so I am lagging a bit behind.
Should I update to 1.2.3 and do the test again?
Thanks for the info
Jody
On 7/16/07, Brian Barrett wrote:
c++ --program-transform-name=/^[cg][^.-]*$/s/$/-4.0/
--with-gxx-include-dir=/include/c++/4.0.0 --with-slibdir=/usr/lib
--build=powerpc-apple-darwin8 --with-arch=nocona --with-tune=generic
--program-prefix= --host=i686-apple-darwin8 --target=i686-apple-darwin8
Thread model: posix
gcc versio
On Jul 19, 2007, at 3:24 PM, Moreland, Kenneth wrote:
I've run into a problem with the File I/O with openmpi version 1.2.3.
It is not possible to call MPI_File_set_view with a datatype created
from a subarray. Instead of letting me set a view of this type, it
gives an invalid datatype error. I
On Jul 26, 2007, at 7:43 PM, Mathew Binkley wrote:
../../libtool: line 460: CDPATH: command not found
libtool: Version mismatch error. This is libtool 2.1a, but the
libtool: definition of this LT_INIT comes from an older release.
libtool: You should recreate aclocal.m4 with macros from libtool
On Aug 2, 2007, at 4:22 PM, Glenn Carver wrote:
Hopefully an easy question to answer... is it possible to get at the
values of mca parameters whilst a program is running? What I had in
mind was either an open-mpi function to call which would print the
current values of mca parameters or a func
On Aug 21, 2007, at 3:32 PM, Lev Givon wrote:
configure: WARNING: *** Shared libraries have been disabled (--
disable-shared)
configure: WARNING: *** Building MCA components as DSOs
automatically disabled
checking which components should be static... none
checking for projects containing MCA
On Aug 21, 2007, at 10:52 PM, Lev Givon wrote:
(Running ompi_info after installing the build confirms the absence of
said components). My concern, unsurprisingly, is motivated by a desire
to use OpenMPI on an xgrid cluster (i.e., not with rsh/ssh); unless I
am misconstruing the above observation
On Aug 22, 2007, at 2:35 PM, Higor de Padua Vieira Neto wrote:
At the end of the output file, it just shows this:
" (...lot of output ...)
config.status: creating opal/include/opal_config.h
config.status: creating orte/include/orte_config.h
config.status: orte/include/orte_config.h is unchanged
conf
On Aug 23, 2007, at 4:33 AM, Bernd Schubert wrote:
I need to compile a benchmarking program and so far have absolutely
no experience with any MPI.
However, this looks like a general open-mpi problem, doesn't it?
bschubert@lanczos MPI_IO> make
cp ../globals.f90 ./; mpif90 -O2 -c ../glo
On Aug 24, 2007, at 10:57 AM, Marwan Darwish wrote:
I keep on getting the following link error when compiling lam-mpi
on a macosx (in the release mode)
would moving to open-mpi resolve such issues, anybody with
experience in this
Moving to Open MPI will work around this issue. Another opti
On Aug 27, 2007, at 3:14 PM, Lev Givon wrote:
I have OpenMPI 1.2.3 installed on an XGrid cluster and a separate Mac
client that I am using to submit jobs to the head (controller) node of
the cluster. The cluster's compute nodes are all connected to the head
node via a private network and are not
On Aug 28, 2007, at 10:59 AM, Lev Givon wrote:
Received from Brian Barrett on Tue, Aug 28, 2007 at 12:22:29PM EDT:
On Aug 27, 2007, at 3:14 PM, Lev Givon wrote:
I have OpenMPI 1.2.3 installed on an XGrid cluster and a separate
Mac
client that I am using to submit jobs to the head
On Sep 9, 2007, at 10:28 AM, Foster, John T wrote:
I'm having trouble configuring Open-MPI 1.2.4 with the Intel C++
Compiler v. 10. I have Mac OS X 10.4.10. I have successfully
configured and built OMPI with the gcc compilers and a combination
of gcc/ifort. When I try to configure with icc
On Sep 10, 2007, at 1:35 PM, Lev Givon wrote:
When launching an MPI program with mpirun on an xgrid cluster, is
there a way to cause the program being run to be temporarily copied to
the compute nodes in the cluster when executed (i.e., similar to
what the
xgrid command line tool does)? Or is
On Sep 25, 2007, at 1:37 PM, Richard Graham wrote:
Josh Hursey did the port of Open MPI to CNL. Here is the config
line I have used to build
on the Cray XT4:
./configure CC=/opt/xt-pe/default/bin/snos64/linux-pgcc CXX=/opt/xt-
pe/default/bin/snos64/linux-pgCC F77=/opt/xt-pe/default/bin/sno
On Sep 25, 2007, at 4:25 AM, Rayne wrote:
Hi all, I'm using the SGE system on my school network,
and would like to know if the errors I received below
means there's something wrong with my MPI_Recv
function.
[0,1,3][btl_tcp_frag.c:202:mca_btl_tcp_frag_recv]
mca_btl_tcp_frag_recv: readv failed w
On Sep 28, 2007, at 4:56 AM, Massimo Cafaro wrote:
Dear all,
when I try to compile my MPI code on 64-bit Intel Mac OS X, the
build fails since the Open MPI library has been compiled as 32-bit.
Can you please provide in the next version the ability at
configure time to choose between
h from SVN, you can not use
recent CVS copies of Libtool, you'll have to use the same version
specified here:
http://www.open-mpi.org/svn/building.php
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Sep 29, 2007, at 5:15 PM, James Conway wrote:
What I notice here is that despite my specification of the Intel
compilers on the configure command line (including the correct c++
icpc compiler!) the libtool command that fails seems to be using gcc
(... --mode=link gcc ...) on the Xgrid sources
On Oct 5, 2007, at 8:48 PM, Dirk Eddelbuettel wrote:
With the (Debian package of the) current 1.2.4 release, I am seeing
a lot of
mca: base: component_find: unable to open osc pt2pt: file not
found (ignored)
that I'd like to suppress.
For these Debian packages, we added a (commented-ou
On Oct 10, 2007, at 1:27 PM, Dirk Eddelbuettel wrote:
| Does this happen for all MPI programs (potentially only those that
| use the MPI-2 one-sided stuff), or just your R environment?
This is the likely winner.
It seems indeed due to R's Rmpi package. Running a simple mpitest.c
shows no
erro
On Oct 16, 2007, at 11:56 AM, Jeff Squyres wrote:
On Oct 16, 2007, at 11:20 AM, Brian Granger wrote:
Wow, that is quite a study of the different options. I will spend
some time looking over things to better understand the (complex)
situation. I will also talk with Lisandro Dalcin about what
,
but
could you send all the compile/failure information?
http://www.open-mpi.org/community/help/
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
ld not include mpi.h from an extern "C"
block. It will fail, as you've noted. The proper solution is to not
be in an extern "C" block when including mpi.h.
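A minimal illustration of the fix (the callback is hypothetical):

  // Wrong: mpi.h declares C++ entities and cannot be compiled
  // with C linkage:
  //
  //   extern "C" {
  //   #include <mpi.h>
  //   }

  // Right: include mpi.h at file scope...
  #include <mpi.h>

  // ...and reserve extern "C" for declarations that really need
  // C linkage:
  extern "C" {
      void my_c_callback(void);
  }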
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
properly protect their code
from C++...
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Jan 1, 2008, at 12:47 AM, Adam C Powell IV wrote:
On Mon, 2007-12-31 at 20:01 -0700, Brian Barrett wrote:
Yeah, this is a complicated example, mostly because HDF5 should
really be covering this problem for you. I think your only option at
that point would be to use the #define to not
ome corner case. The
first question that needs to be asked is for the AIX / Power PC
machine you're running on, what is the right answer (as an IBM
employee, you're certainly more qualified to answer that than I am...).
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
configure, and what was the full output of
configure?
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
e() const in ccQqJJlF.o
MPI::Op::~Op() in ccQqJJlF.o
MPI::Op::~Op() in ccQqJJlF.o
"MPI::FinalizeIntercepts()", referenced from:
MPI::Finalize() in ccQqJJlF.o
"MPI::COMM_WORLD", referenced from:
__ZN3MPI10COMM_WORLDE$non_lazy_ptr in ccQqJJlF.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
h
a compiler's own memory management code.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
t bets are either to
increase the max stack size or (more portably) just allocate
everything on the heap with malloc/new.
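A minimal illustration of the difference:

  #include <vector>

  int main() {
      // double big[8 * 1024 * 1024];  // ~64 MB automatic array:
      //                               // can overflow the default
      //                               // stack limit and crash
      std::vector<double> big(8 * 1024 * 1024);  // same ~64 MB, but
                                                 // heap-allocated
      big[0] = 1.0;
      return 0;
  }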
Hope this helps,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
stuff the MPI
version
does, and we haven't tracked down the large memory grabs.
Could it be that this vmem is being grabbed by the OpenMPI memory
manager rather than directly by the app?
Ciao
Terry
licitly link in the extra library. Hopefully, this will resolve
some of these headaches.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
Sorry I haven't jumped in this thread earlier -- I've been a bit behind.
The multi-lib support worked at one time, and I can't think of why it
would have changed. The one condition is that libdir, includedir,
etc. *MUST* be specified relative to $prefix for it to work. It looks
like you w
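For example (paths are placeholders), this respects the rule:

  ./configure --prefix=/opt/openmpi --libdir='${prefix}/lib64'

while an absolute --libdir=/opt/openmpi/lib64 would break the
multi-lib support, even though it names the same directory.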
nfortunately I'm totally swamped on another
project (and trying to finish my thesis), so it's unlikely I'll be
able to look at it for a while.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/