MX MTL or BTL? Can you send a
small program that reproduces this abort?
Scott
On Jun 11, 2009, at 12:25 PM, Brian Barrett wrote:
Neither the CM PML or the MX MTL has been looked at for thread
safety. There's not much code to cause problems in the CM PML. The
MX MTL would likely need some work to ensure the restrictions Scott
mentioned are met (currently, there's no such guarantee in the MX MTL).
Brian
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
Unfortunately I'm totally swamped on another
project (and trying to finish my thesis), so it's unlikely I'll be
able to look at it for a while.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
Sorry I haven't jumped in this thread earlier -- I've been a bit behind.
The multi-lib support worked at one time, and I can't think of why it
would have changed. The one condition is that libdir, includedir,
etc. *MUST* be specified relative to $prefix for it to work. It looks
like you w
explicitly link in the extra library. Hopefully, this will resolve
some of these headaches.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
stuff the MPI version does, and we haven't tracked down the large memory grabs.
Could it be that this vmem is being grabbed by the OpenMPI memory
manager rather than directly by the app?
Ciao
Terry
Best bets are either to
increase the max stack size or (more portably) just allocate
everything on the heap with malloc/new.
Hope this helps,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
a compiler's own memory management code.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
e() const in ccQqJJlF.o
MPI::Op::~Op() in ccQqJJlF.o
MPI::Op::~Op() in ccQqJJlF.o
"MPI::FinalizeIntercepts()", referenced from:
MPI::Finalize() in ccQqJJlF.o
"MPI::COMM_WORLD", referenced from:
__ZN3MPI10COMM_WORLDE$non_lazy_ptr in ccQqJJlF.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
configure, and what was the full output of
configure?
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
some corner case. The
first question that needs to be asked is for the AIX / Power PC
machine you're running on, what is the right answer (as an IBM
employee, you're certainly more qualified to answer that than I am...).
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Jan 1, 2008, at 12:47 AM, Adam C Powell IV wrote:
On Mon, 2007-12-31 at 20:01 -0700, Brian Barrett wrote:
Yeah, this is a complicated example, mostly because HDF5 should
really be covering this problem for you. I think your only option at
that point would be to use the #define to not
properly protect their code
from C++...
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
You should not include mpi.h from an extern "C"
block. It will fail, as you've noted. The proper solution is to not
be in an extern "C" block when including mpi.h.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
,
but
could you send all the compile/failure information?
http://www.open-mpi.org/community/help/
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Oct 16, 2007, at 11:56 AM, Jeff Squyres wrote:
On Oct 16, 2007, at 11:20 AM, Brian Granger wrote:
Wow, that is quite a study of the different options. I will spend
some time looking over things to better understand the (complex)
situation. I will also talk with Lisandro Dalcin about what
On Oct 10, 2007, at 1:27 PM, Dirk Eddelbuettel wrote:
| Does this happen for all MPI programs (potentially only those that
| use the MPI-2 one-sided stuff), or just your R environment?
This is the likely winner.
It seems indeed due to R's Rmpi package. Running a simple mpitest.c
shows no
errors.
On Oct 5, 2007, at 8:48 PM, Dirk Eddelbuettel wrote:
With the (Debian package of the) current 1.2.4 release, I am seeing
a lot of
mca: base: component_find: unable to open osc pt2pt: file not
found (ignored)
that I'd like to suppress.
For these Debian packages, we added a (commented-out
On Sep 29, 2007, at 5:15 PM, James Conway wrote:
What I notice here is that despite my specification of the Intel
compilers on the configure command line (including the correct c++
icpc compiler!) the libtool command that fails seems to be using gcc
(... --mode=link gcc ...) on the Xgrid sources
h from SVN, you can not use
recent CVS copies of Libtool, you'll have to use the same version
specified here:
http://www.open-mpi.org/svn/building.php
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Sep 28, 2007, at 4:56 AM, Massimo Cafaro wrote:
Dear all,
when I try to compile my MPI code on 64 bits intel Mac OS X the
build fails since the Open MPI library has been compiled using 32
bits. Can you please provide in the next version the ability at
configure time to choose between
On Sep 25, 2007, at 4:25 AM, Rayne wrote:
Hi all, I'm using the SGE system on my school network,
and would like to know if the errors I received below
means there's something wrong with my MPI_Recv
function.
[0,1,3][btl_tcp_frag.c:202:mca_btl_tcp_frag_recv]
mca_btl_tcp_frag_recv: readv failed w
On Sep 25, 2007, at 1:37 PM, Richard Graham wrote:
Josh Hursey did the port of Open MPI to CNL. Here is the config
line I have used to build
on the Cray XT4:
./configure CC=/opt/xt-pe/default/bin/snos64/linux-pgcc CXX=/opt/xt-
pe/default/bin/snos64/linux-pgCC F77=/opt/xt-pe/default/bin/sno
On Sep 10, 2007, at 1:35 PM, Lev Givon wrote:
When launching an MPI program with mpirun on an xgrid cluster, is
there a way to cause the program being run to be temporarily copied to
the compute nodes in the cluster when executed (i.e., similar to
what the
xgrid command line tool does)? Or is
On Sep 9, 2007, at 10:28 AM, Foster, John T wrote:
I'm having trouble configuring Open-MPI 1.2.4 with the Intel C++
Compiler v. 10. I have Mac OS X 10.4.10. I have succesfully
configured and built OMPI with the gcc compilers and a combination
of gcc/ifort. When I try to configure with icc
On Aug 28, 2007, at 10:59 AM, Lev Givon wrote:
Received from Brian Barrett on Tue, Aug 28, 2007 at 12:22:29PM EDT:
On Aug 27, 2007, at 3:14 PM, Lev Givon wrote:
I have OpenMPI 1.2.3 installed on an XGrid cluster and a separate
Mac
client that I am using to submit jobs to the head
On Aug 27, 2007, at 3:14 PM, Lev Givon wrote:
I have OpenMPI 1.2.3 installed on an XGrid cluster and a separate Mac
client that I am using to submit jobs to the head (controller) node of
the cluster. The cluster's compute nodes are all connected to the head
node via a private network and are not
On Aug 24, 2007, at 10:57 AM, Marwan Darwish wrote:
I keep on getting the following link error when compiling lam-mpi
on a macosx (in the release mode)
would moving to open-mpi resolve such issues, anybody with
experience in this
Moving to Open MPI will work around this issue. Another opti
On Aug 23, 2007, at 4:33 AM, Bernd Schubert wrote:
I need to compile a benchmarking program and absolutely so far do
not have
any experience with any MPI.
However, this looks like a general open-mpi problem, doesn't it?
bschubert@lanczos MPI_IO> make
cp ../globals.f90 ./; mpif90 -O2 -c ../glo
On Aug 22, 2007, at 2:35 PM, Higor de Padua Vieira Neto wrote:
At the end of the output file, just show this:
" (...lot of output ...)
config.status: creating opal/include/opal_config.h
config.status: creating orte/include/orte_config.h
config.status: orte/include/orte_config.h is unchanged
conf
On Aug 21, 2007, at 10:52 PM, Lev Givon wrote:
(Running ompi_info after installing the build confirms the absence of
said components). My concern, unsurprisingly, is motivated by a desire
to use OpenMPI on an xgrid cluster (i.e., not with rsh/ssh); unless I
am misconstruing the above observation
On Aug 21, 2007, at 3:32 PM, Lev Givon wrote:
configure: WARNING: *** Shared libraries have been disabled (--
disable-shared)
configure: WARNING: *** Building MCA components as DSOs
automatically disabled
checking which components should be static... none
checking for projects containing MCA
On Aug 2, 2007, at 4:22 PM, Glenn Carver wrote:
Hopefully an easy question to answer... is it possible to get at the
values of mca parameters whilst a program is running? What I had in
mind was either an open-mpi function to call which would print the
current values of mca parameters or a func
On Jul 26, 2007, at 7:43 PM, Mathew Binkley wrote:
../../libtool: line 460: CDPATH: command not found
libtool: Version mismatch error. This is libtool 2.1a, but the
libtool: definition of this LT_INIT comes from an older release.
libtool: You should recreate aclocal.m4 with macros from libtool
On Jul 19, 2007, at 3:24 PM, Moreland, Kenneth wrote:
I've run into a problem with the File I/O with openmpi version 1.2.3.
It is not possible to call MPI_File_set_view with a datatype created
from a subarray. Instead of letting me set a view of this type, it
gives an invalid datatype error. I
c++ --program-transform-name=/^
[cg][^.-]*$/s/$/-4.0/ --with-gxx-include-dir=/include/c++/4.0.0 --
with-slibdir=/usr/lib --build=powerpc-apple-darwin8 --with-
arch=nocona --with-tune=generic --program-prefix= --host=i686-apple-
darwin8 --target=i686-apple-darwin8
Thread model: posix
gcc versio
I wouldn't worry about it. 1.2.3 has no ROMIO fixes over 1.2.2.
Brian
On Jul 16, 2007, at 9:42 AM, jody wrote:
Brian,
I am using OpenMPI 1.2.2, so I am lagging a bit behind.
Should I update to 1.2.3 and do the test again?
Thanks for the info
Jody
On 7/16/07, Brian Barrett wrote:
Jody -
I usually update the ROMIO package before each major release (1.0,
1.1, 1.2, etc.) and then only within a major release series when a
bug is found that requires an update. This seems to be one of those
times ;). Just to make sure we're all on the same page, which
version of Open
On Jul 15, 2007, at 10:05 PM, Isaac Huang wrote:
Hello, I read from the FAQ that current Open MPI releases don't
support end-to-end data reliability. But I still have some confusion
that can't be resolved by googling or reading the FAQ:
1. I read from "MPI - The Complete Reference" that "MPI prov
: /usr/local
Configured architecture: i386-apple-darwin8.10.1
Hi Brian,
1.2.3 downloaded and built from source.
Tim
On 12/07/2007, at 12:50 AM, Brian Barrett wrote:
Which version of Open MPI are you using?
Thanks,
Brian
On Jul 11, 2007, at 3:32 AM, Tim Cornwell wrote:
I have a problem running ope
Which version of Open MPI are you using?
Thanks,
Brian
On Jul 11, 2007, at 3:32 AM, Tim Cornwell wrote:
I have a problem running openmpi under OS 10.4.10. My program runs
fine under debian x86_64 on an opteron but under OS X on a number
of Mac Book and Mac Book Pros, I get the following
What Ralph said is generally true. If your application completed,
this is nothing to worry about. It means that an error occurred on
the socket between mpirun and some other process. However, combined
with the travor0 errors in the log files, it could mean that your
IPoIB network is acting
On Jul 10, 2007, at 11:40 AM, Scott Atchley wrote:
On Jul 10, 2007, at 1:14 PM, Christopher D. Maestas wrote:
Has anyone seen the following message with Open MPI:
---
warning:regcache incompatible with malloc
---
---
We don't see this message with mpich-mx-1.2.7..4
MX has an internal reg
On Jul 4, 2007, at 8:21 PM, Graham Jenkins wrote:
I'm using the openmpi-1.1.1-5.el5.x86_64 RPM on a Scientific Linux 5
cluster, with no installed HCAs. And a simple MPI job submitted to
that
cluster runs OK .. except that it issues messages for each node
like the
one shown below. Is there
On Jun 7, 2007, at 9:04 PM, Code Master wrote:
function `_int_malloc':
: multiple definition of `_int_malloc'
/usr/lib/libopen-pal.a(lt1-malloc.o)(.text+0x18a0):openmpi-1.2.2/
opal/mca/memory/ptmalloc2/malloc.c:3954: first defined here
/usr/bin/ld: Warning: size of symbol `_int_malloc' changed fr
On Jun 11, 2007, at 9:27 AM, Brock Palen wrote:
With openmpi-1.2.0
i ran a: ompi_info --param btl tcp
and i see reference to:
MCA btl: parameter "btl_tcp_min_rdma_size" (current value: "131072")
MCA btl: parameter "btl_tcp_max_rdma_size" (current value:
"2147483647")
Can TCP support RDMA
Or tell Open MPI not to build torque support, which can be done at
configure time with the --without-tm option.
Open MPI tries to build support for whatever it finds in the default
search paths, plus whatever things you specify the location of. Most
of the time, this is what the user wants
Hi Rich -
All the releases back to the 0.9 pre-release included a #define of
OPEN_MPI to 1 in mpi.h, so that would be a good way to find out if
you are using Open MPI or not.
Hope this helps,
Brian
On Jun 5, 2007, at 1:36 PM, Lie-Quan Lee wrote:
I was thinking of this way. Is macro OPEN
On Jun 1, 2007, at 12:15 PM, Bert Wesarg wrote:
Hello,
is the 'EGREP' a typo in the first hunk of r14829:
https://svn.open-mpi.org/trac/ompi/changeset/14829/trunk/config/
cxx_find_template_repository.m4
Gah! Yes, it is. Should be $GREP. I'll fix this evening.
Thanks,
Brian
Bill -
This is a known issue in all released versions of Open MPI. I have a
patch that hopefully will fix this issue in 1.2.3. It's currently
waiting on people in the Open MPI team to verify I didn't do
something stupid.
Brian
On May 29, 2007, at 9:59 PM, Bill Saphir wrote:
George,
On May 29, 2007, at 12:25 PM, smai...@ksu.edu wrote:
I am doing a research on parallel computing on shared memory with
NUMA architecture. The system is a 4 node AMD opteron with each node
being a dual-core. I am testing an OpenMPI program with MPI-nodes <=
MAX cores available on system (in
On May 22, 2007, at 7:52 PM, Tom Clune wrote:
For example, if it is ppp0, try:
mpirun -np 1 -mca oob_tcp_exclude ppp0 uptime
This seems to at least produce a bit of output before hanging:
LM000953070:~ tlclune$ mpirun -np 1 -mca oob_tcp_exclude ppp0 uptime
[153.sub-70-211-6.myvzw.com:0756
On May 21, 2007, at 7:40 PM, Tom Clune wrote:
Executive summary: mpirun hangs when laptop is connected via
cellular modem.
Longer description: Under ordinary circumstances mpirun behaves as
expected on my OS X (Intel-duo) laptop. I only want to be using
the shared-memory mechanism - i.e
On May 13, 2007, at 6:23 AM, Bert Wesarg wrote:
Even better: is there a patch available to fix this in the 1.2.1
tarball, so that
I can set the full path again with CC?
The patch is quite trivial, but requires a rebuild of the build
system
(autoheader, autoconf, automake,...)
see here:
htt
I fixed the OOB. I also mucked some things up with it interface wise
that I need to undo :). Anyway, I'll have a look at fixing up the
TCP component in the next day or two.
Brian
On May 10, 2007, at 6:07 PM, Jeff Squyres wrote:
Brian --
Didn't you add something to fix exactly this probl
On May 14, 2007, at 10:21 AM, Nym wrote:
I am trying to use MPI_TYPE_STRUCT in a 64 bit Fortran 90 program. I'm
using the Intel Fortran Compiler 9.1.040 (and C/C++ compilers
9.1.045).
If I try to call MPI_TYPE_STRUCT with the array of displacements that
are of type INTEGER(KIND=MPI_ADDRESS_KIND
This was a regression in Open MPI 1.2.1. We improperly handle the
situation where CC has a path in it. We will have this fixed in Open
MPI 1.2.2. For now, your options are to use Open MPI 1.2 or specify
a $CC without a path, such as CC=icc, and make sure $PATH is set
properly.
Brian
Thanks for the bug report. I'm able to replicate your problem, and
it will be fixed in the 1.2.2 release.
Brian
On May 7, 2007, at 6:10 AM, livelfs wrote:
Hi all
I have observed a regression between 1.2 and 1.2.1
if CC is assigned an absolute path (i.e. export
CC=/opt/gcc/gcc-3.4.4/bin/gc
Yup, it does. There's nothing in the standard that says it isn't
allowed to. Given the number of system/libc calls involved in doing
communication, pretty much every MPI function is going to change the
value of errno. If you expect otherwise, I'd modify your
application. Most cluster-ba
That is odd... Alpha Linux isn't one of our supported platforms, so
it doesn't get tested before release unless a user happens to try
it. Can you send the information requested here:
http://www.open-mpi.org/community/help/
That should help us figure out what happened.
Thanks,
Brian
On Apr 27, 2007, at 6:29 AM, Götz Waschk wrote:
I'm testing my new cluster installation with the hpcc benchmark and
openmpi 1.2.1 on RHEL5 32 bit. I have some trouble with using a
threaded BLAS implementation. I have tried ATLAS 3.7.30 compiled with
pthread support. It crashes as reported here:
That's very odd. The usual cause for this is /tmp being unwritable
by the user or full. Can you check to see if either of those
conditions are true?
Thanks,
Brian
On Apr 13, 2007, at 2:44 AM, Christine Kreuzer wrote:
Hi,
I run openmpi on a AMD Opteron with two dualcore processors an S
On Apr 12, 2007, at 3:45 AM, Bas van der Vlies wrote:
Jeff Squyres wrote:
On Apr 11, 2007, at 8:08 AM, Bas van der Vlies wrote:
The OMPI_CHECK_PACKAGE macro is a rather nasty macro that tries
to reduce the replication of checking for a header then a
library, then setting CFLAGS, LDFLAGS,
On Mar 30, 2007, at 4:23 PM, rohit_si...@logitech.com wrote:
I'm somewhat new to OpenMPI, but I'm currently evaluating it as a
communications mechanism between Windows and Unix servers.
I noticed that under your FAQs (
http://www.open-mpi.org/faq/?category=supported-systems), it says:
Th
On Apr 9, 2007, at 12:36 PM, Brian Barrett wrote:
On Apr 6, 2007, at 7:22 AM, Werner Van Geit wrote:
In our lab we are installing OpenMPI onto our Apple cluster
computer. The cluster contains a couple of PowerPC G5 nodes and
the new Intel Xeon Xserves, all with a clean install of Mac OS X
On Apr 6, 2007, at 7:22 AM, Werner Van Geit wrote:
In our lab we are installing OpenMPI onto our Apple cluster
computer. The
cluster contains a couple of PowerPC G5 nodes and the new Intel Xeon
Xserves, all with a clean install of Mac OS X Server 10.4.8 , Xcode
2.4.1
and Sun Grid Engine 6 (s
On Apr 7, 2007, at 12:59 AM, Brian Powell wrote:
Greetings,
I turn to the assistance of the OpenMPI wizards. I have compiled
v1.2 using gcc and ifort (see the attached config.log) with a
variety of options. The compilation finishes (side note: I had to
define NM otherwise the configure sc
Once I do
some
more testing, I'll bring things up on IB and see how things are going.
-Mike
Mike Houston wrote:
Brian Barrett wrote:
On Mar 20, 2007, at 3:15 PM, Mike Houston wrote:
If I only do gets/puts, things seem to be working correctly with
version
1.2. However, if I have a
On Mar 25, 2007, at 11:20 AM, Daniele Avitabile wrote:
Hi everybody,
I am trying to install open mpi on a Mac Os XServer, and the make
all command exits with the error
openmpi_install_failed.tar.gz
as you can see from the output files I attached.
Some comments that may be helpful:
1) I a
On Mar 20, 2007, at 3:15 PM, Mike Houston wrote:
If I only do gets/puts, things seem to be working correctly with
version
1.2. However, if I have a posted Irecv on the target node and issue a
MPI_Get against that target, MPI_Test on the posted Irecv causes a
segfault:
Anyone have suggesti
Hi -
Thanks for the bug report. I've fixed the problem in SVN and it will
likely be part of the 1.2.1 release (whenever that happens). In the
mean time, I've attached a patch that should apply to the 1.2 tarball
that will also fix the problem.
The environment variables you want for spec
Sure, we can add a FAQ entry on that :).
At present, configure decides whether Open MPI will be installed on a
case sensitive file-system or not based on what the build file system
does. Which is far from perfect, but covers 99.9% of the cases. You
happen to be the .1%, but we do have an
On Feb 27, 2007, at 3:26 PM, Iannetti, Anthony C. ((GRC-RTB0)) wrote:
Dear Open-MPI:
I am still having problems building OpenMPI 1.2 (now 1.2b4) on
MacOSX 10.4 PPC 64. In a message a while back, you gave me a hack
to override this problem. I believe it was a problem with Libtool,
or
What platform / operating system was this with?
Brian
On Feb 15, 2007, at 3:43 PM, Steven A. DuChene wrote:
I am trying to do some simple fortran MPI examples to verify I have
a good installation
of OpenMPI and I have a distributed program that calculates PI. It
seems to compile
and work fi
It's not exactly friendly that the Debian developer decided to change
the include directory for Torque from $prefix/include to $prefix/
include/torque, but I'm not sure it's "wrong".
Unfortunately, we don't handle that case properly by default. A
workaround that shouldn't give you any probl
ersity
http://www.math.jmu.edu/~martin
phone: (+1) 540-568-5101
fax: (+1) 540-568-6857
"Ever my heart rises as we draw near the mountains.
There is good rock here." -- Gimli, son of Gloin
--
Brian Barrett
Open MPI Team, CCS-1
Los Alamos National Laboratory
then you have MX support and something else is going on. Otherwise, the
issue is building MX support into your copy of Open MPI.
Hope this helps,
Brian
--
Brian Barrett
Open MPI Team, CCS-1
Los Alamos National Laboratory
10:27 -0700, Brian Barrett wrote:
Ah, are you using Open MPI 1.1.x, by chance? The wrapper compilers
need to be able to find a text file in $prefix/share/openmpi/, where
$prefix is the prefix you gave when you configured Open MPI. If that
path is different on two hosts, the wrapper compilers can
anyone have a suggestion how I can
keep this on an NFS share and make it work? Thank you
Mac os x 10.3 cluster
-dan
On Tue, 2006-10-17 at 22:15 -0600, Brian Barrett wrote:
On Oct 17, 2006, at 6:41 PM, Dan Cardin wrote:
Hello all, I have installed openmpi on a small apple panther
cluster.
The in
Open MPI are you using? Also, what is the output of:
mpicc -showme
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Fri, 2006-09-22 at 10:39 +0200, alberto wrote:
> When I execute the simple program:
>
> #include "mpi.h"
>
> int main(int argc, char *argv[])
> {
> MPI_Init(&argc, &argv);
> MPI_Finalize();
> return 1;
> }
>
>
> without Xgrid it succeeds (the execution finishes).
> If I add the envir
On Tue, 2006-09-26 at 14:45 -0400, Brock Palen wrote:
> I have a code that requires that it be compiled (with the pgi
> compilers) with the -i8
>
> From the pgf90 man page:
>
> -i8Treat default INTEGER and LOGICAL variables as eight bytes.
> For operations
>involving int
t in OpenMPI, but it sure is a subtle
> aspect of using it. I will probably document this somewhere in the
> package I am creating.
>
>
> Thanks
>
>
> Brian
> On Sep 6, 2006, at 9:00 AM, Brian Barrett wrote:
trunk. It should eventually be migrated into the
branch for the 1.2 release once we sort out the other Alpha issues.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Sep 8, 2006, at 8:18 PM, Nuno Sucena Almeida wrote:
The other issue is the one described in
http://www.mail-archive.com/debian-bugs-dist@lists.debian.org/
msg229867.html
(...)
gcc -O3 -DNDEBUG -fno-strict-aliasing -pthread -o .libs/opal_wrapper
opal_wrapper.o -Wl,--export-dynamic ../../.
On Wed, 2006-09-06 at 10:40 -0700, Tom Rosmond wrote:
> Brian,
>
> I notice in the OMPI_INFO output the following parameters that seem
> relevant to this problem:
>
> MCA btl: parameter "btl_self_free_list_num" (current
> value: "0")
> MCA btl: parameter "btl_sel
On Mon, 2006-09-04 at 11:01 -0700, Tom Rosmond wrote:
> Attached is some error output from my tests of 1-sided message
> passing, plus my info file. Below are two copies of a simple fortran
> subroutine that mimics mpi_allgatherv using mpi-get calls. The top
> version fails, the bottom runs OK.
on Linux as well to see if I get the same error. One
> important piece of the puzzle is that if I configure openmpi with the
> --disable-dlopen flag, I don't have the problem. I will do some
> further testing on different systems and get back to you.
>
>
> Thanks for
Your example is pretty close to spot on. You want to convert the
Fortran handle (integer) into a C handle (something else). Then use the
C handle to call C functions. The one thing of note is that you should
use the type MPI_Fint instead of int for the type of the Fortran
handles. So your paral
This is quite strange, and we're having some trouble figuring out
exactly why the opening is failing. Do you have a (somewhat?) easy list
of instructions so that I can try to reproduce this?
Thanks,
Brian
On Tue, 2006-08-22 at 20:58 -0600, Brian Granger wrote:
> HI,
>
> I am trying to dynamica
; cause Bad Things to occur if you try to exchange MPI_LONGs between the MPI
> processes, right? (and similar for other datatypes that are different
> sizes)
>
>
> On 8/30/06 9:38 AM, "Brian Barrett" wrote:
>
> > Actually, Jeff is incorrect. As of Open MPI 1.
Actually, Jeff is incorrect. As of Open MPI 1.1, we do support endian
conversion between peers. It has not been as well tested as the rest of
the code base, but it should work. Please let us know if you have any
issues with that mode and we'll work to resolve them.
Brian
On Wed, 2006-08-30 at
compatibility, this usually
isn't the case, so we don't try to be compatible across multiple
Fortran compilers.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Aug 18, 2006, at 3:19 PM, Hugh Merz wrote:
On Fri, 18 Aug 2006, Steven A. DuChene wrote:
I am attempting to build OpenMPI-1.1 on a RHEL4u2 system that has
the standard gfortran install as part of the distro and with a
self installed
recent version of g95 from g95.org but when I use the FC
On Aug 17, 2006, at 4:43 PM, Jonathan Underwood wrote:
Compiling an mpi program with gcc options -pedantic -Wall gives the
following warning:
mpi.h:147: warning: ISO C90 does not support 'long long'
So it seems that the openmpi implementation doesn't conform to C90. Is
this by design, or shoul
On Tue, 2006-08-15 at 14:24 -0700, Tom Rosmond wrote:
> I am continuing to test the MPI-2 features of 1.1, and have run into
> some puzzling behavior. I wrote a simple F90 program to test 'mpi_put'
> and 'mpi_get' on a coordinate transformation problem on a two dual-core
> processor Opteron work
On Mon, 2006-08-14 at 10:57 -0400, Brock Palen wrote:
> We will be evaluating pvfs2 (www.pvfs.org) in the future. Is their
> any special considerations to take to get romio support with openmpi
> with pvfs2 ?
> I have the following from ompi_info
>
> MCA io: romio (MCA v1.0, API v1.0, Compone
On Mon, 2006-07-31 at 13:12 -0400, James McManus wrote:
> I'm trying to compile MPI with pgf90. I use the following configure
> settings:
>
> ./configure --prefix=/usr/local/mpi F90=pgf90 F77=pgf77
>
> However, the compiler is set to gfortran:
>
> *** Fortran 90/95 compiler
> checking for gfort