On Sun, 2006-06-11 at 04:26 -0700, imran shaik wrote:
> Hi,
> I sometimes get this error message:
> "2 additional processes aborted, possibly by Open MPI"
>
> Sometimes 2 processes, sometimes even more. Is it due to overload or
> a program error?
>
> Why does Open MPI actually abort a few processes?
On Tue, 2006-06-13 at 10:51 -0700, Ken Mighell wrote:
> On May 6, 2006, Dries Kimpe reported a solution to getting
> pnetcdf to compile correctly with OpenMPI.
> A patch was given for the file
> mca/io/romio/romio/adio/common/flatten.c
> Has this fix been implemented in the nightly series?
Yes, t
foo.sh
chmod +x foo.sh
srun -N 4 -b foo.sh
But you can't submit your application directly without mpirun. This
is a feature we would like to support in the future, but there are
some licensing issues (we would have to link with their GPL'ed
libraries, which wouldn't
On Thu, 2006-06-15 at 13:46 -0700, Anoop Rajendra wrote:
> I'm trying to run a simple pi program compiled using openmpi.
>
> My command line and error message is
>
> [mpiuser@Pebble-anoop ~]$ mpirun -n 2 -hostfile /opt/openmpi/openmpi/
> etc/openmpi-default-hostfile /home/mpiuser/cpi2
> Signal:
functionality (the option exists for platforms that have
broken half-support for GNU libc's stack trace feature, and for users
that don't like us registering a signal handler to do the work).
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
requested on our
"Getting Help" page:
http://www.open-mpi.org/community/help/
Also, it would be useful if you could run mpirun with the '-d'
option, to include more debugging information about why the launch is
failing.
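For example, reusing the command line from the report above:
  mpirun -d -np 2 -hostfile /opt/openmpi/openmpi/etc/openmpi-default-hostfile /home/mpiuser/cpi2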
Brian
--
Brian Barrett
Open MPI developer
On Wed, 2006-06-28 at 09:43 -0400, Patrick Jessee wrote:
> Hello. I've tracked down the source of the previously reported startup
> problem with Openmpi 1.1. On startup, it fails with the messages:
>
> mca_oob_tcp_accept: accept() failed with errno 9.
> :
>
> This didn't happen with 1.0.2.
We plan on
fixing this in the near future, and an error message will be printed
if this situation occurs.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
Thanks for the patch. The XGrid code is OS X only, so we still have
some work to do on Solaris. I'm not sure how this bug lived through
testing. I've applied it to our Subversion source and it will be
part of the Open MPI 1.1.1 release.
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
should fix the Open MPI build.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
define the type of MPI_Datatype. MPICH uses
integers, but Open MPI uses pointers to structures. Instead of using
your own #defines for datatypes, you need to use the defined MPI
datatypes (MPI_INT, MPI_DOUBLE, etc.) or derived datatypes.
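A minimal sketch (destination rank and tag are illustrative):

  /* Inside an initialized MPI program: MPI_INT is an MPI_Datatype
     handle (a pointer to a structure in Open MPI, an integer in
     MPICH); passing the named handle works with either one. */
  int values[4] = {0, 1, 2, 3};
  MPI_Send(values, 4, MPI_INT, 1, 0, MPI_COMM_WORLD);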
Hope this helps,
Brian
--
Brian Barrett
Open MPI developer
of the Open MPI
commands) is not in your path on the remote node. You should take a
look at one of the other FAQ sections on the setup required for Open
MPI in an rsh/ssh type environment.
http://www.open-mpi.org/faq/?category=running
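In practice that usually means making the Open MPI bin and lib
directories visible to non-interactive shells on every node; a sketch,
assuming a /opt/openmpi install prefix:

  # in ~/.bashrc (or equivalent) on each remote node
  export PATH=/opt/openmpi/bin:$PATH
  export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH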
Hope this helps,
Brian
--
Brian Barrett
Open MPI developer
application under a debugger or under a
memory checking debugger like Valgrind. That should help find the
problem.
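For example (program name illustrative):
  mpirun -np 2 valgrind ./your_app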
Hope this helps,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
On Sat, 2006-07-01 at 00:25 +0200, Yvan Fournier wrote:
> Hello,
>
> I had encountered a bug in Open MPI 1.0.1 using indexed datatypes
> with MPI_Recv (which seems to be of the "off by one" sort), which
> was corrected in Open MPI 1.0.2.
>
> It seems to have resurfaced in Open MPI 1.1 (I encounte
request in our
internal bug tracker, and add you to the list of people to be
notified when the ticket is updated.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
well (I use TotalView whenever possible), but has the
disadvantage of generally not being free.
Hope this helps,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
and allocation mechanisms of
LSF. I believe it is on our feature request list, but I also don't
believe we have a timeline for implementation.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
ing, that would
definitely help in debugging your problem.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
vacation). This issue seems to be unique to your exact
configuration -- it doesn't happen with GCC on the Intel Mac nor on
Linux with the Intel compilers.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
compile
your application for the lowest common denominator. My guess would
be that it is easier and more foolproof if you compiled everything in 32
bit mode. If you run in a mixed mode, using application schemas (see
the mpirun man page) will be the easiest way to make things work.
Brian
--
On Wed, 2006-07-19 at 14:57 +0200, Paul Heinzlreiter wrote:
> After that I tried to compile VTK (http://www.vtk.org) with MPI support
> using OpenMPI.
>
> The compilation process issued the following error message:
>
> /home/ph/local/openmpi/include/mpi.h:1757:33: ompi/mpi/cxx/mpicxx.h: No
> such file or directory
On Mon, 2006-07-31 at 13:12 -0400, James McManus wrote:
> I'm trying to compile MPI with pgf90. I use the following configure
> settings:
>
> ./configure --prefix=/usr/local/mpi F90=pgf90 F77=pgf77
>
> However, the compiler is set to gfortran:
>
> *** Fortran 90/95 compiler
> checking for gfort
On Mon, 2006-08-14 at 10:57 -0400, Brock Palen wrote:
> We will be evaluating pvfs2 (www.pvfs.org) in the future. Are there
> any special considerations needed to get ROMIO support in Open MPI
> with pvfs2?
> I have the following from ompi_info
>
> MCA io: romio (MCA v1.0, API v1.0, Compone
On Tue, 2006-08-15 at 14:24 -0700, Tom Rosmond wrote:
> I am continuing to test the MPI-2 features of 1.1, and have run into
> some puzzling behavior. I wrote a simple F90 program to test 'mpi_put'
> and 'mpi_get' on a coordinate transformation problem on a two dual-core
> processor Opteron workstation
On Aug 17, 2006, at 4:43 PM, Jonathan Underwood wrote:
Compiling an MPI program with gcc options -pedantic -Wall gives the
following warning:
mpi.h:147: warning: ISO C90 does not support 'long long'
So it seems that the openmpi implementation doesn't conform to C90. Is
this by design, or shoul
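For what it's worth, 'long long' is standard in C99, so compiling
against C99 rather than strict C90 should silence the warning:
  mpicc -std=c99 -pedantic -Wall myprog.c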
On Aug 18, 2006, at 3:19 PM, Hugh Merz wrote:
On Fri, 18 Aug 2006, Steven A. DuChene wrote:
I am attempting to build OpenMPI-1.1 on a RHEL4u2 system that has
the standard gfortran installed as part of the distro and with a
self-installed recent version of g95 from g95.org, but when I use the FC
compatibility, this usually
isn't the case, so we don't try to be compatible across multiple
Fortran compilers.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
Actually, Jeff is incorrect. As of Open MPI 1.1, we do support endian
conversion between peers. It has not been as well tested as the rest of
the code base, but it should work. Please let us know if you have any
issues with that mode and we'll work to resolve them.
Brian
On Wed, 2006-08-30 at
> cause Bad Things to occur if you try to exchange MPI_LONGs between the MPI
> processes, right? (and similar for other datatypes that are different
> sizes)
>
>
> On 8/30/06 9:38 AM, "Brian Barrett" wrote:
>
> > Actually, Jeff is incorrect. As of Open MPI 1.
This is quite strange, and we're having some trouble figuring out
exactly why the opening is failing. Do you have a (somewhat?) easy list
of instructions so that I can try to reproduce this?
Thanks,
Brian
On Tue, 2006-08-22 at 20:58 -0600, Brian Granger wrote:
> Hi,
>
> I am trying to dynamica
Your example is pretty close to spot on. You want to convert the
Fortran handle (integer) into a C handle (something else). Then use the
C handle to call C functions. The one thing of note is that you should
use the type MPI_Fint instead of int for the type of the Fortran
handles. So your paral
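A minimal sketch of the pattern (the routine name is illustrative):

  #include <mpi.h>

  /* Called from Fortran; the communicator arrives as an MPI_Fint. */
  void my_c_routine(MPI_Fint *f_comm)
  {
      MPI_Comm comm = MPI_Comm_f2c(*f_comm);  /* Fortran -> C handle */
      int rank;
      MPI_Comm_rank(comm, &rank);             /* use the C handle */
  }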
on Linux as well to see if I get the same error. One
> important piece of the puzzle is that if I configure openmpi with the
> --disable-dlopen flag, I don't have the problem. I will do some
> further testing on different systems and get back to you.
>
>
> Thanks for
On Mon, 2006-09-04 at 11:01 -0700, Tom Rosmond wrote:
> Attached is some error output from my tests of 1-sided message
> passing, plus my info file. Below are two copies of a simple fortran
> subroutine that mimics mpi_allgatherv using mpi-get calls. The top
> version fails, the bottom runs OK.
On Wed, 2006-09-06 at 10:40 -0700, Tom Rosmond wrote:
> Brian,
>
> I notice in the OMPI_INFO output the following parameters that seem
> relevant to this problem:
>
> MCA btl: parameter "btl_self_free_list_num" (current
> value: "0")
> MCA btl: parameter "btl_sel
On Sep 8, 2006, at 8:18 PM, Nuno Sucena Almeida wrote:
The other issue is the one described in
http://www.mail-archive.com/debian-bugs-dist@lists.debian.org/msg229867.html
(...)
gcc -O3 -DNDEBUG -fno-strict-aliasing -pthread -o .libs/opal_wrapper
opal_wrapper.o -Wl,--export-dynamic ../../.
trunk. It should eventually be migrated into the
branch for the 1.2 release once we sort out the other Alpha issues.
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
t in OpenMPI, but it sure is a subtle
> aspect of using it. I will probably document this somewhere in the
> package I am creating.
>
>
> Thanks
>
>
> Brian
>
> On Sep 6, 2006, at 9:00 AM, Brian Barrett wrote:
>
On Tue, 2006-09-26 at 14:45 -0400, Brock Palen wrote:
> I have a code that requires that it be compiled (with the pgi
> compilers) with the -i8
>
> From the pgf90 man page:
>
> -i8  Treat default INTEGER and LOGICAL variables as eight bytes.
>      For operations involving int
On Fri, 2006-09-22 at 10:39 +0200, alberto wrote:
> When I execute the simple program:
>
> #include "mpi.h"
>
> int main(int argc, char *argv[])
> {
> MPI_Init(&argc, &argv);
> MPI_Finalize();
> return 1;
> }
>
>
> without Xgrid it succeeds (the execution finishes).
> If I add the envir
What version of Open MPI are you using? Also, what is the output of:
mpicc -showme
Thanks,
Brian
--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
anyone have a suggestion how I can
keep this on an NFS share and make it work? Thank you
Mac OS X 10.3 cluster
-dan
On Tue, 2006-10-17 at 22:15 -0600, Brian Barrett wrote:
On Oct 17, 2006, at 6:41 PM, Dan Cardin wrote:
Hello all, I have installed openmpi on a small apple panther
cluster.
The in
10:27 -0700, Brian Barrett wrote:
Ah, are you using Open MPI 1.1.x, by chance? The wrapper compilers
need to be able to find a text file in $prefix/share/openmpi/, where
$prefix is the prefix you gave when you configured Open MPI. If that
path is different on two hosts, the wrapper compilers cannot
you have MX support and something else is going on. Otherwise, the
issue is building MX support into your copy of Open MPI.
Hope this helps,
Brian
--
Brian Barrett
Open MPI Team, CCS-1
Los Alamos National Laboratory
It's not exactly friendly that the Debian developer decided to change
the include directory for Torque from $prefix/include to $prefix/
include/torque, but I'm not sure it's "wrong".
Unfortunately, we don't handle that case properly by default. A
workaround that shouldn't give you any probl
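One workaround sketch, assuming Debian's /usr/include/torque layout (a
guess at the spirit of the workaround, not necessarily the one
originally proposed):
  ./configure CPPFLAGS=-I/usr/include/torque --with-tm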
What platform / operating system was this with?
Brian
On Feb 15, 2007, at 3:43 PM, Steven A. DuChene wrote:
I am trying to do some simple fortran MPI examples to verify I have
a good installation
of OpenMPI and I have a distributed program that calculates PI. It
seems to compile
and work fi
On Feb 27, 2007, at 3:26 PM, Iannetti, Anthony C. ((GRC-RTB0)) wrote:
Dear Open-MPI:
I am still having problems building OpenMPI 1.2 (now 1.2b4) on
MacOSX 10.4 PPC 64. In a message a while back, you gave me a hack
to override this problem. I believe it was a problem with Libtool,
or
Sure, we can add a FAQ entry on that :).
At present, configure decides whether Open MPI will be installed on a
case-sensitive file system or not, based on what the build file system
does, which is far from perfect but covers 99.9% of the cases. You
happen to be the 0.1%, but we do have an
Hi -
Thanks for the bug report. I've fixed the problem in SVN and it will
likely be part of the 1.2.1 release (whenever that happens). In the
mean time, I've attached a patch that should apply to the 1.2 tarball
that will also fix the problem.
The environment variables you want for spec
On Mar 20, 2007, at 3:15 PM, Mike Houston wrote:
If I only do gets/puts, things seem to be working correctly with
version
1.2. However, if I have a posted Irecv on the target node and issue an
MPI_Get against that target, MPI_Test on the posted Irecv causes a
segfault:
Anyone have suggesti
On Mar 25, 2007, at 11:20 AM, Daniele Avitabile wrote:
Hi everybody,
I am trying to install Open MPI on a Mac OS X Server, and the make
all command exits with the error
openmpi_install_failed.tar.gz
as you can see from the output files I attached.
Some comments that may be helpful:
1) I a
Once I do
some
more testing, I'll bring things up on IB and see how things are going.
-Mike
Mike Houston wrote:
Brian Barrett wrote:
On Mar 20, 2007, at 3:15 PM, Mike Houston wrote:
If I only do gets/puts, things seem to be working correctly with
version
1.2. However, if I have a
On Apr 7, 2007, at 12:59 AM, Brian Powell wrote:
Greetings,
I turn to the assistance of the OpenMPI wizards. I have compiled
v1.2 using gcc and ifort (see the attached config.log) with a
variety of options. The compilation finishes (side note: I had to
define NM otherwise the configure sc
On Apr 6, 2007, at 7:22 AM, Werner Van Geit wrote:
In our lab we are installing OpenMPI onto our Apple cluster
computer. The
cluster contains a couple of PowerPC G5 nodes and the new Intel Xeon
Xserves, all with a clean install of Mac OS X Server 10.4.8, Xcode
2.4.1
and Sun Grid Engine 6 (s
On Apr 9, 2007, at 12:36 PM, Brian Barrett wrote:
On Apr 6, 2007, at 7:22 AM, Werner Van Geit wrote:
In our lab we are installing OpenMPI onto our Apple cluster
computer. The cluster contains a couple of PowerPC G5 nodes and
the new Intel Xeon Xserves, all with a clean install of Mac OS X
On Mar 30, 2007, at 4:23 PM, rohit_si...@logitech.com wrote:
I'm somewhat new to OpenMPI, but I'm currently evaluating it as a
communications mechanism between Windows and Unix servers.
I noticed that under your FAQs (
http://www.open-mpi.org/faq/?category=supported-systems), it says:
Th
On Apr 12, 2007, at 3:45 AM, Bas van der Vlies wrote:
Jeff Squyres wrote:
On Apr 11, 2007, at 8:08 AM, Bas van der Vlies wrote:
The OMPI_CHECK_PACKAGE macro is a rather nasty macro that tries
to reduce the replication of checking for a header then a
library, then setting CFLAGS, LDFLAGS,
That's very odd. The usual cause for this is /tmp being unwritable
by the user or full. Can you check to see if either of those
conditions are true?
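For example:
  df -h /tmp
  ls -ld /tmp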
Thanks,
Brian
On Apr 13, 2007, at 2:44 AM, Christine Kreuzer wrote:
Hi,
I run openmpi on an AMD Opteron with two dual-core processors an S
On Apr 27, 2007, at 6:29 AM, Götz Waschk wrote:
I'm testing my new cluster installation with the hpcc benchmark and
openmpi 1.2.1 on RHEL5 32 bit. I have some trouble with using a
threaded BLAS implementation. I have tried ATLAS 3.7.30 compiled with
pthread support. It crashes as reported here:
That is odd... Alpha Linux isn't one of our supported platforms, so
it doesn't get tested before release unless a user happens to try
it. Can you send the information requested here:
http://www.open-mpi.org/community/help/
That should help us figure out what happened.
Thanks,
Brian
On
Yup, it does. There's nothing in the standard that says it isn't
allowed to. Given the number of system/libc calls involved in doing
communication, pretty much every MPI function is going to change the
value of errno. If you expect otherwise, I'd modify your
application. Most cluster-ba
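A sketch of the defensive pattern (file name illustrative, MPI assumed
already initialized):

  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <mpi.h>

  void read_config(void)
  {
      /* Save errno right after the call that may set it; any later
         MPI call is free to clobber it. */
      int fd = open("input.dat", O_RDONLY);
      int saved_errno = errno;
      MPI_Barrier(MPI_COMM_WORLD);   /* may change errno */
      if (fd < 0)
          fprintf(stderr, "open: %s\n", strerror(saved_errno));
  }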
Thanks for the bug report. I'm able to replicate your problem, and
it will be fixed in the 1.2.2 release.
Brian
On May 7, 2007, at 6:10 AM, livelfs wrote:
Hi all
I have observed a regression between 1.2 and 1.2.1
if CC is assigned an absolute path (i.e. export
CC=/opt/gcc/gcc-3.4.4/bin/gc
This was a regression in Open MPI 1.2.1. We improperly handle the
situation where CC has a path in it. We will have this fixed in Open
MPI 1.2.2. For now, your options are to use Open MPI 1.2 or specify
a $CC without a path, such as CC=icc, and make sure $PATH is set
properly.
Brian
On May 14, 2007, at 10:21 AM, Nym wrote:
I am trying to use MPI_TYPE_STRUCT in a 64 bit Fortran 90 program. I'm
using the Intel Fortran Compiler 9.1.040 (and C/C++ compilers
9.1.045).
If I try to call MPI_TYPE_STRUCT with the array of displacements that
are of type INTEGER(KIND=MPI_ADDRESS_KIND
I fixed the OOB. I also mucked some interface-related things up
that I need to undo :). Anyway, I'll have a look at fixing up the
TCP component in the next day or two.
Brian
On May 10, 2007, at 6:07 PM, Jeff Squyres wrote:
Brian --
Didn't you add something to fix exactly this probl
On May 13, 2007, at 6:23 AM, Bert Wesarg wrote:
Even better: is there a patch available to fix this in the 1.2.1
tarball, so that
I can set the full path again with CC?
The patch is quite trivial, but requires a rebuild of the build
system
(autoheader, autoconf, automake,...)
see here:
htt
On May 21, 2007, at 7:40 PM, Tom Clune wrote:
Executive summary: mpirun hangs when laptop is connected via
cellular modem.
Longer description: Under ordinary circumstances mpirun behaves as
expected on my OS X (Intel-duo) laptop. I only want to be using
the shared-memory mechanism - i.e
On May 22, 2007, at 7:52 PM, Tom Clune wrote:
For example, if it is ppp0, try:
mpirun -np 1 -mca oob_tcp_exclude ppp0 uptime
This seems to at least produce a bit of output before hanging:
LM000953070:~ tlclune$ mpirun -np 1 -mca oob_tcp_exclude ppp0 uptime
[153.sub-70-211-6.myvzw.com:0756
On May 29, 2007, at 12:25 PM, smai...@ksu.edu wrote:
I am doing research on parallel computing on shared memory with
NUMA architecture. The system is a 4 node AMD opteron with each node
being a dual-core. I am testing an OpenMPI program with MPI-nodes <=
MAX cores available on system (in
Bill -
This is a known issue in all released versions of Open MPI. I have a
patch that hopefully will fix this issue in 1.2.3. It's currently
waiting on people in the Open MPI team to verify I didn't do
something stupid.
Brian
On May 29, 2007, at 9:59 PM, Bill Saphir wrote:
George,
On Jun 1, 2007, at 12:15 PM, Bert Wesarg wrote:
Hello,
is the 'EGREP' a typo in the first hunk of r14829:
https://svn.open-mpi.org/trac/ompi/changeset/14829/trunk/config/cxx_find_template_repository.m4
Gah! Yes, it is. It should be $GREP. I'll fix it this evening.
Thanks,
Brian
Hi Rich -
All the releases back to the 0.9 pre-release included a #define of
OPEN_MPI to 1 in mpi.h, so that would be a good way to find out if
you are using Open MPI or not.
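For example:

  #include <mpi.h>
  #ifdef OPEN_MPI
  /* compiled against Open MPI */
  #else
  /* some other MPI implementation */
  #endif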
Hope this helps,
Brian
On Jun 5, 2007, at 1:36 PM, Lie-Quan Lee wrote:
I was thinking of this way. Is macro OPEN
Or tell Open MPI not to build torque support, which can be done at
configure time with the --without-tm option.
Open MPI tries to build support for whatever it finds in the default
search paths, plus whatever things you specify the location of. Most
of the time, this is what the user wants
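For example (prefix illustrative):
  ./configure --prefix=/usr/local/openmpi --without-tm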