Troy and I talked about this one off-list as well and resolved the issue
as problems with his local IB fabric.
The moral of the story here is that Open MPI's error messages need to
be a bit more descriptive (in this case, they should have said, "Help!
The sky is falling, the sky is falling!").
Dear all,
mpirun -d -np 1 vhone
yields the following output:
[powerbook.2-net:06956] procdir: (null)
[powerbook.2-net:06956] jobdir: (null)
[powerbook.2-net:06956] unidir:
/tmp/openmpi-sessions-admin@powerbook.2-net_0/default-universe
[powerbook.2-net:06956] top: openmpi-sessions-admin@pow
Troy and I talked about this off-list and resolved that the issue was
with the TCP setup on the nodes.
But it is worth noting that we had previously fixed a bug in the TCP
setup in 1.0.2 with respect to the SEGVs that Troy was seeing -- hence,
when he tested the 1.0.3 prerelease tarballs, there we
I was able to build OMPI (1.1a9r10177) with NAG f95 5.0(414) without
any problems. To configure it, be sure to use:
FCFLAGS='-mismatch -w'. That is the only really big change. I did
use a prefix path to PBS (for tm), and I also use Portland for both my C
and C++ compilers. Here is my full
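A configure invocation along those lines might look roughly like the
following sketch; the install prefix, PBS path, and compiler names are
illustrative placeholders, not the poster's actual values:

    ./configure --prefix=/opt/openmpi --with-tm=/opt/pbs \
        CC=pgcc CXX=pgCC FC=f95 FCFLAGS='-mismatch -w'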
Hi Ali
I'm unaware of any firm plans to port either Open MPI or the underlying
OpenRTE run-time to VxWorks, though I have had some initial contact
from others interested in possibly doing so with the latter (part of a
virtual telescope control and data processing system). Being fairly
familiar
On Fri, 02 Jun 2006 13:37:07 -0600, Jeff Squyres (jsquyres)
wrote:
Troy --
Just to make sure I understand the issues:
- 1.1
- presta com works fine
- presta allred fails with the MPI_Gather error
- 1.0.3
- presta com fails with MPI_Gather error
- presta allred fails with the MPI_Gat
Troy --
Just to make sure I understand the issues:
- 1.1
- presta com works fine
- presta allred fails with the MPI_Gather error
- 1.0.3
- presta com fails with MPI_Gather error
- presta allred fails with the MPI_Gather error
And these all *only* fail on the pre-production Linux version
Hello, I have tried openmpi-1.1a9r10177 and make is still crashing at
the same point, although the error message has changed as shown in the
next snippet. I've attached the config.log, config.out and make.out
make[4]: Entering directory
`/opt/openmpi/openmpi-1.1a9r10177.bld/ompi/mpi/f90'
./script
Hello,
Looking at the OpenMPI web site, I couldn't find any reference to support
for VxWorks.
Here are my questions:
- Is there any plan for OpenMPI to run on VxWorks?
- Has anyone ported/customized OpenMPI to work on VxWorks?
- What level of effort does it take to
On Thu, 01 Jun 2006 17:49:53 -0600, Troy Telford
wrote:
the 'com' test ends with:
[n1:04941] *** An error occurred in MPI_Gather
[n1:04941] *** on communicator MPI_COMM_WORLD
[n1:04941] *** MPI_ERR_ARG: invalid argument of some other kind
[n1:04941] *** MPI_ERRORS_ARE_FATAL (goodbye)
And yes
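For context, the arguments that error message refers to are those of the
gather call itself. A minimal well-formed MPI_Gather in C (a hypothetical
stand-in, not presta's actual code) looks like this:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        int sendval, recvbuf[64];   /* assumes <= 64 ranks for this sketch */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        sendval = rank;
        /* counts are per process: each rank sends 1 int and the root
         * receives 1 int from every rank; mismatched counts, types, or
         * buffers here are the usual source of MPI_ERR_ARG complaints */
        MPI_Gather(&sendval, 1, MPI_INT, recvbuf, 1, MPI_INT,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("gathered %d values\n", size);
        MPI_Finalize();
        return 0;
    }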
On Fri, 02 Jun 2006 09:41:30 -0600, Ralph Castain wrote:
I don't remember if it has to be explicitly enabled or not - or if it
only started with a particular 2.6.x release. You might want to check
into it.
There are other factors involved as well (BIOS settings, for instance).
The kerne
Hi Troy
I'm not sure what iteration of Linux you are using, but the 2.6 kernel
has multi-core scheduling support that is supposed to resolve this
problem. I don't remember if it has to be explicitly enabled or not -
or if it only started with a particular 2.6.x release. You might want
to check
On Fri, 02 Jun 2006 09:15:06 -0600, Troy Telford
wrote:
Can you confirm that your Linux installation thinks that it has 4
processors and will schedule 4 processes simultaneously?
D'oh. Still too early in the morning...
OK, Linux thinks it has two CPUs. Period.
For some reason I forgot tha
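A quick way to see how many logical processors Linux thinks it has on a
2.6-era system:

    grep -c ^processor /proc/cpuinfo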
On Thu, 01 Jun 2006 18:07:07 -0600, Jeff Squyres (jsquyres)
wrote:
This *sounds* like the classic oversubscription problem: Open MPI's
aggressive vs. degraded operating modes:
http://www.open-mpi.org/faq/?category=running#oversubscribing
Good link; bookmarked for (internal) documentation...
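Per that FAQ, degraded (yield-when-idle) mode can also be requested
explicitly when oversubscribing; a sketch, with the executable name as a
placeholder:

    mpirun --mca mpi_yield_when_idle 1 -np 4 ./a.out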
--with-mpi-f90-size=SIZE
specifies the size of the Fortran 90 interface module that is
created. In Fortran 90 the compiler can validate all your calls only
if it has information about the functions/subroutines that you are
calling. This is done via a module with interfaces in OpenMPI and
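The flag is passed at configure time; for example (the value shown is
only an illustration):

    ./configure --with-mpi-f90-size=medium

Larger settings generate more interface declarations in the module, and
correspondingly longer builds.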
On Thursday 25 May 2006 19:02, Tom Rosmond wrote:
> I didn't do a formal uninstall as you demonstrate below, but instead went
> into the 'prefix' directory and renamed 'bin', 'lib', 'etc', 'include', and
> 'share'
A full uninstall is of course not needed, but it might be cleaner to simply
install in
On Jun 1, 2006, at 12:42 PM, Jeff Squyres (jsquyres) wrote:
Blast. As usual, Michael is right -- we didn't account for MPI_IN_PLACE
in the "large" F90 interface. We've opened ticket #39 on this:
https://svn.open-mpi.org/trac/ompi/ticket/39
I'm inclined to simply disable the "large" inte
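The limitation is specific to the F90 interface module; the C binding
accepts MPI_IN_PLACE as usual. A minimal C sketch of an in-place gather
(buffer size and values are illustrative):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int buf[64];                /* assumes <= 64 ranks for this sketch */
        buf[rank] = rank * rank;    /* each rank's own contribution */

        if (rank == 0) {
            /* root: the send buffer is replaced by MPI_IN_PLACE; its own
             * slot in buf is left untouched and the other ranks' values
             * fill in around it */
            MPI_Gather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                       buf, 1, MPI_INT, 0, MPI_COMM_WORLD);
            for (int i = 0; i < size; i++)
                printf("buf[%d] = %d\n", i, buf[i]);
        } else {
            /* non-root: recv arguments are ignored */
            MPI_Gather(&buf[rank], 1, MPI_INT,
                       NULL, 0, MPI_INT, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }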
Jeff,
Ok, this solved the problem with the Pathscale compiler.
Thanks
-- Jan
Date: Thu, 1 Jun 2006 17:37:36 -0400
From: "Jeff Squyres (jsquyres)"
Subject: Re: [OMPI users] openmpi-1.1a9r10157 Fails to build with Nag f95 Compiler
To: "Open MPI Users"
Hi,
openmpi-1.1a9r10157's fortran bindings also fail to build with the
Pathscale 2.1 pathf90 compiler. At the same spot but with different
error messages (see below), which perhaps helps to clarify things. Any
help greatly appreciated as well.
Best regards,
Jan De Laet
Hi Jeff,
I am using the Open MPI 1.1 alpha 7 release.
My MPI process with threads was not terminating even when I sent SIGINT
through the keyboard. I shall check it again anyhow and get back to you.
I am terminating all MPI actions in threads (my thread waits on MPI::Recv
and I send data to the MPI
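One common workaround when a thread is stuck in a blocking receive and
ignores Ctrl-C, sketched here in C under the assumption that a
nonblocking receive is acceptable: post MPI_Irecv and poll with MPI_Test
so a flag set by a SIGINT handler can break the loop.

    #include <mpi.h>
    #include <signal.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigint = 0;
    static void on_sigint(int sig) { (void)sig; got_sigint = 1; }

    int main(int argc, char **argv)
    {
        int msg, done = 0;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        signal(SIGINT, on_sigint);

        /* post the receive without blocking ... */
        MPI_Irecv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                  MPI_COMM_WORLD, &req);

        /* ... and poll, so Ctrl-C can break us out of the wait */
        while (!done && !got_sigint) {
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
            if (!done)
                usleep(1000);   /* don't spin at 100% CPU */
        }

        if (!done) {            /* interrupted: abandon the pending receive */
            MPI_Cancel(&req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }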