This *sounds* like the classic oversubscription problem: Open MPI's
aggressive vs. degraded operating modes:
http://www.open-mpi.org/faq/?category=running#oversubscribing
Specifically, "slots" is *not* meant to be the number of processes to
run. It's meant to be how many processors are available
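For example, with a hypothetical 4-core node named node1, the hostfile
advertises the processor count and mpirun infers whether you are
oversubscribed:

  # hostfile: four processors available on this node
  node1 slots=4

  # 4 processes on 4 slots -> aggressive mode (processes busy-poll)
  mpirun -np 4 --hostfile hostfile ./a.out

  # 5 processes on 4 slots -> degraded mode (processes yield the CPU)
  mpirun -np 5 --hostfile hostfile ./a.out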
I generally build Open MPI from a source rpm (and I'm the author of that
srpm's spec file). That way, Open MPI is built consistently between linux
distros...
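A minimal sketch of that workflow (the srpm file name is hypothetical;
use whatever version you have):

  # rebuild binary rpms from the source rpm on each distro
  rpmbuild --rebuild openmpi-1.0.2-1.src.rpm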
I'm running into an issue where the same code works on one distro but
breaks on another. I'd like to track down where the bug is (the distro,
or Open MPI).
I'm hoping this is just user error...
I'm running a single-node job with a node that has two dual-core opterons
(Open MPI 1.0.2).
compiler=gcc 4.1.0
arch=x86_64 (64-bit)
OS=linux 2.6.16
My machine file looked like this:
node1 slots=4
I have an HPL configuration for 4 processors (PxQ=2x2).
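For reference, launching that configuration would look something like
this (xhpl is HPL's standard binary name; the machinefile name here is
hypothetical):

  mpirun -np 4 --hostfile machines ./xhpl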
Here's a snippet from the README file:
- The Fortran 90 MPI bindings can now be built in one of four sizes
using --with-mpi-f90-size=SIZE (see description below). These sizes
reflect the number of MPI functions included in the "mpi" Fortran 90
module and therefore which functions will be subject to strict type
checking.
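As a hedged illustration, the size is chosen at configure time (the
four sizes in that era were trivial, small, medium, and large):

  ./configure --with-mpi-f90-size=medium
  make all install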
Greetings.
This was actually reported earlier today (off list). It was the result
of a botched merge from the trunk to the v1.1 branch. I have fixed the
issue as of r10171 (it was a one-line mistake); the fix should show up
in the snapshot tarballs tonight.
Hi, my program is giving me this error on one particular machine:
[localhost.localdomain:04889] mca_btl_sm_component_init: mkfifo failed
with errno=17
Errno 17 appears to be File exists. Any ideas why this might be
happening? My program is spawning another module; is there some reason
it wou
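Errno 17 is EEXIST, so the shared-memory BTL is trying to create a FIFO
that already exists, perhaps left over from an earlier run. The
underlying failure is easy to reproduce by hand (path hypothetical):

  $ mkfifo /tmp/demo_fifo    # first creation succeeds
  $ mkfifo /tmp/demo_fifo    # second attempt fails with errno 17
  mkfifo: cannot create fifo '/tmp/demo_fifo': File exists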
Hi, I'm using the NAGWare Fortran 95 compiler Release 5.0(414), but make
fails as shown in the snippet below. I've attached the config.log,
config.out and make.out files. The system is a dual processor Opteron
server running a 2.6 x86_64 linux kernel and has a myrinet mx based
interconnect which
What are these "small" and "large" modules? What would they provide?
Brock
On Jun 1, 2006, at 4:30 PM, Jeff Squyres (jsquyres) wrote:
Michael --
You're right again. Thanks for keeping us honest!
We clearly did not think through all the issues for the "large" F90
interface; I've opened ticket #55 for the issue.
On Wed, 31 May 2006 20:17:33 -0600, Brian Barrett wrote:
Did you happen to have a chance to try to run the 1.0.3 or 1.1
nightly tarballs? I'm 50/50 on whether we've fixed these issues
already.
For Ticket #41:
Using Open MPI 1.0.3 and 1.1:
For some reason, I can't seem to get TCP to work w
Michael --
You're right again. Thanks for keeping us honest!
We clearly did not think through all the issues for the "large" F90
interface; I've opened ticket #55 for the issue. I'm inclined to take
the same approach as for the other issues you discovered -- disable
"large" for v1.1 and push th
Did you happen to have a chance to try to run the 1.0.3 or 1.1
nightly tarballs? I'm 50/50 on whether we've fixed these issues
already.
OK, for ticket #40:
With Open MPI 1.0.3 (nightly downloaded/built May 31st)
(This time using presta's 'laten', since the source code + comments are <
1k lines.)
1. Starting from scratch is probably easiest. If you installed Open MPI
to its own directory, just remove the installation directory. If you
installed Open MPI to a directory that contains other things, a "make
uninstall" in your original Open MPI source tree should completely
uninstall it proper
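A sketch of both options (install prefix and source directory names
hypothetical):

  # Option 1: Open MPI has its own prefix -- just remove it
  rm -rf /opt/openmpi

  # Option 2: shared prefix -- uninstall from the original source tree
  cd openmpi-1.0.2
  make uninstall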
Blast. As usual, Michael is right -- we didn't account for MPI_IN_PLACE
in the "large" F90 interface. We've opened ticket #39 on this:
https://svn.open-mpi.org/trac/ompi/ticket/39
I'm inclined to simply disable the "large" interfaces in v1.1 so that we
can get it out the door, and work on fix