The attached example code (stripped down from a bigger app) demonstrates
a way to trigger a severe crash in all recent ompi releases, but not in
several recent MPICH2 releases. The code is minimal and boils down to
the call
MPI_Comm_create(MPI_COMM_WORLD, MPI_GROUP_EMPTY, &dummy_comm);
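(The full attachment is not part of this excerpt; the following is only
a minimal sketch of a reproducer built around that call, not necessarily
the actual attached code.)

  #include <mpi.h>

  int main(int argc, char **argv)
  {
      MPI_Comm dummy_comm;

      MPI_Init(&argc, &argv);
      /* Create a communicator from the predefined empty group.
         Every rank calls this; since no rank is a member of the group,
         the MPI standard says each caller should simply get back
         MPI_COMM_NULL. */
      MPI_Comm_create(MPI_COMM_WORLD, MPI_GROUP_EMPTY, &dummy_comm);
      MPI_Finalize();
      return 0;
  }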
I've confirmed the problem and filed a bug about this:
https://svn.open-mpi.org/trac/ompi/ticket/2752
Hi,
for what it's worth: the same thing happens with DDT. OpenMPI 1.2.x runs
fine; later versions (at least 1.4.x and newer) make DDT bail out with
"Could not break at function MPIR_Breakpoint".
DDT has something like "OpenMPI (compatibility mode)" in its session
launch dialog; with this setting [...] MPIR_Breakpoint is physically not
present in the library?
No idea about Rocks, but with PBS and SLURM, I always do this directly
in the job submission script. Below is an example of an admittedly
spaghetti-code script that does this -- assuming proper (un)commenting
-- for PBS and SLURM and OpenMPI and MPICH2, for one particular machine
that I have been [...]
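(That script is not included in this excerpt; the following is only a
rough sketch of such a submission script -- job name, node counts, and
binary name are made up, with the PBS directives left commented out as
the (un)commenting remark suggests.)

  #!/bin/bash
  #SBATCH --job-name=mpitest        # SLURM directives
  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=8
  #SBATCH --time=00:10:00
  ## PBS equivalents -- uncomment these when submitting via qsub:
  ##PBS -N mpitest
  ##PBS -l nodes=2:ppn=8
  ##PBS -l walltime=00:10:00

  # Open MPI: mpirun picks up the node allocation from the scheduler
  mpirun ./mpitest

  # MPICH2 alternative -- uncomment instead of the mpirun line above:
  # mpiexec -n 16 ./mpitest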
in the "old" 1.4.x and 1.5.x, I achieved this by using rankfiles (see
the FAQ), and it worked very well. With these versions, --byslot etc.
didn't work for me; I always needed the rankfiles. I haven't tried the
overhauled "convenience wrappers" in 1.6 that you are using for this
feature yet, but I [...]
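(For illustration only -- host names and slot numbers are made up -- a
rankfile of the kind mentioned above pins each rank to a host and slot
and is passed to mpirun via -rf/--rankfile, e.g.
"mpirun -np 4 -rf myrankfile ./app":)

  rank 0=node01 slot=0
  rank 1=node01 slot=1
  rank 2=node02 slot=0
  rank 3=node02 slot=1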
shameless plug:
http://www.mathematik.tu-dortmund.de/~goeddeke/pubs/pdf/Goeddeke_2012_EEV.pdf
In the MontBlanc project (www.montblanc-project.eu), a lot of folks from
all around Europe look into exactly this. Together with a few
colleagues, we have been honoured to get access to an early prototype [...]
Dear OMPI folks,
according to this FAQ entry
http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0
one way to use the mpif90 compiler wrapper with a different compiler
than the one Open MPI was built with is to set the envvar OMPI_FC to
that other compiler. Using this simple toy code [...]
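(The toy code itself is not part of this excerpt. For context, the FAQ
mechanism referred to amounts to an invocation like the following --
compiler and file names here are placeholders:)

  # build with a different Fortran compiler than the one the
  # mpif90 wrapper was configured with
  OMPI_FC=ifort mpif90 toy.f90 -o toy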
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
On Behalf Of Dominik Goeddeke
Sent: Tuesday, March 12, 2013 10:32 PM
To: Open MPI Users
Subject: [OMPI users] bug in mpif90? OMPI_FC envvar does not work with
'use mpi'
Yes, sure. My point is just that "strongly discouraged" (as per the
FAQ) is different from "simply will not work at all". I find that a bit
confusing, especially since in other areas of the FAQ explicit
workarounds are stated, e.g. on how to build a Makefile rule to extract
flags from an mpi wrapper [...]
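(The Makefile workaround alluded to is presumably based on the
wrappers' --showme options; a hypothetical fragment -- target and file
names made up -- might look like the following. Note that this only
extracts flags; it does not make the compiler-specific 'mpi' module
usable from a different Fortran compiler.)

  # pull compile and link flags out of the Open MPI wrapper
  MPI_FCFLAGS := $(shell mpif90 --showme:compile)
  MPI_LDFLAGS := $(shell mpif90 --showme:link)

  toy: toy.f90
          $(FC) $(MPI_FCFLAGS) toy.f90 -o toy $(MPI_LDFLAGS)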
[...] with Open MPI is to install Open
MPI multiple times; each installation should be built/installed with a
different compiler. This is annoying, but it is beyond the scope of Open MPI to
be able to fix.