Re: [OMPI users] bug in mpif90? OMPI_FC envvar does not work with 'use mpi'

2013-03-13 Thread Dominik Goeddeke
...with Open MPI is to install Open MPI multiple times; each installation should be built/installed with a different compiler. This is annoying, but it is beyond the scope of Open MPI to be able to fix. - On Mar 13, 2013, at 5:44 AM, Dominik Goeddeke wrote: Yes, sure. My point is just that...
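
A minimal sketch of this multiple-installation approach, assuming hypothetical install prefixes and a GNU/Intel compiler pair:

    # one Open MPI tree per Fortran compiler (prefixes are placeholders)
    ./configure FC=gfortran --prefix=$HOME/openmpi-gcc && make all install
    make distclean
    ./configure FC=ifort --prefix=$HOME/openmpi-intel && make all install
    # select an installation by putting its bin/ first in PATH
    export PATH=$HOME/openmpi-intel/bin:$PATH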

Re: [OMPI users] bug in mpif90? OMPI_FC envvar does not work with 'use mpi'

2013-03-13 Thread Dominik Goeddeke
Yes, sure. My point is just that "strongly discouraged" (as per the FAQ) is different from "simply will not work at all". I find that a bit confusing, especially since in other areas of the FAQ, explicit workarounds are stated, e.g. on how to build a Makefile rule to extract flags from an mpi wrapper...
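
The Makefile workaround referenced here builds on the wrappers' --showme options, which print the flags the wrapper would pass to the underlying compiler. A rough sketch, with illustrative target and file names:

    # query Open MPI's wrapper for its flags, then invoke a compiler directly
    FC      = gfortran
    FCFLAGS = $(shell mpif90 --showme:compile)
    LDFLAGS = $(shell mpif90 --showme:link)

    toy: toy.f90   # note: the recipe line below must start with a tab
    	$(FC) $(FCFLAGS) -o toy toy.f90 $(LDFLAGS)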

Re: [OMPI users] bug in mpif90? OMPI_FC envvar does not work with 'use mpi'

2013-03-13 Thread Dominik Goeddeke
...-Original Message- From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Dominik Goeddeke Sent: Tuesday, March 12, 2013 10:32 PM To: Open MPI Users Subject: [OMPI users] bug in mpif90? OMPI_FC envvar does not work with 'use mpi' Dear OMPI folks, according...

[OMPI users] bug in mpif90? OMPI_FC envvar does not work with 'use mpi'

2013-03-12 Thread Dominik Goeddeke
Dear OMPI folks, according to this FAQ entry http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0 one way to use the mpif90 compiler wrapper with a compiler other than the one Open MPI was built with is to set the envvar OMPI_FC to that compiler. Using this simple toy code...
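
For reference, the FAQ recipe being tested boils down to a single environment variable; a sketch with example compiler and file names:

    # tell the wrapper to invoke a different Fortran compiler
    export OMPI_FC=gfortran
    mpif90 -o toy toy.f90   # toy.f90 contains 'use mpi'

Because the mpi module shipped with Open MPI is pre-compiled by the compiler Open MPI was built with, and Fortran module files are not portable across compilers, this override can work for codes using include 'mpif.h' but generally fails for 'use mpi' -- which is the behaviour reported in this thread.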

Re: [OMPI users] cluster with iOS or Android devices?

2012-11-28 Thread Dominik Goeddeke
shameless plug: http://www.mathematik.tu-dortmund.de/~goeddeke/pubs/pdf/Goeddeke_2012_EEV.pdf In the MontBlanc project (www.montblanc-project.eu), a lot of folks from all around Europe look into exactly this. Together with a few colleagues, we have been honoured to get access to an early prototype...

Re: [OMPI users] openmpi tar.gz for 1.6.1 or 1.6.2

2012-07-16 Thread Dominik Goeddeke
in the "old" 1.4.x and 1.5.x, I achieved this by using rankfiles (see FAQ), and it worked very well. With these versions, --byslot etc. didn't work for me, I always needed the rankfiles. I haven't tried the overhauled "convenience wrappers" in 1.6 that you are using for this feature yet, but I

Re: [OMPI users] automatically creating a machinefile

2012-07-04 Thread Dominik Goeddeke
no idea about Rocks, but with PBS and SLURM, I always do this directly in the job submission script. Below is an example of an admittedly spaghetti-code script that does this -- assuming proper (un)commenting -- for PBS and SLURM and OpenMPI and MPICH2, for one particular machine that I have been...
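
The script itself is not preserved in this preview; the core of the approach, sketched for both schedulers with placeholder counts and paths:

    #!/bin/bash
    # PBS already provides a node file, one line per allocated core
    cat $PBS_NODEFILE > machinefile
    # under SLURM, expand the compact node list instead:
    # scontrol show hostnames $SLURM_JOB_NODELIST > machinefile
    mpirun -np 8 --machinefile machinefile ./app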

Re: [OMPI users] Displaying MAIN in Totalview

2011-03-22 Thread Dominik Goeddeke
...breakpoint is physically not present in the library? On Mar 21, 2011, at 2:50 PM, Dominik Goeddeke wrote: Hi, for what it's worth: Same thing happens with DDT. OpenMPI 1.2.x runs fine, later versions (at least 1.4.x and newer) let DDT bail out with "Could not break at function MPIR_Breakpoint"...

Re: [OMPI users] Displaying MAIN in Totalview

2011-03-21 Thread Dominik Goeddeke
Hi, for what it's worth: Same thing happens with DDT. OpenMPI 1.2.x runs fine; later versions (at least 1.4.x and newer) make DDT bail out with "Could not break at function MPIR_Breakpoint". DDT has something like "OpenMPI (compatibility mode)" in its session launch dialog; with this setting...

Re: [OMPI users] Potential bug in creating MPI_GROUP_EMPTY handling

2011-03-17 Thread Dominik Goeddeke
...reproducer. I've confirmed the problem and filed a bug about this: https://svn.open-mpi.org/trac/ompi/ticket/2752 On Mar 6, 2011, at 6:12 PM, Dominik Goeddeke wrote: The attached example code (stripped down from a bigger app) demonstrates a way to trigger a severe crash in all recent ompi...

[OMPI users] Potential bug in creating MPI_GROUP_EMPTY handling

2011-03-06 Thread Dominik Goeddeke
The attached example code (stripped down from a bigger app) demonstrates a way to trigger a severe crash in all recent ompi releases but not in a bunch of the latest MPICH2 releases. The code is minimalistic and boils down to the call MPI_Comm_create(MPI_COMM_WORLD, MPI_GROUP_EMPTY, &dummy_comm);...
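
The attachment is not preserved in the archive; a self-contained sketch of the call described above (per the MPI standard, processes outside the group -- here, all of them -- should simply receive MPI_COMM_NULL rather than crash):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Comm dummy_comm;
        MPI_Init(&argc, &argv);
        /* collective over MPI_COMM_WORLD; MPI_GROUP_EMPTY contains no
           ranks, so every process should get MPI_COMM_NULL back */
        MPI_Comm_create(MPI_COMM_WORLD, MPI_GROUP_EMPTY, &dummy_comm);
        if (dummy_comm == MPI_COMM_NULL)
            printf("got MPI_COMM_NULL, as expected\n");
        MPI_Finalize();
        return 0;
    }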