I'm trying to build Open MPI v1.6.4 using a local build of gcc 4.7.2 on
RHEL 6.
The configure and build scripts are attached, along with the config.log and
build.output. The last few lines of the build output are:
make[3]: Entering directory
`/nm/programs/third_party/tmp-install/openmpi-1.6.4-bl
I'm trying to build v1.6.4 with a local install of gcc 4.7.2, using the
following script:
OWD=$PWD
GMPD=$OWD/gmp-4.3.2
MPFRD=$OWD/mpfr-2.4.2
MPCD=$OWD/mpc-0.8.1
PPLD=$OWD/ppl-0.11
CLOOG=$OWD/cloog-ppl-0.15.9
GCC=$OWD/gcc-4.7.2-rhel5
export LD_LIBRARY_PATH=$GCC/lib64:$GMPD/lib:$MPFR
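The rest of the script then points Open MPI's configure at this toolchain,
roughly like this (the prefix and flags here are illustrative, not the exact
lines from the attached script):

# illustrative continuation: build Open MPI 1.6.4 with the locally built gcc
export PATH=$GCC/bin:$PATH
./configure --prefix=$OWD/openmpi-1.6.4-install \
    CC=$GCC/bin/gcc CXX=$GCC/bin/g++ FC=$GCC/bin/gfortran F77=$GCC/bin/gfortran
make && make install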
Thanks Ralph!
After I removed --without-memory-manager (--with-memory-manager=no), it
built fine.
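For anyone hitting the same thing, the change just amounts to dropping that
one option from the configure line, e.g. (the prefix is illustrative):

# before: fails to build 1.7
./configure --prefix=/opt/openmpi-1.7 --without-memory-manager
# after: builds fine
./configure --prefix=/opt/openmpi-1.7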
Limin
On Thu, Apr 4, 2013 at 3:26 PM, Ralph Castain wrote:
> Fix is coming - it is the --without-memory-manager option (which is the
> same thing as --with-memory-manager=no) that is breaking it.
Sounds like something is making the TCP connections unstable. Last time I
looked at HVM, they were running something like 64G of memory? If you have more
than one proc on a node (as your output would indicate), and you are doing
collectives on such large data sizes, it's quite possible you are r
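(As a general aside: when TCP between EC2 nodes looks flaky, a common first
check is to pin Open MPI's TCP BTL to the cluster-facing interface; the
interface name and script name below are placeholders.)

# illustrative: restrict the TCP BTL to one known-good interface
mpirun --mca btl tcp,sm,self --mca btl_tcp_if_include eth0 -np 64 python my_script.py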
Fix is coming - it is the --without-memory-manager option (which is the same
thing as --with-memory-manager=no) that is breaking it.
On Apr 4, 2013, at 12:19 PM, Limin Gu wrote:
> Hi all,
>
> I downloaded openmpi-1.7, but it failed to build on CentOS 6.4 with the
> following error:
>
> make[10]
Hi all,
I downloaded openmpi-1.7, but it failed to build on CentOS 6.4 with the
following error:
make[10]: Entering directory
`/root/openmpi/openmpi-1.7/ompi/contrib/vt/vt/extlib/otf/tools/otfmerge/mpi'
CC otfmerge_mpi-handler.o
CC otfmerge_mpi-otfmerge.o
CCLD otfmerge-mpi
/root
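(The failure is inside the VampirTrace contrib tree. Independent of the actual
fix discussed above, a generic way to sidestep VT build problems is to exclude
that contrib package entirely; the prefix below is illustrative.)

# illustrative: build Open MPI without the VampirTrace (vt) contrib package
./configure --prefix=/opt/openmpi-1.7 --enable-contrib-no-build=vt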
Hi,
I am running some matrix-algebra-based calculations on Amazon EC2 (HVM
instances running Ubuntu 11.1 with OpenMPI 1.6.4 and Python bindings via
mpi4py 1.3). I am using StarCluster to spin up instances so all nodes from
a given cluster are in the same placement group (i.e. are on the same 10
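(For reference, a typical launch for this kind of setup looks something like
the following; the hostfile and script names are placeholders.)

# illustrative: run an mpi4py script across the cluster nodes
mpirun -np 64 -hostfile hosts.txt python my_matrix_calc.py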
Hi,
On 04.04.2013, at 04:35, Ed Blosch wrote:
> Consider this Fortran program snippet:
>
> program test
use omp_lib and include 'mpif.h' might be missing.
> ! everybody except rank=0 exits.
> call mpi_init(ierr)
> call mpi_comm_rank(MPI_COMM_WORLD,irank,ierr)
> if (irank /= 0) then
> call
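(For completeness, a program like that is typically built with Open MPI's
Fortran wrapper plus the compiler's OpenMP flag; -fopenmp is the gfortran
spelling and the file name is a placeholder.)

# illustrative: compile and run a mixed MPI + OpenMP Fortran program
mpif90 -fopenmp test.f90 -o test
mpirun -np 4 ./test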