Good. This is fixed in Open MPI 1.7.3, by the way. I will add a note to the FAQ
on building Open MPI 1.7.2.
>-----Original Message-----
>From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Hammond,
>Simon David (-EXP)
>Sent: Monday, October 07, 2013 4:17 PM
>To: Open MPI Users
>Subject: Re: [
Thanks Rolf, that seems to have made the code compile and make
successfully.
S.
--
Simon Hammond
Scalable Computer Architectures (CSRI/146, 01422)
Sandia National Laboratories, NM, USA
On 10/7/13 1:47 PM, "Rolf vandeVaart" wrote:
>That might be a bug. While I am checking, you could try
That might be a bug. While I am checking, you could try configuring with this
additional flag:
--enable-mca-no-build=pml-bfo
Rolf
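For context, a full build incorporating this workaround might look like the
sketch below. The install prefix and CUDA path are assumptions; adjust them
for your system:

```shell
# Sketch of building Open MPI 1.7.2 with CUDA support while skipping the
# pml-bfo component that triggers the build error. Paths are hypothetical.
./configure --prefix=$HOME/opt/openmpi-1.7.2 \
    --with-cuda=/usr/local/cuda \
    --enable-mca-no-build=pml-bfo
make -j8 && make install
```

The `--enable-mca-no-build` option simply excludes the named MCA component
(here the `bfo` PML) from compilation, so the rest of the build is unaffected.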
>-----Original Message-----
>From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Hammond,
>Simon David (-EXP)
>Sent: Monday, October 07, 2013 3:30 PM
>To:
Hey everyone,
I am trying to build OpenMPI 1.7.2 with CUDA enabled. OpenMPI will
configure successfully, but I am seeing a build error relating to the
inclusion of the CUDA options (at least I think so). Do you guys know if
this is a bug, or whether something is wrong with how we are configuring
OpenMPI?
Hi,
On 07.10.2013 at 08:45, San B wrote:
> I'm facing a performance issue with a scientific application (Fortran). The
> issue is that it runs faster on a single node but very slowly on multiple nodes.
> For example, a 16-core job on a single node finishes in 1 hr 2 min, but the same
> job on two n
Hi,
I'm facing a performance issue with a scientific application (Fortran). The
issue is that it runs faster on a single node but very slowly on multiple
nodes. For example, a 16-core job on a single node finishes in 1 hr 2 min, but
the same job on two nodes (i.e. 8 cores per node & remaining 8 cores ke