Re: [OMPI users] (no subject)

2013-10-08 Thread Iliev, Hristo
Hi,

When all processes run on the same node, they communicate via shared memory,
which delivers both high bandwidth and low latency. InfiniBand is slower and
has higher latency than shared memory. Your parallel algorithm might simply be
very latency sensitive; you should profile it with something like mpiP or
Vampir/VampirTrace to find out why, and only then try to further tune
Open MPI.
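
If it helps to see that gap directly, a minimal ping-pong timing along the
lines of the sketch below (the program name, message size and repetition
count are arbitrary) can be run once with both ranks on the same node and
once with one rank per node; the difference between the two averages is
roughly the extra latency the InfiniBand path adds:

program pingpong
  use mpi
  implicit none
  integer, parameter :: nreps = 10000
  integer :: ierr, myrank, i, buf(1)
  double precision :: t0, t1

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)

  ! Run with exactly two ranks; any other rank would simply idle here.
  buf = 0
  call MPI_BARRIER(MPI_COMM_WORLD, ierr)
  t0 = MPI_WTIME()
  do i = 1, nreps
     if (myrank == 0) then
        call MPI_SEND(buf, 1, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, ierr)
        call MPI_RECV(buf, 1, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, &
                      MPI_STATUS_IGNORE, ierr)
     else if (myrank == 1) then
        call MPI_RECV(buf, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, &
                      MPI_STATUS_IGNORE, ierr)
        call MPI_SEND(buf, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, ierr)
     end if
  end do
  t1 = MPI_WTIME()

  if (myrank == 0) print *, 'average round trip (us):', &
       (t1 - t0) / nreps * 1.0d6

  call MPI_FINALIZE(ierr)
end program pingpong

If the application exchanges many small messages, a noticeably larger
inter-node round-trip time is exactly the kind of latency sensitivity a
profiler will confirm.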

Hope that helps,
Hristo

From: users [mailto:users-boun...@open-mpi.org] On Behalf Of San B
Sent: Monday, October 07, 2013 8:46 AM
To: OpenMPI ML
Subject: [OMPI users] (no subject)

Hi,
I'm facing a performance issue with a scientific application written in
Fortran. It runs fast on a single node but very slowly across multiple nodes.
For example, a 16-core job on a single node finishes in 1 hr 2 min, but the
same job on two nodes (i.e. 8 cores per node, with the remaining 8 cores kept
free) takes 3 hr 20 min. The code is compiled with ifort 13.1.1,
openmpi-1.4.5 and the Intel MKL libraries (LAPACK, BLAS, ScaLAPACK, BLACS and
FFTW). What could the problem be here?
Is it possible to do any tuning in OpenMPI? FYI, more info: the cluster has
Intel Sandy Bridge processors (E5-2670) and InfiniBand, and Hyper-Threading
is enabled. Jobs are submitted through the LSF scheduler.
Could Hyper-Threading be causing a problem here?

Thanks

--
Hristo Iliev, PhD – High Performance Computing Team
RWTH Aachen University, Center for Computing and Communication
Rechen- und Kommunikationszentrum der RWTH Aachen
Seffenter Weg 23, D 52074 Aachen (Germany)
Phone: +49 241 80 24367 – Fax/UMS: +49 241 80 624367





[OMPI users] MPI_IN_PLACE with GATHERV, AGATHERV, and SCATTERV

2013-10-08 Thread Gerlach, Charles A.
I have an MPI code that was developed using MPICH1 and OpenMPI before the MPI2 
standards became commonplace (before MPI_IN_PLACE was an option).

So, my code has many examples of GATHERV, AGATHERV and SCATTERV, where I pass 
the same array in as the SEND_BUF and the RECV_BUF, and this has worked fine 
for many years.

Intel MPI and MPICH2 explicitly disallow this behavior, in accordance with the 
MPI-2 standard. So I have gone through and used MPI_IN_PLACE for all the 
GATHERV/SCATTERVs that used to pass the same array twice. This code now works 
with MPICH2 and Intel MPI, but fails with OpenMPI-1.6.5 on multiple platforms 
and compilers.
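
To make the change concrete, here is a sketch with made-up names rather than
the actual code; the commented-out call is the old form that passes the same
array twice, and the live code is the converted form, where MPI_IN_PLACE is
used on the root only:

subroutine gather_field(work, sendcount, counts, displs, myrank)
  use mpi
  implicit none
  integer :: sendcount, myrank, ierr
  integer :: counts(*), displs(*)
  double precision :: work(*)

  ! Old, non-conforming call made on every rank (same array passed as
  ! both SEND_BUF and RECV_BUF):
  !
  !   call MPI_GATHERV(work, sendcount, MPI_DOUBLE_PRECISION,       &
  !                    work, counts, displs, MPI_DOUBLE_PRECISION,  &
  !                    0, MPI_COMM_WORLD, ierr)

  if (myrank == 0) then
     ! Root: its own contribution already sits in WORK, so it sends
     ! "in place".
     call MPI_GATHERV(MPI_IN_PLACE, 0, MPI_DOUBLE_PRECISION,        &
                      work, counts, displs, MPI_DOUBLE_PRECISION,   &
                      0, MPI_COMM_WORLD, ierr)
  else
     ! Non-root ranks keep an ordinary send buffer; the receive
     ! arguments are only significant at the root.
     call MPI_GATHERV(work, sendcount, MPI_DOUBLE_PRECISION,        &
                      work, counts, displs, MPI_DOUBLE_PRECISION,   &
                      0, MPI_COMM_WORLD, ierr)
  end if
end subroutine gather_field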

PLATFORM             COMPILER         SUCCESS? (for at least one simple example)

SLED 12.3 (x86-64)   Portland group   fails
SLED 12.3 (x86-64)   g95              fails
SLED 12.3 (x86-64)   gfortran         works
OS X 10.8            Intel            fails

In every case where OpenMPI fails with the MPI_IN_PLACE code, I can go back to 
the original code that passes the same array twice instead of using 
MPI_IN_PLACE, and it is fine.

I have made a test case doing an individual GATHERV with MPI_IN_PLACE, and it 
works with OpenMPI.  So it looks like there is some interaction with my code 
that is causing the problem. I have no idea how to go about trying to debug it.
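
A minimal standalone check of this kind (not the actual test case; the names
and sizes here are arbitrary) looks roughly like:

program test_gatherv_in_place
  use mpi
  implicit none
  integer :: ierr, myrank, nprocs, i
  integer, allocatable :: counts(:), displs(:)
  double precision, allocatable :: work(:)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  ! Two values per rank, gathered to rank 0.
  allocate(counts(nprocs), displs(nprocs))
  counts = 2
  do i = 1, nprocs
     displs(i) = (i - 1) * 2
  end do

  ! Every rank holds the full-size buffer and fills its own slot.
  allocate(work(2 * nprocs))
  work = -1.0d0
  work(2*myrank+1 : 2*myrank+2) = dble(myrank)

  if (myrank == 0) then
     call MPI_GATHERV(MPI_IN_PLACE, 0, MPI_DOUBLE_PRECISION,        &
                      work, counts, displs, MPI_DOUBLE_PRECISION,   &
                      0, MPI_COMM_WORLD, ierr)
  else
     call MPI_GATHERV(work(2*myrank+1), 2, MPI_DOUBLE_PRECISION,    &
                      work, counts, displs, MPI_DOUBLE_PRECISION,   &
                      0, MPI_COMM_WORLD, ierr)
  end if

  if (myrank == 0) print *, 'gathered:', work
  call MPI_FINALIZE(ierr)
end program test_gatherv_in_place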


In summary:

OpenMPI-1.6.5 crashes my code when I use GATHERV, AGATHERV, and SCATTERV with 
MPI_IN_PLACE.
Intel MPI and MPICH2 work with my code when I use GATHERV, AGATHERV, and 
SCATTERV with MPI_IN_PLACE.

OpenMPI-1.6.5 works with my code when I pass the same array to SEND_BUF and 
RECV_BUF instead of using MPI_IN_PLACE for those same GATHERV, AGATHERV, and 
SCATTERVs.


-Charles


Re: [OMPI users] MPI_IN_PLACE with GATHERV, AGATHERV, and SCATTERV

2013-10-08 Thread Jeff Hammond
"I have made a test case..." means there is little reason not to
attach said test case to the email for verification :-)

The following is in mpi.h.in in the OpenMPI trunk.

=
/*
 * Just in case you need it.  :-)
 */
#define OPEN_MPI 1

/*
 * MPI version
 */
#define MPI_VERSION 2
#define MPI_SUBVERSION 2
=

Two things can be said from this:

(1) You can work around this non-portable awfulness with the C
preprocessor by testing for the OPEN_MPI symbol (a sketch follows after
point (2) below).

(2) OpenMPI claims to be compliant with the MPI 2.2 standard, hence
any failure to adhere to the behavior specified in that document for
MPI_IN_PLACE is erroneous.
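
To illustrate (1), a guarded call site could look like the sketch below
(here for SCATTERV, with made-up names). One caveat: the OPEN_MPI macro
above comes from the C mpi.h, so for a preprocessed Fortran source it, or
an equivalent symbol, may need to be supplied on the compile line when
building against OpenMPI; treat that part as an assumption rather than
something the Fortran bindings are guaranteed to provide.

subroutine scatter_from_root(work, counts, displs, recvcount, myrank)
  use mpi
  implicit none
  integer :: recvcount, myrank, ierr
  integer :: counts(*), displs(*)
  double precision :: work(*)

#if defined(OPEN_MPI)
  ! OpenMPI build: keep the old call that passes WORK twice, which is
  ! what currently works there.
  call MPI_SCATTERV(work, counts, displs, MPI_DOUBLE_PRECISION,       &
                    work, recvcount, MPI_DOUBLE_PRECISION,            &
                    0, MPI_COMM_WORLD, ierr)
#else
  ! Everyone else: standard-conforming MPI_IN_PLACE as the root's
  ! receive buffer (for SCATTERV it is the recv side that is replaced).
  if (myrank == 0) then
     call MPI_SCATTERV(work, counts, displs, MPI_DOUBLE_PRECISION,    &
                       MPI_IN_PLACE, recvcount, MPI_DOUBLE_PRECISION, &
                       0, MPI_COMM_WORLD, ierr)
  else
     call MPI_SCATTERV(work, counts, displs, MPI_DOUBLE_PRECISION,    &
                       work, recvcount, MPI_DOUBLE_PRECISION,         &
                       0, MPI_COMM_WORLD, ierr)
  end if
#endif
end subroutine scatter_from_root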

Best,

Jeff

On Tue, Oct 8, 2013 at 2:40 PM, Gerlach, Charles A. wrote:
> [quoted text of the original message snipped]



-- 
Jeff Hammond
jeff.scie...@gmail.com