Re: [OMPI users] MPI_IN_PLACE with GATHERV, ALLGATHERV, and SCATTERV

2013-10-08 Thread Jeff Hammond
"I have made a test case..." means there is little reason not to attach said test case to the email for verification :-) The following is in mpi.h.in in the OpenMPI trunk. = /* * Just in case you need it. :-) */ #define OPEN_MPI 1 /* * MPI version */ #define MPI_VERS

[OMPI users] MPI_IN_PLACE with GATHERV, ALLGATHERV, and SCATTERV

2013-10-08 Thread Gerlach, Charles A.
I have an MPI code that was developed using MPICH1 and OpenMPI before the MPI-2 standard became commonplace (before MPI_IN_PLACE was an option). So, my code has many examples of GATHERV, ALLGATHERV and SCATTERV, where I pass the same array in as the SEND_BUF and the RECV_BUF, and this has worked f
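For reference, a minimal C sketch of the MPI-2 idiom in question: at the root, MPI_IN_PLACE is passed as the send buffer of MPI_Gatherv, the send count and type are ignored, and the root's own contribution is assumed to already sit at its displacement in the receive buffer. The chunk size and buffer contents here are illustrative, not taken from the original code.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int root  = 0;
    const int chunk = 4;                    /* elements contributed per rank */

    int *recvcounts = NULL, *displs = NULL, *buf = NULL;
    if (rank == root) {
        recvcounts = malloc(size * sizeof(int));
        displs     = malloc(size * sizeof(int));
        buf        = malloc(size * chunk * sizeof(int));
        for (int i = 0; i < size; i++) {
            recvcounts[i] = chunk;
            displs[i]     = i * chunk;
        }
        for (int i = 0; i < chunk; i++)     /* root's data already in place */
            buf[root * chunk + i] = rank;   /* in the receive buffer        */
    } else {
        buf = malloc(chunk * sizeof(int));
        for (int i = 0; i < chunk; i++)
            buf[i] = rank;
    }

    if (rank == root) {
        /* Root: MPI_IN_PLACE as sendbuf; sendcount/sendtype are ignored. */
        MPI_Gatherv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                    buf, recvcounts, displs, MPI_INT, root, MPI_COMM_WORLD);
    } else {
        /* Non-root ranks send normally; the receive arguments are ignored. */
        MPI_Gatherv(buf, chunk, MPI_INT,
                    NULL, NULL, NULL, MPI_INT, root, MPI_COMM_WORLD);
    }

    free(buf);
    free(recvcounts);
    free(displs);
    MPI_Finalize();
    return 0;
}

Passing the same array as both SEND_BUF and RECV_BUF without MPI_IN_PLACE aliases the buffers, which the MPI standard does not permit, even if some implementations happen to tolerate it.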

Re: [OMPI users] (no subject)

2013-10-08 Thread Iliev, Hristo
Hi, When all processes run on the same node they communicate via shared memory, which delivers both high bandwidth and low latency. InfiniBand is slower and has higher latency than shared memory. Your parallel algorithm might simply be very latency-sensitive, and you should profile it with something like m
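A minimal ping-pong sketch that can help quantify the difference: run it once with both ranks on the same node (shared memory) and once with the ranks on two different nodes (InfiniBand) and compare the reported latency. The message size and iteration count are arbitrary choices for illustration.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;
    char byte = 0;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("half round-trip latency: %.2f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}

The gap between the two runs gives a rough upper bound on how much extra latency per message the interconnect adds to your application.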