[OMPI users] Persistent Communication using MPI_SEND_INIT, MPI_RECV_INIT etc.

2013-03-25 Thread Timothy Stitt
Hi all, I've inherited an MPI code that was written ~8-10 years ago and predominantly uses MPI persistent communication routines for data transfers, e.g. MPI_SEND_INIT, MPI_RECV_INIT, MPI_START, etc. I was just wondering if using persistent communication calls is still the most efficient/scala…
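A minimal sketch (not from the original post) of the persistent-request pattern the post describes: the requests are set up once, then started and completed each iteration. The ring partners, message size, and tag below are illustrative assumptions.

```c
/* Sketch of MPI persistent point-to-point communication: set up once,
 * then start/complete the same requests every iteration.
 * Ring exchange, buffer size, and tag are illustrative. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double sendbuf[100], recvbuf[100];
    for (int i = 0; i < 100; i++) sendbuf[i] = (double)rank;

    int right = (rank + 1) % size;            /* illustrative ring partners */
    int left  = (rank - 1 + size) % size;
    MPI_Request reqs[2];

    /* Set up the communication pattern once... */
    MPI_Send_init(sendbuf, 100, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Recv_init(recvbuf, 100, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[1]);

    /* ...then reuse it every iteration. */
    for (int iter = 0; iter < 10; iter++) {
        MPI_Startall(2, reqs);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }

    MPI_Request_free(&reqs[0]);
    MPI_Request_free(&reqs[1]);
    MPI_Finalize();
    return 0;
}
```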

Re: [OMPI users] MPI_TAG_UB printing zero with Intel Compiler

2012-03-14 Thread Timothy Stitt
…ssors (MPI_COMM_GET_ATTR, page 229 in the MPI 2.2 standard). george. On Mar 13, 2012, at 13:51, Timothy Stitt wrote: Hi Jeff, I went through the procedure of compiling and running, then copied the procedure verbatim from the command line (see below). [tstitt@memtfe] /pscratch/tstit…
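The attribute query cited above can be sketched as follows (not part of the thread; a minimal C example). The value comes back as a pointer to an int, and the flag distinguishes "attribute not set" from a genuine value.

```c
/* Sketch: query MPI_TAG_UB with MPI_Comm_get_attr (as cited in the thread).
 * The attribute value is returned as a pointer to an int; check the flag
 * before dereferencing it. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    void *attr_val;
    int flag;
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &attr_val, &flag);
    if (flag)
        printf("MPI_TAG_UB = %d\n", *(int *)attr_val);
    else
        printf("MPI_TAG_UB attribute not set\n");

    MPI_Finalize();
    return 0;
}
```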

Re: [OMPI users] MPI_TAG_UB printing zero with Intel Compiler

2012-03-13 Thread Timothy Stitt
Jeffrey Squyres wrote: Tim -- I am unable to replicate this problem with a 1.4 build with icc. Can you share your test code? On Mar 10, 2012, at 7:30 PM, Timothy Stitt wrote: Hi all, I was experimenting with MPI_TAG_UB in my code recently and found that its value is set to 0 in my v1.4.3 an

[OMPI users] MPI_TAG_UB printing zero with Intel Compiler

2012-03-10 Thread Timothy Stitt
Hi all, I was experimenting with MPI_TAG_UB in my code recently and found that its value is set to 0 in my v1.4.3 and v1.4.5 builds (these are the only builds I have available to test). Oddly, it only happens with my builds compiled with the Intel compiler. The PGI builds do produce a non-zero…

Re: [OMPI users] AlltoallV (with some zero send count values)

2012-03-06 Thread Timothy Stitt
Thanks Jed for the advice. How well implemented are the one-sided communication routines? Are they something that could be trusted in a production code? On Mar 6, 2012, at 6:06 PM, "Jed Brown" <j...@59a2.org> wrote: On Tue, Mar 6, 2012 at 15:4…
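Not part of the thread: a minimal sketch of the kind of one-sided transfer being asked about, using MPI_Win_create, MPI_Win_fence, and MPI_Put. The window size and target rank are illustrative assumptions.

```c
/* Sketch: MPI one-sided communication (an MPI_Put inside a fence epoch).
 * Window size and target rank are made up for the example. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)rank, remote = -1.0;
    MPI_Win win;
    MPI_Win_create(&remote, (MPI_Aint)sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (size > 1 && rank == 0)
        /* Put rank 0's value into rank 1's window. */
        MPI_Put(&local, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```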

[OMPI users] AlltoallV (with some zero send count values)

2012-03-06 Thread Timothy Stitt
Hi all, Can anyone tell me whether it is legal to pass zero values for some of the send count elements in an MPI_AlltoallV() call? I want to perform an all-to-all operation but for performance reasons do not want to send data to various ranks that don't need to receive any useful values. If it i…
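Not from the post itself: a small sketch of an MPI_Alltoallv call in which some send counts are zero (which the standard permits; the displacements for zero-count entries are simply ignored). The membership rule, counts, and displacements here are illustrative.

```c
/* Sketch: MPI_Alltoallv with zero counts for ranks that receive nothing.
 * Here each rank sends one int only to even-numbered ranks (illustrative). */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *sendcounts = calloc(size, sizeof(int));
    int *recvcounts = calloc(size, sizeof(int));
    int *sdispls    = calloc(size, sizeof(int));
    int *rdispls    = calloc(size, sizeof(int));

    /* Send one int to even ranks, zero to the rest; receive one int from
     * every rank only if our own rank is even. */
    for (int r = 0; r < size; r++) {
        sendcounts[r] = (r % 2 == 0) ? 1 : 0;
        recvcounts[r] = (rank % 2 == 0) ? 1 : 0;
    }
    for (int r = 1; r < size; r++) {
        sdispls[r] = sdispls[r - 1] + sendcounts[r - 1];
        rdispls[r] = rdispls[r - 1] + recvcounts[r - 1];
    }

    int *sendbuf = calloc(size, sizeof(int));
    int *recvbuf = calloc(size, sizeof(int));
    MPI_Alltoallv(sendbuf, sendcounts, sdispls, MPI_INT,
                  recvbuf, recvcounts, rdispls, MPI_INT, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```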

Re: [OMPI users] Scatter+Group Communicator Issue

2012-03-06 Thread Timothy Stitt
Mar 6, 2012, at 10:19 AM, Timothy Stitt wrote: Will definitely try that. Thanks for the suggestion. Basically, I need to be able to scatter values from a sender to a subset of ranks (as I scale my production code, I don't want to use MPI_COMM_WORLD, as the receiver list will be quite

Re: [OMPI users] Scatter+Group Communicator Issue

2012-03-06 Thread Timothy Stitt
…collectives by a: if (MPI_COMM_NULL != new_comm) { } could be enough. But maybe I'm wrong: I'll let the specialists answer. Regards, Nadia -- Nadia Derbey users-boun...@open-mpi.org wrote on 03/06/2012 02:32:03 PM: > From: Tim…

Re: [OMPI users] Scatter+Group Communicator Issue

2012-03-06 Thread Timothy Stitt
users-boun...@open-mpi.org wrote on 03/06/2012 01:52:06 PM: > From: Timothy Stitt <timothy.stit...@nd.edu> > To: "us...@open-mpi.org" <us...@open-mpi.org> > Date: 03/06/2012 01:52 PM > Subject: [OMP…

[OMPI users] Scatter+Group Communicator Issue

2012-03-06 Thread Timothy Stitt
Hi all, I am scratching my head over what I think should be a relatively simple group communicator operation. I am hoping some kind person can put me out of my misery and figure out what I'm doing wrong. Basically, I am trying to scatter a set of values to a subset of process ranks (hence the
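Not part of the post: a sketch of one way to do what the thread discusses, building a sub-communicator with MPI_Comm_create from a group of selected ranks and scattering only within it. As the replies suggest, ranks outside the group get MPI_COMM_NULL and must skip the collective. The membership rule and scattered values below are illustrative.

```c
/* Sketch: scatter to a subset of ranks via a sub-communicator.
 * Non-members receive MPI_COMM_NULL and skip the collective. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Illustrative subset: the even-numbered world ranks (rank 0 is root). */
    int nmembers = 0;
    int *members = malloc(size * sizeof(int));
    for (int r = 0; r < size; r++)
        if (r % 2 == 0)
            members[nmembers++] = r;

    MPI_Group world_group, sub_group;
    MPI_Comm sub_comm;
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    MPI_Group_incl(world_group, nmembers, members, &sub_group);
    MPI_Comm_create(MPI_COMM_WORLD, sub_group, &sub_comm);  /* collective */

    if (sub_comm != MPI_COMM_NULL) {       /* only members join the scatter */
        int sub_rank, recvval;
        MPI_Comm_rank(sub_comm, &sub_rank);
        int *sendbuf = NULL;
        if (sub_rank == 0) {
            sendbuf = malloc(nmembers * sizeof(int));
            for (int i = 0; i < nmembers; i++) sendbuf[i] = 100 + i;
        }
        MPI_Scatter(sendbuf, 1, MPI_INT, &recvval, 1, MPI_INT, 0, sub_comm);
        MPI_Comm_free(&sub_comm);
    }

    MPI_Group_free(&sub_group);
    MPI_Group_free(&world_group);
    MPI_Finalize();
    return 0;
}
```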

[OMPI users] MPI_ALLREDUCE: Segmentation Fault

2011-06-02 Thread Timothy Stitt
Hi all, We have a code built with Open MPI (v1.4.3) and the Intel v12.0 compiler that has been tested successfully on tens to hundreds of cores on our cluster. We recently ran the same code with 1020 cores and received the following runtime error: > [d6cneh042:28543] *** Process received signal *** >…
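For reference only (not from the thread, and not a diagnosis of the reported crash): a minimal, correct call of the collective named in the subject, summing one double per rank.

```c
/* Sketch: a minimal MPI_Allreduce summing one double across all ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank, global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of ranks = %f\n", global);

    MPI_Finalize();
    return 0;
}
```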