Hi all,
I've inherited an MPI code that was written ~8-10 years ago and it predominantly
uses MPI persistent communication routines for data transfers, e.g.
MPI_SEND_INIT, MPI_RECV_INIT, MPI_START, etc. I was just wondering if using
persistent communication calls is still the most efficient/scalable ...
...ssors (MPI_COMM_GET_ATTR, page 229 in the
MPI 2.2 standard).
george.
On Mar 13, 2012, at 13:51 , Timothy Stitt wrote:
Hi Jeff,
I went through the procedure of compiling and running, then copied the
procedure verbatim from the command line (see below).
[tstitt@memtfe] /pscratch/tstit
Jeffrey Squyres wrote:
Tim --
I am unable to replicate this problem with a 1.4 build with icc.
Can you share your test code?
On Mar 10, 2012, at 7:30 PM, Timothy Stitt wrote:
Hi all,
I was experimenting with MPI_TAG_UB in my code recently and found that its
value is set to 0 in my v1.4.3 and ...
Hi all,
I was experimenting with MPI_TAG_UB in my code recently and found that its
value is set to 0 in my v1.4.3 and v1.4.5 builds (these are the only builds I
have available to test).
Oddly, it only happens with my builds compiled with the Intel compiler. The PGI
builds do produce a non-zero ...
Thanks Jed for the advice. How well-implemented are the one-sided communication
routines? Are they something that could be trusted in a production code?
On Mar 6, 2012, at 6:06 PM, "Jed Brown" <j...@59a2.org> wrote:
On Tue, Mar 6, 2012 at 15:4
Hi all,
Can anyone tell me whether it is legal to pass zero values for some of the send
count elements in an MPI_Alltoallv() call? I want to perform an all-to-all
operation, but for performance reasons I do not want to send data to the various
ranks that don't need to receive any useful values. If it is ...
Mar 6, 2012, at 10:19 AM, Timothy Stitt wrote:
Will definitely try that. Thanks for the suggestion.
Basically, I need to be able to scatter values from a sender to a subset of
ranks (as I scale my production code, I don't want to use MPI_COMM_WORLD, as
the receiver list will be quite
collectives by a:
if (MPI_COMM_NULL != new_comm) {
}
could be enough.
But maybe I'm wrong: I'll let the specialists answer.
Regards,
Nadia
--
Nadia Derbey
users-boun...@open-mpi.org wrote on
03/06/2012 02:32:03 PM:
> From: Tim
users-boun...@open-mpi.org wrote on
03/06/2012 01:52:06 PM:
> From: Timothy Stitt <timothy.stit...@nd.edu>
> To: "us...@open-mpi.org" <us...@open-mpi.org>
> Date: 03/06/2012 01:52 PM
> Subject: [OMP
Hi all,
I am scratching my head over what I think should be a relatively simple group
communicator operation. I am hoping some kind person can put me out of my
misery and figure out what I'm doing wrong.
Basically, I am trying to scatter a set of values to a subset of process ranks
(hence the ...
Hi all,
We have a code built with Open MPI (v1.4.3) and the Intel v12.0 compiler that
has been tested successfully on 10s-100s of cores on our cluster. We recently
ran the same code with 1020 cores and received the following runtime error:
> [d6cneh042:28543] *** Process received signal ***
>