I'm not quite sure how collective computation calls work. For example, in
an MPI_REDUCE with MPI_SUM, does every process collect values from all the
other processes, calculate the sum, and put the result in recvbuf on the
root? That sounds strange.

David

***** Correspondence *****



> From: Jeff Squyres <jsquy...@open-mpi.org>
> Reply-To: Open MPI Users <us...@open-mpi.org>
> Date: Mon, 6 Mar 2006 13:22:23 -0500
> To: Open MPI Users <us...@open-mpi.org>
> Subject: Re: [OMPI users] MPI_IN_PLACE
> 
> Generally, yes.  There are some corner cases where we have to
> allocate additional buffers, but that's the main/easiest benefit to
> describe.  :-)
> 
> 
> On Mar 6, 2006, at 11:21 AM, Xiaoning (David) Yang wrote:
> 
>> Jeff,
>> 
>> Thank you for the reply. In other words, MPI_IN_PLACE only
>> eliminates data
>> movement on root, right?
>> 
>> David
>> 
>> ***** Correspondence *****
>> 
>> 
>> 
>>> From: Jeff Squyres <jsquy...@open-mpi.org>
>>> Reply-To: Open MPI Users <us...@open-mpi.org>
>>> Date: Fri, 3 Mar 2006 19:18:52 -0500
>>> To: Open MPI Users <us...@open-mpi.org>
>>> Subject: Re: [OMPI users] MPI_IN_PLACE
>>> 
>>> On Mar 3, 2006, at 6:42 PM, Xiaoning (David) Yang wrote:
>>> 
>>>>       call MPI_REDUCE(mypi,pi,1,MPI_DOUBLE_PRECISION,MPI_SUM,0,
>>>>      &                  MPI_COMM_WORLD,ierr)
>>>> 
>>>> Can I use MPI_IN_PLACE in the MPI_REDUCE call? If I can, how?
>>>> Thanks for any help!
>>> 
>>> MPI_IN_PLACE is an MPI-2 construct, and is defined in the MPI-2
>>> standard.  Its use in MPI_REDUCE is defined in section 7.3.3:
>>> 
>>> http://www.mpi-forum.org/docs/mpi-20-html/node150.htm#Node150
>>> 
>>> It says:
>>> 
>>> "The ``in place'' option for intracommunicators is specified by
>>> passing the value MPI_IN_PLACE to the argument sendbuf at the root.
>>> In such case, the input data is taken at the root from the receive
>>> buffer, where it will be replaced by the output data."
>>> 
>>> In the simple pi example program, it doesn't make much sense to use
>>> MPI_IN_PLACE except as an example to see how it is used (i.e., it
>>> won't gain much in terms of efficiency because you're only dealing
>>> with a single MPI_DOUBLE_PRECISION).  But you would want to put an
>>> "if" statement around the call to MPI_REDUCE and pass MPI_IN_PLACE as
>>> the first argument, and mypi as the second argument for the root.
>>> For all other processes, use the same MPI_REDUCE that you're using
>>> now.
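>>> A minimal sketch of that pattern, following the quoted pi example
>>> (it assumes a rank variable, here called myid, is already set via
>>> MPI_COMM_RANK, and that mypi holds each process's partial sum):
>>> 
>>> ```fortran
>>> !     Sketch only: myid, mypi, pi, and ierr are assumed to be
>>> !     declared and initialized as in the classic pi example.
>>>       if (myid .eq. 0) then
>>> !        Root: sendbuf is MPI_IN_PLACE, so the root's input is
>>> !        read from mypi (the receive buffer) and overwritten
>>> !        with the reduced sum.
>>>          call MPI_REDUCE(MPI_IN_PLACE,mypi,1,MPI_DOUBLE_PRECISION,
>>>      &                   MPI_SUM,0,MPI_COMM_WORLD,ierr)
>>>       else
>>> !        Non-root processes call MPI_REDUCE exactly as before.
>>>          call MPI_REDUCE(mypi,pi,1,MPI_DOUBLE_PRECISION,MPI_SUM,
>>>      &                   0,MPI_COMM_WORLD,ierr)
>>>       endif
>>> ```
>>> 
>>> Note that with MPI_IN_PLACE the result lands in mypi on the root,
>>> not in pi, so any later code on the root should read mypi.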
>>> 
>>> -- 
>>> {+} Jeff Squyres
>>> {+} The Open MPI Project
>>> {+} http://www.open-mpi.org/
>>> 
>>> 
>>> _______________________________________________
>>> users mailing list
>>> us...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>> 
>> 
> 
> 
> 
> 

