[OMPI users] One-sided datatype errors

2010-12-13 Thread James Dinan
Accumulate Test. Author: James Dinan. Date: December 2010. This code performs N accumulates into a 2d patch of a shared array. The array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] and begins at index [0, 0]. The input and output buffers are specified ...
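For reference, a minimal sketch of the kind of test described above (not the original attachment): each rank accumulates a contiguous local buffer into a [SUB_X, SUB_Y] patch at index [0, 0] of an [X, Y] window, using an MPI subarray datatype on the target side. The dimensions and the target rank 0 are placeholder values.

/* Sketch only: accumulate a contiguous buffer into a 2d patch of a window
 * described by an MPI subarray datatype. Dimensions are placeholders. */
#include <mpi.h>
#include <stdlib.h>

#define XDIM  1024
#define YDIM  1024
#define SUB_X 128
#define SUB_Y 128

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    double *win_buf;
    MPI_Win  win;
    MPI_Alloc_mem(XDIM * YDIM * sizeof(double), MPI_INFO_NULL, &win_buf);
    MPI_Win_create(win_buf, XDIM * YDIM * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Describe the [SUB_X, SUB_Y] patch starting at [0, 0] of the [X, Y] array. */
    int sizes[2]    = { XDIM, YDIM };
    int subsizes[2] = { SUB_X, SUB_Y };
    int starts[2]   = { 0, 0 };
    MPI_Datatype subarray;
    MPI_Type_create_subarray(2, sizes, subsizes, starts,
                             MPI_ORDER_C, MPI_DOUBLE, &subarray);
    MPI_Type_commit(&subarray);

    double *src = malloc(SUB_X * SUB_Y * sizeof(double));
    for (int i = 0; i < SUB_X * SUB_Y; i++) src[i] = 1.0;

    /* Accumulate the contiguous source buffer into the strided target patch. */
    MPI_Win_fence(0, win);
    MPI_Accumulate(src, SUB_X * SUB_Y, MPI_DOUBLE,
                   0 /* target rank */, 0 /* disp */, 1, subarray,
                   MPI_SUM, win);
    MPI_Win_fence(0, win);

    MPI_Type_free(&subarray);
    MPI_Win_free(&win);
    MPI_Free_mem(win_buf);
    free(src);
    MPI_Finalize();
    return 0;
}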

Re: [OMPI users] One-sided datatype errors

2010-12-14 Thread James Dinan
... problem on a single node with Open MPI 1.5 and the trunk. I have submitted a ticket with the information. https://svn.open-mpi.org/trac/ompi/ticket/2656 Rolf On 12/13/10 18:44, James Dinan wrote: Hi, I'm getting strange behavior using datatypes in a one-sided MPI_Accumulate operation. The attached ...

Re: [OMPI users] Using MPI_Put/Get correctly?

2010-12-16 Thread James Dinan
On 12/16/2010 08:34 AM, Jeff Squyres wrote: > Additionally, since MPI-3 is updating the semantics of the one-sided stuff, it might be worth waiting for all those clarifications before venturing into the MPI one-sided realm. One-sided semantics are much more subtle and complex than two-sided ...

Re: [OMPI users] nonblock alternative to MPI_Win_complete

2011-02-24 Thread James Dinan
Hi Toon, Can you use non-blocking send/recv? It sounds like this will give you the completion semantics you want. Best, ~Jim. On 2/24/11 6:07 AM, Toon Knapen wrote: In that case, I have a small question concerning design: Suppose task-based parallelism where one node (master) distributes ...
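A minimal sketch of the non-blocking send/recv suggestion (the master/worker roles and function names are assumed, not taken from the thread): the master posts an MPI_Isend per task and learns from MPI_Wait (or MPI_Test) when the task buffer can be reused, which is the completion semantics being asked about.

#include <mpi.h>

/* Master side: hand a task to a worker without blocking. */
void send_task(double *task, int count, int worker, MPI_Comm comm)
{
    MPI_Request req;
    MPI_Isend(task, count, MPI_DOUBLE, worker, 0 /* tag */, comm, &req);

    /* ... overlap other work here, e.g. prepare the next task ... */

    /* Completion: once this returns, the task buffer may be reused. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}

/* Worker side: matching non-blocking receive. */
void recv_task(double *task, int count, int master, MPI_Comm comm)
{
    MPI_Request req;
    MPI_Irecv(task, count, MPI_DOUBLE, master, 0 /* tag */, comm, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    /* task[] now holds the received data. */
}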

Re: [OMPI users] nonblock alternative to MPI_Win_complete

2011-02-24 Thread James Dinan
"... MPI_WIN_COMPLETE(win) initiate a nonblocking send with tag tag1 to each process in the group of the preceding start call. No need to wait for the completion of these sends." The wording 'nonblocking send' startles me somehow!? toon On Thu, Feb 24, 2011 at 2:05 PM, James Dinan wrote: ...
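For context, an illustrative sketch of the generalized active-target synchronization being quoted (not from the thread; 'group' is assumed to be built elsewhere, e.g. via MPI_Win_get_group and MPI_Group_incl, and rank 1 is a placeholder target): MPI_Win_complete closes the access epoch opened by MPI_Win_start, while the target brackets its exposure epoch with MPI_Win_post and MPI_Win_wait.

#include <mpi.h>

/* Origin (access epoch): start ... put ... complete. */
void origin_side(MPI_Win win, MPI_Group group, double *buf, int count)
{
    MPI_Win_start(group, 0, win);     /* open access epoch */
    MPI_Put(buf, count, MPI_DOUBLE, 1 /* target rank */, 0, count,
            MPI_DOUBLE, win);
    MPI_Win_complete(win);            /* returns when the put is complete
                                         at the origin */
}

/* Target (exposure epoch): post ... wait. */
void target_side(MPI_Win win, MPI_Group group)
{
    MPI_Win_post(group, 0, win);      /* open exposure epoch */
    MPI_Win_wait(win);                /* returns once all matching
                                         MPI_Win_complete calls have finished */
}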

Re: [OMPI users] MPI one-sided passive synchronization.

2011-04-13 Thread James Dinan
Sudheer, Locks in MPI don't mean mutexes; they mark the beginning and end of a passive-mode communication epoch. All MPI operations within an epoch logically occur concurrently and must be non-conflicting. So, what you've written below is incorrect: the get is not guaranteed to complete until ...
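A sketch of the point being made (not the original poster's code): within a passive-target epoch, the get is only guaranteed to have completed after MPI_Win_unlock returns, so the fetched value must not be used before then.

#include <mpi.h>

void read_remote(MPI_Win win, int target, double *val)
{
    MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);  /* start epoch, not a mutex */
    MPI_Get(val, 1, MPI_DOUBLE, target, 0 /* disp */, 1, MPI_DOUBLE, win);
    /* *val is NOT valid yet; the get is only logically issued here. */
    MPI_Win_unlock(target, win);                    /* completes the epoch */
    /* Now *val holds the fetched data. */
}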

Re: [OMPI users] Memory mapped memory

2011-10-17 Thread James Dinan
Sure, this is possible and generally works, although it is not defined by the MPI standard. Regular shared-memory rules apply: you may have to add additional memory consistency and/or synchronization calls, depending on your platform, to ensure that MPI sees the intended data updates. Best, ~Jim. On ...
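A sketch of the idea under discussion (assumes a POSIX anonymous shared mapping; MAP_ANONYMOUS availability and the mapping details are assumptions, not from the thread): an mmap'ed region is handed to MPI like any other buffer, here exposed as a one-sided window. This generally works but is not guaranteed by the MPI standard, and the usual shared-memory consistency rules for the mapping still apply.

#include <mpi.h>
#include <sys/mman.h>
#include <stddef.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Memory-mapped region used as an ordinary MPI buffer. */
    size_t len = 4096;
    double *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    /* Expose the mapped region as a one-sided window. */
    MPI_Win win;
    MPI_Win_create(buf, len, sizeof(double), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    /* ... RMA or send/recv using buf, with any platform-specific
     *     synchronization the mapping requires ... */

    MPI_Win_free(&win);
    munmap(buf, len);
    MPI_Finalize();
    return 0;
}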