On Oct 2, 2010, at 2:54 AM, Gijsbert Wiesenekker wrote:

> On Oct 1, 2010, at 23:24 , Gijsbert Wiesenekker wrote:
> 
>> I have a large array that is shared between two processes. One process 
>> updates array elements randomly, the other process reads array elements 
>> randomly. Most of the time these writes and reads do not overlap.
>> The current version of the code uses Linux shared memory with NSEMS 
>> semaphores. When array element i has to be read or updated, semaphore (i % 
>> NSEMS) is used. If NSEMS = 1, the entire array will be locked, which leads to 
>> unnecessary waits because reads and writes do not overlap most of the time. 
>> Performance increases as NSEMS increases, and flattens out at NSEMS = 32, at 
>> which point the code runs twice as fast when compared to NSEMS = 1.
>> I want to change the code to use Open MPI RMA, but MPI_Win_lock locks the 
>> entire array, which is similar to NSEMS = 1. Is there a way to have more 
>> granular locks?
>> 
>> Gijsbert
>> 
> 
> Also, is there an MPI_Win_lock equivalent for IPC_NOWAIT?


No.  Every call to MPI_Win_lock will (eventually) result in the window 
being locked.  Note, however, that MPI_Win_lock returning does not 
guarantee that the remote window has been locked.  It only guarantees 
that it is now safe to issue data transfer operations targeting that 
window.  An implementation could (and Open MPI frequently does) return 
immediately, queue up all data transfers until some ACK is received from 
the target, and only then begin data movement.  Confusing, but flexible 
for the wide variety of platforms MPI must target.
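
Roughly, a passive-target epoch looks like this (a sketch only; the 
window, element type, and function name below are made up for 
illustration, not taken from your code):

  #include <mpi.h>

  /* Update one double in a window exposed by rank "target".  "win",
   * "index", and "value" are hypothetical names. */
  void update_element(MPI_Win win, int target, MPI_Aint index, double value)
  {
      /* Returning from MPI_Win_lock does NOT mean the remote lock is
       * held yet; it only means data transfer calls are now legal. */
      MPI_Win_lock(MPI_LOCK_EXCLUSIVE, target, 0, win);

      /* This put may be queued internally until the implementation
       * hears back (e.g. an ACK) from the target. */
      MPI_Put(&value, 1, MPI_DOUBLE, target, index, 1, MPI_DOUBLE, win);

      /* Everything completes here: when unlock returns, the transfer
       * is done at both origin and target and the lock is released. */
      MPI_Win_unlock(target, win);
  }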

Brian

-- 
  Brian W. Barrett
  Dept. 1423: Scalable System Software
  Sandia National Laboratories


