I agree that in practice the best approach is to use Win_shared_query.

Still, I am confused by this part of the documentation:
"The allocated memory is contiguous across process ranks unless the info key 
alloc_shared_noncontig is specified. Contiguous across process ranks means that 
the first address in the memory segment of process i is consecutive with the 
last address in the memory segment of process i - 1. This may enable the user 
to calculate remote address offsets with local information only." 

Isn't this an encouragement to use the pointer returned by Win_allocate_shared directly?

Peter 

----- Original Message -----

> I don't know about bulletproof, but Win_shared_query is the *only* valid way
> to get the addresses of memory in other processes associated with a window.

> The default for Win_allocate_shared is contiguous memory, but it can and
> likely will be mapped differently into each process, in which case only
> relative offsets are transferable.

> Jeff

> On Wed, Feb 10, 2016 at 4:19 AM, Gilles Gouaillardet <
> gilles.gouaillar...@gmail.com > wrote:

> > Peter,
> >
> > The bulletproof way is to use MPI_Win_shared_query after
> > MPI_Win_allocate_shared.
> >
> > I do not know if current behavior is a bug or a feature...
> >
> > Cheers,
> >
> > Gilles
> >
> > On Wednesday, February 10, 2016, Peter Wind < peter.w...@met.no > wrote:

> > > Hi,
> > >
> > > Under Fortran, MPI_Win_allocate_shared is called with a window size of
> > > zero for some processes.
> > >
> > > The output pointer is then not valid for these processes (null pointer).
> > >
> > > Did I understand this wrongly? Shouldn't the pointers be contiguous, so
> > > that for a zero-sized window, the pointer should point to the start of
> > > the segment of the next rank?
> > >
> > > The documentation explicitly specifies "size = 0 is valid".
> > >
> > > Attached is a small code example, where rank 0 allocates a window of
> > > size zero. All the other ranks get valid pointers, except rank 0.
> > >
> > > Best regards,
> > >
> > > Peter
> > > _______________________________________________
> > > users mailing list
> > > us...@open-mpi.org
> > > Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> > > Link to this post:
> > > http://www.open-mpi.org/community/lists/users/2016/02/28485.php

> > _______________________________________________
> > users mailing list
> > us...@open-mpi.org
> > Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> > Link to this post:
> > http://www.open-mpi.org/community/lists/users/2016/02/28493.php

> --
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/

> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/02/28496.php
