You may well be right semantically. But the sentence "the first address in the memory segment of process i is consecutive with the last address in the memory segment of process i - 1" is also not easy to interpret correctly for a zero-size segment.
There may be good reasons not to allocate the pointer for a zero-size segment. What I am trying to say is that a new user reading the documentation will not expect this behaviour before trying it out. Couldn't a short sentence in the documentation, like "the pointer should not be used for zero-size segments", clarify this?

Peter

----- Original Message -----
>
> On Thu, Feb 11, 2016 at 02:17:40PM +0000, Peter Wind wrote:
> > I would add that the present situation is bound to give problems for
> > some users.
> > It is natural to divide an array into segments, each process treating
> > its own segment, but needing to read adjacent segments too.
> > MPI_Win_allocate_shared seems to be designed for this.
> > This will work fine as long as no segment has size zero. It can also
> > be expected that most testing would be done with all segments larger
> > than zero.
> > The documentation adding "size = 0 is valid" would also make people
> > confident that it will be consistent for that special case too.
>
> Nope, that statement says it is OK for a rank to specify that the local
> shared memory segment is 0 bytes. Nothing more. The standard
> unfortunately does not define what pointer value is returned for a rank
> that specifies size = 0. Not sure if the RMA working group intentionally
> left that undefined... Anyway, Open MPI does not appear to be out of
> compliance with the standard here.
>
> To be safe you should use MPI_Win_shared_query as suggested. You can
> pass MPI_PROC_NULL as the rank to get the pointer for the first non-zero
> sized segment in the shared memory window.
>
> -Nathan
> HPC-5, LANL
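For anyone landing on this thread later, here is a minimal sketch of the MPI_Win_shared_query pattern Nathan describes, assuming MPI-3 or later. The segment sizes and variable names are made up for illustration; rank 0 deliberately contributes a zero-size segment to exercise the case discussed above.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Illustrative layout: rank 0 contributes a zero-size segment,
           every other rank contributes 100 doubles. */
        MPI_Aint my_size = (rank == 0) ? 0 : 100 * sizeof(double);

        double *my_base;
        MPI_Win win;
        MPI_Win_allocate_shared(my_size, sizeof(double), MPI_INFO_NULL,
                                MPI_COMM_WORLD, &my_base, &win);

        /* Do not rely on my_base when my_size == 0: the standard does
           not define what pointer is returned for a zero-size segment.
           Query instead; passing MPI_PROC_NULL as the rank returns the
           base of the first segment with size > 0 in the window. */
        double *base;
        MPI_Aint qsize;
        int disp_unit;
        MPI_Win_shared_query(win, MPI_PROC_NULL, &qsize, &disp_unit,
                             &base);

        /* To read an adjacent segment, query that rank explicitly, e.g.
           MPI_Win_shared_query(win, rank - 1, &qsize, &disp_unit,
                                &nbr_base);  for rank > 0. */

        printf("rank %d: first non-zero segment starts at %p (%ld bytes)\n",
               rank, (void *)base, (long)qsize);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

With the default contiguous allocation you can also reach neighbouring segments by offsetting from that base pointer, but querying each rank explicitly is the portable way when some segments may be empty.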