On May 24, 2012, at 23:18, Dave Goodell <good...@mcs.anl.gov> wrote:

> On May 24, 2012, at 8:13 PM CDT, Jeff Squyres wrote:
> 
>> On May 24, 2012, at 11:57 AM, Lisandro Dalcin wrote:
>> 
>>> The standard says this:
>>> 
>>> "Within each group, all processes provide the same recvcounts
>>> argument, and provide input vectors of  sum_i^n recvcounts[i] elements
>>> stored in the send buffers, where n is the size of the group"
>>> 
>>> So, I read " Within each group, ... where n is the size of the group"
>>> as being the LOCAL group size.
>> 
>> Actually, that seems like a direct contradiction with the prior sentence: 
>> 
>> If comm is an intercommunicator, then the result of the reduction of the 
>> data provided by processes in one group (group A) is scattered among 
>> processes in the other group (group B), and vice versa.
>> 
>> It looks like the implementors of 2 implementations agree that recvcounts 
>> should be the size of the remote group.  Sounds like this needs to be 
>> brought up in front of the Forum...
> 
> So I take back my prior "right".  Upon further inspection of the text and the 
> MPICH2 code I believe it to be true that the number of the elements in the 
> recvcounts array must be equal to the size of the LOCAL group.

This is quite illogical, but it would not be the first time the standard is 
lacking in some respects. So, if I understand you correctly, in the 
intercommunicator case a process doesn't know how much data it has to reduce, 
at least not until it receives the recvcounts array from the remote group. 
Weird!

It makes much more sense to read it the other way. That would remove the need 
for an extra communication, since every rank would know everything from the 
start: what it has to scatter to the remote group, as well as [based on the 
remote recvcounts] what it has to reduce in the local group.
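
To make the two readings concrete, here is a minimal sketch (mine, not taken 
from either implementation) that bridges two equal halves of MPI_COMM_WORLD 
with an intercommunicator and calls MPI_Reduce_scatter on it. Because the two 
groups have the same size and every recvcounts entry is 1, the call is valid 
under either reading; the comments mark where the readings would diverge once 
the group sizes differ. Run it with an even number of processes (at least 2).

  /* Sketch: MPI_Reduce_scatter over an intercommunicator built from two
   * equal halves of MPI_COMM_WORLD.  Run with an even number of ranks. */
  #include <mpi.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      MPI_Comm intra, inter;
      int wrank, wsize, color, local_size, remote_size;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
      MPI_Comm_size(MPI_COMM_WORLD, &wsize);

      /* Split the world in two halves and bridge them. */
      color = (wrank < wsize / 2) ? 0 : 1;
      MPI_Comm_split(MPI_COMM_WORLD, color, wrank, &intra);
      MPI_Intercomm_create(intra, 0, MPI_COMM_WORLD,
                           color ? 0 : wsize / 2,  /* leader of other half */
                           0, &inter);

      MPI_Comm_size(inter, &local_size);          /* size of my (local) group */
      MPI_Comm_remote_size(inter, &remote_size);  /* size of the other group  */

      /* Local-group reading (what Dave reports MPICH2 does): recvcounts has
       * local_size entries.  Remote-group reading: it has remote_size
       * entries.  With equal halves the two coincide, so this sketch is
       * correct either way; with unequal halves you must pick one. */
      int n = local_size;                         /* == remote_size here */
      int *recvcounts = malloc(n * sizeof(int));
      for (int i = 0; i < n; i++)
          recvcounts[i] = 1;

      /* sendbuf holds the vector reduced across my group and scattered over
       * the other group; recvbuf gets my 1-element block of their result. */
      double *sendbuf = calloc(n, sizeof(double));
      double *recvbuf = calloc(1, sizeof(double));

      MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts, MPI_DOUBLE,
                         MPI_SUM, inter);

      free(sendbuf); free(recvbuf); free(recvcounts);
      MPI_Comm_free(&inter);
      MPI_Comm_free(&intra);
      MPI_Finalize();
      return 0;
  }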

  George.

> The text certainly could use a bit of clarification.  I'll bring it up at the 
> meeting next week.
> 
> -Dave