I think the issue is with the way you define the send and receive
buffers in the MPI_Alltoall. Keep in mind that the all-to-all pattern
overwrites the entire receive buffer: every rank contributes one
block, so the buffer must have room for all of them. Thus, starting
from a relative displacement into the data (in this case
matrix[wrank*wrows]) is asking for trouble, as the collective will
write outside the receive buffer.
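
To illustrate, here is a minimal sketch of a safe layout. Only
matrix[wrank*wrows], wrank and wrows come from your code; N, slab,
recvbuf and the dummy fill are assumptions I made for the example:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int wsize, wrank;
    MPI_Comm_size(MPI_COMM_WORLD, &wsize);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

    /* Assumed layout: N x N matrix, each rank owns wrows = N / wsize
       contiguous rows (N divisible by wsize). */
    const int N = 8;
    const int wrows = N / wsize;

    double *slab = malloc((size_t)wrows * N * sizeof(double));
    for (int i = 0; i < wrows * N; i++)
        slab[i] = wrank;                        /* dummy data */

    /* 'count' elements go to EACH peer, so the receive buffer needs
       wsize * count elements in total.  A pointer offset into the data,
       like &slab[wrank * wrows], leaves less room than that, and the
       collective writes past the end of the buffer. */
    const int count = wrows * wrows;
    double *recvbuf = malloc((size_t)wsize * count * sizeof(double));

    MPI_Alltoall(slab, count, MPI_DOUBLE,
                 recvbuf, count, MPI_DOUBLE, MPI_COMM_WORLD);

    free(slab);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}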

  George.


On Thu, May 8, 2014 at 10:08 AM, Matthieu Brucher
<matthieu.bruc...@gmail.com> wrote:
> MPI_Alltoall is a blocking call: it should return only once all data
> has been sent and received on the current rank, so there shouldn't
> be any race condition.
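>
> In code form (a sketch only; the buffer size of 64 is arbitrary):
>
> #include <mpi.h>
> int main(int argc, char **argv)
> {
>     MPI_Init(&argc, &argv);
>     int wsize;
>     MPI_Comm_size(MPI_COMM_WORLD, &wsize);
>
>     double sendbuf[64] = {0}, recvbuf[64];   /* room for wsize blocks */
>     int count = 64 / wsize;                  /* elements per peer */
>
>     /* Blocking collective: when the call returns, recvbuf on this rank
>        is complete, no matter how far the other ranks have progressed. */
>     MPI_Alltoall(sendbuf, count, MPI_DOUBLE,
>                  recvbuf, count, MPI_DOUBLE, MPI_COMM_WORLD);
>
>     /* recvbuf is now safe to read or modify; only the nonblocking
>        MPI_Ialltoall would require an MPI_Wait before this point. */
>
>     MPI_Finalize();
>     return 0;
> }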
>
> Cheers,
>
> Matthieu
>
> 2014-05-08 15:53 GMT+02:00 Spenser Gilliland <spen...@gillilanding.com>:
>> George & other list members,
>>
>> I think I may have a race condition in this example that is masked by
>> the print_matrix statement.
>>
>> For example, let's say rank one has a long sleep before reaching the
>> local transpose. Will the other ranks have completed the Alltoall,
>> and when rank one reaches the local transpose, will it be altering
>> data that the other ranks are still sending it?
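>>
>> In code form, the scenario I mean is below (a sketch; sendbuf,
>> recvbuf and the sleep are stand-ins for my actual code):
>>
>> #include <mpi.h>
>> #include <unistd.h>
>> int main(int argc, char **argv)
>> {
>>     MPI_Init(&argc, &argv);
>>     int wsize, wrank;
>>     MPI_Comm_size(MPI_COMM_WORLD, &wsize);
>>     MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
>>
>>     double sendbuf[64] = {0}, recvbuf[64];
>>     MPI_Alltoall(sendbuf, 64 / wsize, MPI_DOUBLE,
>>                  recvbuf, 64 / wsize, MPI_DOUBLE, MPI_COMM_WORLD);
>>
>>     if (wrank == 1)
>>         sleep(5);        /* rank one lags behind the others */
>>
>>     /* The local transpose would happen here: is recvbuf still being
>>        written to by the collective at this point, or is it settled? */
>>
>>     MPI_Finalize();
>>     return 0;
>> }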
>>
>> Thanks,
>> Spenser
>>
>>
>> --
>> Spenser Gilliland
>> Computer Engineer
>> Doctoral Candidate
>
>
>
> --
> Information System Engineer, Ph.D.
> Blog: http://matt.eifelle.com
> LinkedIn: http://www.linkedin.com/in/matthieubrucher
> Music band: http://liliejay.com/
