Not just "MPI_Send from 0" -- log both endpoints, e.g.:
MPI_Send from 1 to 0
MPI_Send from 7 to 0
And so on, so you can see which send/receive pairs actually match.
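A likely cause here is not an overflowing receive buffer but the eager/rendezvous switch: messages below Open MPI's eager limit are buffered on the receiver and MPI_Send returns right away, while larger messages block until the matching receive is posted. A blocking MPI_Recv loop that waits for sources in a fixed order can then hang once the array is big enough. A minimal sketch of one workaround, assuming rank 0 gathers a contiguous slice from every other rank (names like SLICE and the buffer layout are illustrative, not from the original code): post all the receives up front with MPI_Irecv, then wait on them together.

```c
/* Sketch: rank 0 collects a 100,000-double slice from each other rank.
 * Posting every receive before waiting lets the sends complete in
 * whatever order they arrive, instead of serializing on one source. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define SLICE 100000  /* doubles per rank, as in the original post */

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        double *buf = malloc((size_t)(size - 1) * SLICE * sizeof(double));
        MPI_Request *reqs = malloc((size_t)(size - 1) * sizeof(MPI_Request));
        /* Post every receive first, then wait on all of them at once. */
        for (int src = 1; src < size; src++)
            MPI_Irecv(buf + (size_t)(src - 1) * SLICE, SLICE, MPI_DOUBLE,
                      src, 0, MPI_COMM_WORLD, &reqs[src - 1]);
        MPI_Waitall(size - 1, reqs, MPI_STATUSES_IGNORE);
        printf("rank 0 received all slices\n");
        free(reqs);
        free(buf);
    } else {
        double *slice = malloc(SLICE * sizeof(double));
        for (int i = 0; i < SLICE; i++) slice[i] = (double)rank;
        MPI_Send(slice, SLICE, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        free(slice);
    }

    MPI_Finalize();
    return 0;
}
```

For this "everyone sends a slice to rank 0" pattern, a collective such as MPI_Gatherv is usually the better fit anyway, since the library handles the ordering for you.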

On Wed, Mar 27, 2019, 8:43 AM carlos aguni <aguni...@gmail.com> wrote:

> Hi all.
>
> I have an MPI application in which, at one point, one rank receives a
> slice of an array from each of the other ranks.
> The problem is that my application hangs there.
>
> One thing I could see from printing out logs is:
> (Rank 0) Starts MPI_Recv from source 4
> But then it receives:
> MPI_Send from 0
> MPI_Send from 1
> ... From 10
> ... From 7
> ... From 6
>
> Then at some point none of them is responding.
> Each message is an array of 100,000 doubles.
> Only later would it receive the message from rank 4.
>
> So I assume the buffer on the Recv side is overflowing.
>
> A few tests:
> - Using a smaller array size works.
> - I already tried MPI_Isend, MPI_Irecv, and MPI_Bsend, and the ranks
> still get stuck.
>
> That leaves me with a few questions beyond how to solve this particular
> issue:
> - How can I find out the size of MPI's internal buffer?
> - How would one debug this?
>
> Regards,
> Carlos.
>
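On the two questions at the end of the quoted message, a hedged sketch (the binary name `my_app` and `<pid>` are placeholders, not from the original post): Open MPI exposes its per-transport eager limits as MCA parameters, and a hung run can be inspected by attaching gdb to the stuck ranks to see where each one is blocked.

```shell
# Show the eager limit for the TCP transport; other BTLs (e.g. shared
# memory) have their own *_eager_limit parameters:
ompi_info --param btl tcp --level 9 | grep eager_limit

# Find the hung rank processes on a node, then grab a backtrace from one
# (<pid> is whatever ps reports for your rank):
ps -ef | grep my_app
gdb -p <pid> -ex "thread apply all bt" -ex detach -ex quit
```

If every stuck rank's backtrace ends inside MPI_Send while rank 0 sits in MPI_Recv on a different source, that points at the ordering deadlock rather than a buffer-size problem.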
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
