The provided code sample is not correct, so the real issue has nothing to do
with the amount of data handled by the MPI implementation. Scale the
allocation down to 2^27 elements and the issue will still persist…
Your MPI_Allgatherv operation receives recvCount[i] elements of MPI_INT from each peer a…
As your code prints OK without verifying the correctness of the
result, you are only confirming the absence of a segfault in Open MPI,
which is necessary but not sufficient for correct execution.
It is not uncommon for MPI implementations to have issues near
count=2^31. I can't speak to the extent to which…
Dear All,
I wrote a simple test program that uses the MPI_Allgatherv function. The
problem appears when the send buffer size becomes relatively large.
When Bufsize = 2^28 - 1, run on 4 processors: OK
When Bufsize = 2^28, run on 4 processors: Error
[btl_tcp_frag.c:209:mca_btl_tcp_frag_recv] mca_btl_tcp_frag_recv: