Hello,
Yes, as I hinted in my message, I observed the bug intermittently.
Glad to see it could be fixed so quickly (it affects 2.0 too). I had observed it
for some time, but only recently took the time to make a proper simplified case
and investigate. Guess I should have submitted it sooner.
It seems we took some shortcuts in pml/ob1; the attached patch (for the v1.10
branch) should fix this issue.
Cheers,
Gilles
On Sat, Nov 5, 2016 at 10:08 PM, Gilles Gouaillardet wrote:
That really looks like a bug.
If you rewrite your program with
MPI_Sendrecv(&l, 1, MPI_INT, rank_next, tag, &l_prev, 1, MPI_INT,
rank_prev, tag, MPI_COMM_WORLD, &status);
or even
MPI_Irecv(&l_prev, 1, MPI_INT, rank_prev, tag, MPI_COMM_WORLD, &req);
MPI_Send(&l, 1, MPI_INT, rank_next, tag, MPI_COMM_WORLD);
MPI_Wait(&req, &status);
Hi,
Note your printf line is missing.
If you printf l_prev, then the Valgrind error occurs in all variants.
At first glance, it looks like a false positive, and I will investigate it.
Cheers,
Gilles
On Sat, Nov 5, 2016 at 7:59 PM, Yvan Fournier wrote:
Hello,
I have observed what seem to be false positives running under Valgrind when
Open MPI is built with --enable-memchecker
(at least with versions 1.10.4 and 2.0.1).
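For readers trying to reproduce this, a build-and-run sketch might look like the following. The prefix path and the test-case binary name (`ring_test`) are hypothetical; `--enable-memchecker` is the flag mentioned above, and Valgrind development headers must be available at configure time:

```shell
# Configure and build Open MPI with memchecker support
# (a debug build is generally recommended alongside it).
./configure --enable-memchecker --enable-debug --prefix=$HOME/ompi-memchecker
make -j4 && make install

# Run the test case with one Valgrind instance per MPI rank.
mpirun -n 2 valgrind --error-exitcode=1 ./ring_test
```

Note that this is a configuration sketch, not a verified recipe; the exact configure options needed may differ between the 1.10 and 2.0 series.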
Attached is a simple test case (extracted from a larger code) that sends one int
to rank r+1 and receives from rank r-1
(using