Jun,
A patch is available at
https://github.com/ggouaillardet/ompi-release/commit/f277beace9fbe8dd71f733602b5d4b0344d77a29.patch
It is not a bulletproof fix, but it does resolve your problem.
In this case, MPI_Ineighbor_alltoallw is invoked with sendbuf == recvbuf,
and internally, libnbc consid
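
A minimal sketch of the call pattern described above, assuming a simple ring
topology (the actual attachment is not reproduced in the thread): the same base
pointer is passed as both sendbuf and recvbuf to MPI_Ineighbor_alltoallw, with
the send and receive regions separated only by the byte displacements.

/* Illustrative sketch, not Jun's test case: same pointer as sendbuf and recvbuf. */
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* simple ring: each rank receives from (rank-1) and sends to (rank+1) */
    int src = (rank - 1 + size) % size;
    int dst = (rank + 1) % size;

    MPI_Comm graph_comm;
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   1, &src, MPI_UNWEIGHTED,
                                   1, &dst, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0, &graph_comm);

    /* one buffer holds both the send region (offset 0) and the receive
     * region (offset sizeof(int)); the same base pointer is passed twice */
    int buf[2] = { rank, -1 };

    int          counts[1]  = { 1 };
    MPI_Aint     sdispls[1] = { 0 };
    MPI_Aint     rdispls[1] = { (MPI_Aint) sizeof(int) };
    MPI_Datatype types[1]   = { MPI_INT };

    MPI_Request req;
    MPI_Ineighbor_alltoallw(buf, counts, sdispls, types,   /* sendbuf           */
                            buf, counts, rdispls, types,   /* recvbuf == sendbuf */
                            graph_comm, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    MPI_Comm_free(&graph_comm);
    MPI_Finalize();
    return 0;
}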
Gilles,
Thanks for the small bug fix. It helped clear up that test case, but I'm
now running into another segmentation fault on a more complicated problem.
I've attached another 'working' example. This time I am using
MPI_Ineighbor_alltoallw on a triangular topology; node 0 communicates
bi-
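
The rest of the sentence and the attachment are not reproduced here. One
plausible reading of the triangle is a three-rank graph with bidirectional
edges between every pair of ranks; the sketch below assumes that adjacency
purely for illustration and is not Jun's actual example.

/* Assumed three-rank triangle with bidirectional edges between all pairs. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 3) {
        if (rank == 0) fprintf(stderr, "run with exactly 3 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* the two other ranks are neighbors in both directions */
    int nbrs[2] = { (rank + 1) % 3, (rank + 2) % 3 };

    MPI_Comm tri_comm;
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   2, nbrs, MPI_UNWEIGHTED,
                                   2, nbrs, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0, &tri_comm);

    /* exchange one int with each neighbor, separate send and receive buffers */
    int sendbuf[2] = { rank, rank };
    int recvbuf[2] = { -1, -1 };

    int          counts[2]  = { 1, 1 };
    MPI_Aint     sdispls[2] = { 0, (MPI_Aint) sizeof(int) };
    MPI_Aint     rdispls[2] = { 0, (MPI_Aint) sizeof(int) };
    MPI_Datatype types[2]   = { MPI_INT, MPI_INT };

    MPI_Request req;
    MPI_Ineighbor_alltoallw(sendbuf, counts, sdispls, types,
                            recvbuf, counts, rdispls, types,
                            tri_comm, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    printf("rank %d received %d and %d\n", rank, recvbuf[0], recvbuf[1]);

    MPI_Comm_free(&tri_comm);
    MPI_Finalize();
    return 0;
}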
Thanks for the report and the test case,
This is a bug, and I pushed a commit to master.
For the time being, you can download a patch for v1.10 at
https://github.com/ggouaillardet/ompi-release/commit/4afdab0aa86e5127767c4dfbdb763b4cb641e37a.patch
Cheers,
Gilles
On 3/1/2016 12:17 AM, Jun Kudo wrote:
Hello,
I'm trying to use the neighborhood collective communication capabilities
(MPI_Ineighbor_x) of MPI coupled with the distributed graph constructor
(MPI_Dist_graph_create_adjacent), but I'm encountering a segmentation fault
on a test case.
I have attached a 'working' example where I create a MP
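
The attached example is not reproduced in the thread. As a rough, assumed
sketch of the combination being described, the following builds an adjacent
distributed graph communicator over a ring, queries the topology back, and
posts MPI_Ineighbor_allgather (one member of the MPI_Ineighbor_* family) on
it; the adjacency and message sizes are illustrative only.

/* Assumed sketch: dist-graph constructor plus a non-blocking neighborhood collective. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* illustrative adjacency: both ring neighbors, as sources and destinations */
    int nbrs[2] = { (rank - 1 + size) % size, (rank + 1) % size };

    MPI_Comm dist_comm;
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   2, nbrs, MPI_UNWEIGHTED,
                                   2, nbrs, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0, &dist_comm);

    /* the communicator remembers its topology and can be queried */
    int indeg, outdeg, weighted;
    MPI_Dist_graph_neighbors_count(dist_comm, &indeg, &outdeg, &weighted);

    /* send one int to every out-neighbor, receive one from every in-neighbor */
    int sendval = rank;
    int recvvals[2] = { -1, -1 };   /* indeg == 2 here */

    MPI_Request req;
    MPI_Ineighbor_allgather(&sendval, 1, MPI_INT,
                            recvvals, 1, MPI_INT,
                            dist_comm, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    printf("rank %d: indegree %d, received %d and %d\n",
           rank, indeg, recvvals[0], recvvals[1]);

    MPI_Comm_free(&dist_comm);
    MPI_Finalize();
    return 0;
}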