Thanks for the report and the test case.
This is a bug, and I pushed a commit to master.
For the time being, you can download a patch for v1.10 at
https://github.com/ggouaillardet/ompi-release/commit/4afdab0aa86e5127767c4dfbdb763b4cb641e37a.patch
Cheers,
Gilles
On 3/1/2016 12:17 AM, Jun Kudo wrote:
Hello,
I'm trying to use the neighborhood collective communication
capabilities (MPI_Ineighbor_x) of MPI coupled with the distributed
graph constructor (MPI_Dist_graph_create_adjacent), but I'm
encountering a segmentation fault on a test case.
I have attached a 'working' example where I create an MPI communicator
with a simple distributed graph topology: Rank 0 contains Node 0,
which communicates bi-directionally (receiving from and sending
to) with Node 1 located on Rank 1. I then attempt to send integer
messages using the neighborhood collective MPI_Ineighbor_alltoall.
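For reference, the core of the example looks roughly like this (a
minimal sketch; the exact buffer sizes and variable names in the
attachment may differ):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank has exactly one in-neighbor and one out-neighbor:
     * the other rank (sketch assumes exactly 2 ranks). */
    int other = (rank == 0) ? 1 : 0;
    int indegree = 1, outdegree = 1;
    int sources[1]      = { other };
    int destinations[1] = { other };

    MPI_Comm graph_comm;
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   indegree, sources, MPI_UNWEIGHTED,
                                   outdegree, destinations, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0 /* no reorder */,
                                   &graph_comm);

    int sendbuf[1] = { rank };
    int recvbuf[1] = { -1 };

    /* The non-blocking neighborhood collective that triggers the crash
     * under Open MPI 1.10.2; the blocking MPI_Neighbor_alltoall works. */
    MPI_Request req;
    MPI_Ineighbor_alltoall(sendbuf, 1, MPI_INT,
                           recvbuf, 1, MPI_INT,
                           graph_comm, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    printf("rank %d received %d\n", rank, recvbuf[0]);

    MPI_Comm_free(&graph_comm);
    MPI_Finalize();
    return 0;
}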
The program, compiled with the latest Open MPI (1.10.2) and run with
the command 'mpirun -n 2 ./simpleneighborhood', encounters a
segmentation fault during the non-blocking call. The same program
compiled with MPICH (3.2) runs without any problems and produces the
expected results. To muddy the waters a little more, the same program
compiled with Open MPI but using the blocking neighborhood collective,
MPI_Neighbor_alltoall, seems to run just fine as well.
I'm not really sure at this point if I'm making a simple mistake in
the construction of my test or if something is more fundamentally
wrong. I would appreciate any insight into my problem!
Thanks ahead of time for the help, and let me know if I can provide
any more information.
Sincerely,
Jun