Hello,
I'm trying to use the neighborhood collective communication capabilities
(MPI_Ineighbor_x) of MPI coupled with the distributed graph constructor
(MPI_Dist_graph_create_adjacent), but I'm encountering a segmentation fault
on a test case.

I have attached a 'working' example in which I create an MPI communicator
with a simple distributed graph topology: Rank 0 contains Node 0, which
communicates bi-directionally (receiving from and sending to) with Node 1
located on Rank 1.  I then attempt to send integer messages using the
neighborhood collective MPI_Ineighbor_alltoall.  The program, compiled with
the latest OpenMPI (1.10.2) and run with the command
'mpirun -n 2 ./simpleneighborhood', encounters a segmentation fault during
the non-blocking call.  The same program compiled with MPICH (3.2) runs
without any problems and gives the expected results.  To muddy the waters a
little more, the same program compiled with OpenMPI but using the blocking
neighborhood collective, MPI_Neighbor_alltoall, seems to run just fine as well.
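For reference, the blocking version that runs cleanly differs from the
attached program only in the collective call itself (same buffers and graph
communicator, no request or MPI_Wait):

  MPI_Neighbor_alltoall(send_number, 1, MPI_INT,
                        recv_number, 1, MPI_INT,
                        mpi_comm_with_graph); // blocking: returns once the exchange completes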

I'm not really sure at this point if I'm making a simple mistake in the
construction of my test or if something is more fundamentally wrong.  I
would appreciate any insight into my problem!

Thanks ahead of time for the help, and let me know if I can provide any
more information.

Sincerely,
Jun
#include <mpi.h>
#include <iostream>

int main (int argc, char* argv[]) {
  MPI_Init(nullptr, nullptr);
  //--> Connect graph to my mpi communicator
  int reorder = 0;          // do not let MPI reorder ranks in the new communicator
  const int indegree  = 1;  // one incoming neighbor per rank
  const int outdegree = 1;  // one outgoing neighbor per rank
  int sources[indegree];
  int sweights[indegree];
  int destinations[outdegree];
  int dweights[outdegree];
  int my_rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &my_rank); //get my rank
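  // Each rank's single source and single destination is the other rank (0 <-> 1)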
  if (my_rank == 0) {
    sources[0] = 1;
    sweights[0] = 1;
    destinations[0] = 1;
    dweights[0] = 1;
  }else if (my_rank == 1) {
    sources[0] = 0;
    sweights[0] = 1;
    destinations[0] = 0;
    dweights[0] = 1;
  }

  MPI_Info mpi_info = MPI_INFO_NULL;
  MPI_Info_create(&mpi_info);
  MPI_Comm mpi_comm_with_graph;
  MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD, indegree, sources,
                                 sweights, outdegree,
                                 destinations, dweights,
                                 mpi_info, reorder, &mpi_comm_with_graph);
  MPI_Comm_rank(mpi_comm_with_graph, &my_rank); //get my rank

  //----------------------------------------------------------------------------
  //---> Send and receive messages
  int send_number[1];
  int recv_number[1];

  if (my_rank == 0) {
    send_number[0] = 123;
    recv_number[0] = -1;
  }else if (my_rank == 1) {
    send_number[0] = 456;
    recv_number[0] = -1;
  }

  MPI_Request request_array[1];
  request_array[0] = MPI_REQUEST_NULL;

  // Non-blocking neighborhood all-to-all: this is the call that segfaults
  // with OpenMPI 1.10.2 but completes normally with MPICH 3.2
  int error_code = MPI_Ineighbor_alltoall
    (send_number, 1, MPI_INT,
     recv_number, 1, MPI_INT,
     mpi_comm_with_graph, request_array);

  MPI_Status status[1];
  MPI_Wait(request_array, status);

  MPI_Finalize();

  std::cout << "Rank : " << my_rank << " send of " << send_number[0] << "\n";
  std::cout << "Rank : " << my_rank << " recv of " << recv_number[0] << "\n";

  return 0; //--> End simulation
}// End Main
