[OMPI users] Nonblocking neighborhood collectives with distributed graph creation

2016-02-29 Thread Jun Kudo
Hello,
I'm trying to use the neighborhood collective communication capabilities
(MPI_Ineighbor_x) of MPI coupled with the distributed graph constructor
(MPI_Dist_graph_create_adjacent) but I'm encountering a segmentation fault
on a test case.

I have attached a 'working' example where I create an MPI communicator with
a simple distributed graph topology in which Rank 0 contains Node 0, which
communicates bi-directionally (receiving from and sending to) with Node 1
located on Rank 1.  I then attempt to send integer messages using the
neighborhood collective MPI_Ineighbor_alltoall.  The program, compiled with
the latest OpenMPI (1.10.2) and run with the command 'mpirun -n 2
./simpleneighborhood', encounters a segmentation fault during the
non-blocking call.  The same program compiled with MPICH (3.2) runs without
any problems and produces the expected results.  To muddy the waters a
little more, the same program compiled with OpenMPI but using the blocking
neighborhood collective, MPI_Neighbor_alltoall, seems to run just fine.

I'm not really sure at this point if I'm making a simple mistake in the
construction of my test or if something is more fundamentally wrong.  I
would appreciate any insight into my problem!

Thanks ahead of time for the help, and let me know if I can provide any more
information.

Sincerely,
Jun
#include <mpi.h>
#include <iostream>

int main (int argc, char* argv[]) {
  MPI_Init(nullptr, nullptr);
  //--> Connect graph to my mpi communicator
  bool reorder = false;
  int indegree  = 1;
  int outdegree = 1;
  int sources[indegree];
  int sweights[indegree];
  int destinations[outdegree];
  int dweights[outdegree];
  int my_rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &my_rank); //get my rank
  if (my_rank == 0) {
    sources[0] = 1;
    sweights[0] = 1;
    destinations[0] = 1;
    dweights[0] = 1;
  } else if (my_rank == 1) {
    sources[0] = 0;
    sweights[0] = 1;
    destinations[0] = 0;
    dweights[0] = 1;
  }

  MPI_Info mpi_info = MPI_INFO_NULL;
  MPI_Info_create(&mpi_info);
  MPI_Comm mpi_comm_with_graph;
  MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD, indegree, sources,
 sweights, outdegree,
 destinations, dweights,
 mpi_info, reorder, &mpi_comm_with_graph);
  MPI_Comm_rank(mpi_comm_with_graph, &my_rank); //get my rank

  //
  //---> Send and receive messages
  int send_number[1];
  int recv_number[1];

  if (my_rank == 0) {
    send_number[0] = 123;
    recv_number[0] = -1;
  } else if (my_rank == 1) {
    send_number[0] = 456;
    recv_number[0] = -1;
  }

  MPI_Request request_array[1];
  request_array[0] = MPI_REQUEST_NULL;

  int error_code = MPI_Ineighbor_alltoall(send_number, 1, MPI::INT,
                                          recv_number, 1, MPI::INT,
                                          mpi_comm_with_graph, request_array);

  MPI_Status status[1];
  MPI_Wait(request_array, status);

  MPI_Finalize();

  std::cout << "Rank : " << my_rank << " send of " << send_number[0] << "\n";
  std::cout << "Rank : " << my_rank << " recv of " << recv_number[0] << "\n";

  return 0; //--> End simulation
}// End Main
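
For comparison, a minimal sketch of the blocking variant mentioned above,
assuming the same mpi_comm_with_graph communicator and send/recv buffers as in
the program; MPI_Neighbor_alltoall completes before returning, so no request
or MPI_Wait is needed:

  // Blocking neighborhood all-to-all over the same graph communicator.
  // MPI_INT is the plain-C equivalent of the MPI::INT handle used above.
  int error_code = MPI_Neighbor_alltoall(send_number, 1, MPI_INT,
                                         recv_number, 1, MPI_INT,
                                         mpi_comm_with_graph);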


[OMPI users] General Questions

2016-02-29 Thread Matthew Larkin
Hello all,
First time on here. Two questions.
1. I know OpenMPI supports Ethernet, but where does it clearly state that? I see
the FAQ on the web page lists a whole set of network interconnects, but how do I
relate those to an Ethernet network, etc.?
2. Does OpenMPI work with PCIe and a PCIe switch? Is there any specific
configuration needed to get it to work?
Thanks!
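
Regarding question 1: ordinary Ethernet is handled by Open MPI's TCP support
(the 'tcp' BTL in the FAQ's interconnect list). A minimal sketch of selecting
it explicitly on the mpirun command line, with a hypothetical program ./a.out
and interface name eth0:

  # use the TCP BTL (Ethernet/IP) plus shared memory and self
  mpirun --mca btl tcp,sm,self -np 2 ./a.out
  # optionally restrict TCP traffic to one interface (name is an assumption)
  mpirun --mca btl tcp,sm,self --mca btl_tcp_if_include eth0 -np 2 ./a.out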

Re: [OMPI users] Nonblocking neighborhood collectives with distributed graph creation

2016-02-29 Thread Gilles Gouaillardet

Thanks for the report and the test case.

This is a bug, and I have pushed a commit to master.
For the time being, you can download a patch for v1.10 at
https://github.com/ggouaillardet/ompi-release/commit/4afdab0aa86e5127767c4dfbdb763b4cb641e37a.patch
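
A minimal sketch of one way to apply the patch to a v1.10 source tree before
rebuilding (the directory name openmpi-1.10.2 is an assumption; adjust to
wherever the source was unpacked):

  cd openmpi-1.10.2
  wget https://github.com/ggouaillardet/ompi-release/commit/4afdab0aa86e5127767c4dfbdb763b4cb641e37a.patch
  patch -p1 < 4afdab0aa86e5127767c4dfbdb763b4cb641e37a.patch
  make all install   # rebuild and reinstall as for the original install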


Cheers,

Gilles

