Hello,

I have a custom datatype MPI_EVENTDATA (created with MPI_Type_create_struct) 
which is a struct with some fixed size fields and a variable sized array of 
ints (data). I want to collect a variable number of these types (Events) from 
all ranks at rank 0. My current version is working for a fixed size custom 
datatype:

void collect()
{
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  
  // Use int, not size_t: the count must match the MPI_INT datatype
  // passed to MPI_Allreduce below.
  int globalSize = events.size();
  // Get total number of Events that are to be received
  MPI_Allreduce(MPI_IN_PLACE, &globalSize, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
  std::vector<MPI_Request> requests(globalSize);
  std::vector<MPI_EventData> recvEvents(globalSize);
    
  if (rank == 0) {
    for (int i = 0; i < globalSize; i++) {
      MPI_Irecv(&recvEvents[i], 1, MPI_EVENTDATA, MPI_ANY_SOURCE, MPI_ANY_TAG,
                MPI_COMM_WORLD, &requests[i]);
    }
  }
  for (const auto & ev : events) {
    MPI_EventData eventdata;
    assert(ev.first.size() < 255);
    strcpy(eventdata.name, ev.first.c_str());
    eventdata.rank = rank;
    eventdata.dataSize = ev.second.data.size();
    MPI_Send(&eventdata, 1, MPI_EVENTDATA, 0, 0, MPI_COMM_WORLD);      
  }
  if (rank == 0) {
    MPI_Waitall(globalSize, requests.data(), MPI_STATUSES_IGNORE);
    for (const auto & evdata : recvEvents) {
      // Save in a std::multimap with evdata.name as key
      globalEvents.emplace(std::piecewise_construct,
                           std::forward_as_tuple(evdata.name),
                           std::forward_as_tuple(evdata.name, evdata.rank));
    }
    
  }
}

Obviously, the next step would be to allocate a buffer of size evdata.dataSize,
receive the data, add it to the globalEvents multimap<Event> and be happy.
Questions I have:

* How do I correlate the Events received in the first step with the data
vectors received in the second step?
* Is there a way to use a variable-sized component inside a custom MPI
datatype?
* Or dump the custom datatype and use MPI_Pack instead?
* Or somehow group two succeeding messages together?

I'm open to any good and elegant suggestions!

Thanks,
Florian

_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
