Should the send buffer for MPI_Allgatherv() be data instead of &data?
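If data is declared as unsigned char* data = new unsigned char[bytes_send_count]; (I am inferring that from your snippet), then &data is the address of the pointer variable itself, so Allgatherv() sends the raw bytes of the pointer instead of the array contents. The call would then look like this:

//Pass the pointer value (the start of the array), not the address of the pointer
comm.Allgatherv(data, bytes_send_count, MPI::UNSIGNED_CHAR, recv_buf, recv_counts, displs, MPI::UNSIGNED_CHAR);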

BTW, is this issue specific to Open MPI?
If this is a general MPI question, a forum such as https://stackoverflow.com is a better place for it.

Cheers,

Gilles

On 11/30/2017 5:02 PM, Konstantinos Konstantinidis wrote:
Hi, I will use a small piece of C++ code to demonstrate my problem during shuffling. Assume that each slave has to shuffle an unsigned char array, defined as unsigned char* data, within some intracommunicator.

unsigned lineSize = 100;
unsigned long long no_keys = 10;
int bytes_send_count = (int)no_keys*lineSize;
unsigned int commSize = (unsigned)comm.Get_size();
int* recv_counts = new int[commSize];
int* displs = new int[commSize];

//Shuffle amount of data
comm.Allgather(&bytes_send_count, 1, MPI::INT, recv_counts, 1, MPI::INT);

unsigned long long total = 0;
for(unsigned int i = 0; i < commSize; i++){
    //Update the displacements
    displs[i] = total;

    //...and the total count
    total += recv_counts[i];
}

unsigned char* recv_buf = new unsigned char[total];

//Print data to be sent from rank == 1
if(rank == 1){
    for(int l = 0; l < bytes_send_count; l++){
        printf("%d: Data to be sent is %d\n", rank, data[l]);
    }
}

//Shuffle actual data
comm.Allgatherv(&data, bytes_send_count, MPI::UNSIGNED_CHAR, recv_buf, recv_counts, displs, MPI::UNSIGNED_CHAR);

//Check the first portion of the received data
if(rank == 1){
    for(int l = 0; l < recv_counts[0]; l++){
        printf("%d: Data received from myself is %d\n", rank, recv_buf[l]);
    }
}
My problem is that the printf() calls that check what is about to be sent from rank 1 and what rank 1 actually receives from itself print different values that do not match. Based on my study of Allgatherv(), I think that the sizes of the received blocks and the displacements are computed correctly. I don't think I need MPI_IN_PLACE, since the input and output buffers are supposed to be different.
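For what it's worth, a quick sanity check along the following lines (a debugging snippet I sketched for this post, using only the variables above) prints the counts and displacements I would expect:

//Sanity check: displs[i] should equal the running sum of recv_counts[0..i-1]
if(rank == 1){
    for(unsigned int i = 0; i < commSize; i++){
        printf("%d: recv_counts[%u] = %d, displs[%u] = %d\n", rank, i, recv_counts[i], i, displs[i]);
    }
}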

Can you help me identify the problem?

I am using Open MPI 2.1.2 and testing on a single computer with 7 MPI processes. The output of ompi_info is in the attached file.
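In case it matters, I launch it roughly like this (the binary name here is just a placeholder):

mpirun -np 7 ./shuffle_test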


_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
