OK, I started implementing the above Allgather() idea without success
(segmentation fault). So I will post the problematic lines here:
comm.Allgather(&(endata.size), 1, MPI::UNSIGNED_LONG_LONG,
               &(endata_rcv.size), 1, MPI::UNSIGNED_LONG_LONG);
endata_rcv.data = new unsigned char[endata_rcv.size];
If each process sends a different amount of data, then the operation should
be an allgatherv. This also requires that you know the amount each process
will send, so you will need an allgather first. Schematically the code should
look like the following:
long bytes_send_count = endata.size * sizeof(long);
Hello,
In debugging a test of an application, I recently came across odd behavior
for simultaneous MPI_Abort calls. Namely, while the MPI_Abort was
acknowledged by the process output, the mpirun process failed to exit. I
was able to duplicate this behavior on multiple machines with Open MPI
version
OK, I will try to explain a few more things about the shuffling and I have
attached only specific excerpts of the code to avoid confusion. I have
added many comments.
First, let me note that this project is an implementation of the Terasort
benchmark with a master node which assigns jobs to the sl
Hi,
On Tue, Nov 07, 2017 at 02:05:20PM -0700, Nikolas Antolin wrote:
> Hello,
>
> In debugging a test of an application, I recently came across odd behavior
> for simultaneous MPI_Abort calls. Namely, while the MPI_Abort was
> acknowledged by the process output, the mpirun process failed to exit.
On Tue, Nov 7, 2017 at 6:09 PM, Konstantinos Konstantinidis <
kostas1...@gmail.com> wrote:
> OK, I will try to explain a few more things about the shuffling and I have
> attached only specific excerpts of the code to avoid confusion. I have
> added many comments.
>
> First, let me note that this p