Dear all,

I have been running some tests with the MPI_Allgatherv function recently. After
these tests, I found a weird problem with it.

I used MPI_Allgatherv to gather a large amount of data from a number of
processes (for example, 2 GB of data per process). If the number of processes
was even, the function worked well and my network card could receive *and*
send data at its maximum speed at the same time. But if the number of
processes was odd, the problem appeared: my network card could only receive
*or* send data at its maximum speed, not both at once.

*My test environment*: Open MPI 1.4.3, Linux 2.6.32

*My source code*:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    char *psend_buf, *precv_buf;
    int rank;
    int process_cnt;
    int repeat_time;
    double t_start, total_time;     // wall-clock time of the Allgatherv loop

    int *pele_cnt;
    int *pdipal;

    long buf_size;      // assume long is 64 bits wide

    buf_size = 2000;    // 2000 MB per process, sent as 1 MB units (see mpi_meta)
    repeat_time = 1;

    MPI_Init(&argc, &argv);
    MPI_Datatype mpi_meta;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &process_cnt);

    precv_buf = (char*)malloc(buf_size * process_cnt * 1024 * 1024);
    psend_buf = (char*)malloc(buf_size * 1024 * 1024);
    memset(precv_buf, 0, buf_size * process_cnt * 1024 * 1024);
    memset(psend_buf, 1, buf_size * 1024 * 1024);

    // new data type: 1MB per unit
    MPI_Type_contiguous(1024 * 1024, MPI_CHAR, &mpi_meta);
    MPI_Type_commit(&mpi_meta);

    pele_cnt = (int*)malloc(sizeof(int) * process_cnt);
    pdipal = (int*)malloc(sizeof(int) * process_cnt);

    for (int i = 0; i < process_cnt; i++)
    {
        pele_cnt[i] = buf_size;
        pdipal[i] = i * buf_size;
    }

    t_start = MPI_Wtime();          // time the collective with MPI's wall clock
    for (int i = 0; i < repeat_time; i++)
    {
        MPI_Allgatherv(psend_buf, (int)buf_size, mpi_meta, precv_buf, pele_cnt,
                pdipal, mpi_meta, MPI_COMM_WORLD);
    }
    total_time = MPI_Wtime() - t_start;

    printf("rank %d, used time = %.3f\n", rank, total_time);

    free(psend_buf);
    free(precv_buf);

    free(pele_cnt);
    free(pdipal);

    MPI_Type_free(&mpi_meta);
    MPI_Finalize();

    return 0;
}


I guess the problem comes from the algorithm that the MPI_Allgatherv
implementation chooses internally. Has anybody met the same problem, and do
you have any suggestions for me? Thanks
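
In case it helps narrow things down, below is a minimal pairwise MPI_Sendrecv
benchmark I would use to check whether the network card itself can sustain
full-duplex traffic independently of the collective. It is only a sketch: the
message size, iteration count, and the choice of ranks 0 and 1 as the test
pair are arbitrary assumptions, not values taken from my Allgatherv test above.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

// Pairwise full-duplex check: ranks 0 and 1 send and receive the same
// amount of data simultaneously, so the NIC should show traffic in both
// directions at once if the hardware and driver allow it.
#define MSG_SIZE   (64 * 1024 * 1024)   // 64 MB per direction, arbitrary
#define ITERATIONS 20

int main(int argc, char **argv)
{
    int rank, size;
    char *sendbuf, *recvbuf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    sendbuf = (char*)malloc(MSG_SIZE);
    recvbuf = (char*)malloc(MSG_SIZE);

    MPI_Barrier(MPI_COMM_WORLD);

    if (size >= 2 && rank < 2)
    {
        int peer = 1 - rank;            // rank 0 <-> rank 1

        t0 = MPI_Wtime();
        for (int i = 0; i < ITERATIONS; i++)
        {
            // send and receive at the same time with the same partner
            MPI_Sendrecv(sendbuf, MSG_SIZE, MPI_CHAR, peer, 0,
                         recvbuf, MSG_SIZE, MPI_CHAR, peer, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
        t1 = MPI_Wtime();

        printf("rank %d: %.1f MB/s per direction\n", rank,
               (double)MSG_SIZE * ITERATIONS / (t1 - t0) / (1024.0 * 1024.0));
    }

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

If this pair test saturates the link in both directions, the odd/even
difference is probably down to the communication schedule the collective
picks, and it might be worth experimenting with the coll_tuned MCA parameters
to force a different allgatherv algorithm and see whether the behaviour
changes.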

Best Regards
Xianjun
