Hi,

let me say that it is still not clear to me why you want to reimplement the MPI_Gather supplied by the MPI implementation with your own version. You will not be able to attain the same level of performance using point-to-point communication, since MPI_Gather internally uses a binomial tree or even more advanced algorithms to improve its overall performance.

From your code snippet, I guess the problem arises because you are not taking into account the extent of the datatype you are trying to receive. The extent can be queried with

int MPI_Type_get_extent(MPI_Datatype datatype, MPI_Aint *lb, MPI_Aint *extent);


MPI_Aint lower_bound, extent;
int rc;

/* buff must refer to the receive buffer as a char *, so that the offset
   computed from the extent is applied in bytes */
char *buff = (char *) recvbuff;

rc = MPI_Type_get_extent(recvtype, &lower_bound, &extent);
if (rc == MPI_SUCCESS)
    MPI_Irecv(buff + (i - 1) * recvcount * extent, recvcount, recvtype,
              i, 0, comm, &array_request[i - 1]);

or, if you prefer array notation,

MPI_Irecv(&buff[(i - 1) * recvcount * extent], recvcount, recvtype, i, 0, comm, &array_request[i - 1]);


In practice, for a basic datatype such as MPI_DOUBLE the extent equals the size reported by sizeof() of the corresponding C type (here, sizeof(double)); for a derived datatype it also accounts for any padding in the type map.
Using MPI_Type_get_extent() is the portable way of obtaining this value in MPI.
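
For completeness, here is a minimal sketch of how the whole wrapper could look once the extent is taken into account. Only the MPI_FT_Gather name and the request array come from your snippet; the send side, the local copy at the root and the lack of error checking are my own simplifications, so treat it as an illustration rather than a drop-in replacement for MPI_Gather:

#include <mpi.h>
#include <stdlib.h>

int MPI_FT_Gather(void *sendbuf, int sendcount, MPI_Datatype sendtype,
                  void *recvbuf, int recvcount, MPI_Datatype recvtype,
                  int root, MPI_Comm comm)
{
    int i, r, rank, nprocs;
    MPI_Aint lb, extent;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);

    if (rank == root) {
        /* byte-based pointer, so that extent offsets are applied correctly */
        char *buff = (char *) recvbuf;
        MPI_Request *array_request = malloc((nprocs - 1) * sizeof(MPI_Request));

        MPI_Type_get_extent(recvtype, &lb, &extent);

        /* post one receive per remote rank; block i lands at byte offset
           i * recvcount * extent, as in MPI_Gather */
        for (i = 0, r = 0; i < nprocs; i++) {
            if (i == root)
                continue;
            MPI_Irecv(buff + (MPI_Aint) i * recvcount * extent,
                      recvcount, recvtype, i, 0, comm, &array_request[r++]);
        }

        /* copy the root's own contribution into its slot, honouring the datatypes */
        MPI_Sendrecv(sendbuf, sendcount, sendtype, 0, 0,
                     buff + (MPI_Aint) root * recvcount * extent,
                     recvcount, recvtype, 0, 0,
                     MPI_COMM_SELF, MPI_STATUS_IGNORE);

        MPI_Waitall(nprocs - 1, array_request, MPI_STATUSES_IGNORE);
        free(array_request);
    } else {
        MPI_Send(sendbuf, sendcount, sendtype, root, 0, comm);
    }

    return MPI_SUCCESS;
}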

Regards,
Massimo




On 31 Mar 2009, at 12:16, Gabriele Fatigati wrote:

Mm,
Open MPI functions like MPI_Irecv do pointer arithmetic over the receive
buffer using the type information in ompi_datatype_t, I suppose. I'm trying
to write a wrapper around MPI_Gather using MPI_Irecv:

int MPI_FT_Gather(void *sendbuf, int sendcount, MPI_Datatype sendtype,
                  void *recvbuff, int recvcount, MPI_Datatype recvtype,
                  int root, MPI_Comm comm)
{
    ...

    for (nprocs..) {

        MPI_Irecv(&recvbuff[(i-1)], recvcount, recvtype, i, 0, comm,
                  &array_request[i-1]);

    }

}

where every process sends 1 double. It doesn't work (the received values
are wrong), because MPI_Irecv ends up doing pointer arithmetic on a void
buffer. In fact, if I write:

double *buff = (double *) recvbuff;   /* cast needed: recvbuff is a void * */

for (nprocs..) {

    MPI_Irecv(&buff[(i-1)], recvcount, recvtype, i, 0, comm,
              &array_request[i-1]);

}

it works well. Do you have an idea for a portable way to do this (if it is
possible)?



2009/3/30 Massimo Cafaro <massimo.caf...@unisalento.it>:
Dear Gabriele,

to the best of my knowledge, the MPI standard does not provide such a
function.
The reason is that when calling MPI_Gather, the standard requires matching type signatures (i.e., the sendcount and sendtype arguments on each of the non-root processes must match the recvcount and recvtype arguments at the
root process). This still allows distinct type maps (type and
displacement pairs) at a sender process and at the root process, but that is a
feature seldom used in practice, at least in my experience.

Therefore, you must know in advance the datatype you are receiving, even
when that datatype is a derived datatype.
If not, the likely outcome is that the receive buffer at the root process
gets overwritten, which causes MPI_Gather to return an error.

Given the signature of the MPI_Gather function, the only possibility I see to achieve what you are trying to do is to use the MPI_BYTE datatype and use the communicator argument to distinguish between a collective gather in which you receive MPI_INT, MPI_DOUBLE, etc. Of course, I would not create, nor
recommend creating, new communicators for this purpose only.
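
Just to make the idea concrete, here is a minimal sketch of the MPI_BYTE variant; the gather_doubles_as_bytes name and the assumption that the payload is a double are mine, not something the standard mandates:

#include <mpi.h>

/* Gather one double from each rank as raw bytes. The root must already
   know, out of band (e.g. from the communicator being used), that the
   payload is a double, because MPI_BYTE carries no type information and
   performs no data conversion on heterogeneous systems. */
static int gather_doubles_as_bytes(double *sendval, double *recvbuf,
                                   int root, MPI_Comm comm)
{
    return MPI_Gather(sendval, (int) sizeof(double), MPI_BYTE,
                      recvbuf, (int) sizeof(double), MPI_BYTE,
                      root, comm);
}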

Kind regards,
Massimo

On 30 Mar 2009, at 17:43, Gabriele Fatigati wrote:

Dear Open MPI developers,
I'm writing an MPI_Gather wrapper that collects elements passed as void
pointers. My question is: is there a portable way to know the type of the
received elements, like MPI_INT or MPI_DOUBLE? I've noticed that I can
retrieve this information from the ompi_datatype_t->name field, but I think
that isn't portable. Is there an MPI function that does this check?
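
(A hedged aside: if all that is needed is a portable replacement for peeking at the ompi_datatype_t->name field of a datatype handle you already hold, the standard MPI_Type_get_name() routine provides exactly that; it does not, however, tell you anything about what a sender actually transmitted. A minimal sketch, with a hypothetical print_type_name() helper:)

#include <mpi.h>
#include <stdio.h>

/* Print the name of a datatype handle we already hold; for predefined
   datatypes such as MPI_DOUBLE the default name is "MPI_DOUBLE". */
static void print_type_name(MPI_Datatype type)
{
    char name[MPI_MAX_OBJECT_NAME];
    int len;

    MPI_Type_get_name(type, name, &len);
    printf("datatype: %s\n", name);
}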

--
Ing. Gabriele Fatigati

Parallel programmer

CINECA Systems & Tecnologies Department

Supercomputing Group

Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy

www.cineca.it                    Tel:   +39 051 6171722

g.fatigati [AT] cineca.it

--

*******************************************************************************************************

Massimo Cafaro, Ph.D.
Assistant Professor
Dept. of Engineering for Innovation
University of Salento, Lecce, Italy
Via per Monteroni
73100 Lecce, Italy

Additional affiliations:
National Nanotechnology Laboratory (NNL/CNR-INFM)
Euro-Mediterranean Centre for Climate Change
SPACI Consortium

E-mail  massimo.caf...@unisalento.it
        caf...@ieee.org
        caf...@acm.org
Voice   +39 0832 297371
Fax     +39 0832 298173
Web     http://sara.unisalento.it/~cafaro

*******************************************************************************************************


