Diego,

About MPI_Allreduce: if you want to use the same buffer for send and receive,
you should pass MPI_IN_PLACE as the send buffer.
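
For example, something along these lines (just a sketch; the buffer name,
count, reduction op, and communicator are placeholders for whatever you use):

  ! same buffer on every rank: pass MPI_IN_PLACE as the send buffer,
  ! the result overwrites buff in place
  CALL MPI_ALLREDUCE(MPI_IN_PLACE, buff, n, MPI_DOUBLE_PRECISION, &
                     MPI_SUM, MPI_COMM_WORLD, MPIdata%iErr)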

About the error stack: I notice comm is NULL (comm=0x0), which is a bit
surprising...
At first glance, the type creation looks good.
That being said, you do not check that MPIdata%iErr is MPI_SUCCESS after each
MPI call.
I recommend you do this first, so you can catch the error as soon as it
happens and hopefully understand why it occurs.
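
For example, something like this after each call (just a sketch; the abort is
only one way to bail out, and ierr_abort is an extra integer you would declare):

  CALL MPI_TYPE_COMMIT(coltype, MPIdata%iErr)
  IF (MPIdata%iErr .NE. MPI_SUCCESS) THEN
     WRITE(*,*) 'MPI_TYPE_COMMIT(coltype) failed, iErr = ', MPIdata%iErr
     CALL MPI_ABORT(MPI_COMM_WORLD, MPIdata%iErr, ierr_abort)
  END IF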

Cheers,

Gilles

On Wednesday, September 2, 2015, Diego Avesani <diego.aves...@gmail.com>
wrote:

> Dear all,
>
> I have noticed a small difference between Open MPI and Intel MPI.
> For example, with MPI_ALLREDUCE, Intel MPI does not allow using the same
> variable for both the send and receive buffers.
>
> I have written my code with Open MPI, but unfortunately I have to run it on
> an Intel MPI cluster.
> Now I have the following error:
>
> Fatal error in MPI_Isend: Invalid communicator, error stack:
> MPI_Isend(158): MPI_Isend(buf=0x1dd27b0, count=1, INVALID DATATYPE,
> dest=0, tag=0, comm=0x0, request=0x7fff9d7dd9f0) failed
>
>
> This is how I create my type:
>
>   CALL MPI_TYPE_VECTOR(1, Ncoeff_MLS, Ncoeff_MLS, MPI_DOUBLE_PRECISION, &
>                        coltype, MPIdata%iErr)
>   CALL MPI_TYPE_COMMIT(coltype, MPIdata%iErr)
>   !
>   CALL MPI_TYPE_VECTOR(1, nVar, nVar, coltype, MPI_WENO_TYPE, MPIdata%iErr)
>   CALL MPI_TYPE_COMMIT(MPI_WENO_TYPE, MPIdata%iErr)
>
>
> Do you believe the problem is here?
> Is this also the way Intel MPI creates a datatype?
>
> Maybe I could also ask the Intel MPI users.
> What do you think?
>
> Diego
>
>
