You must have an error elsewhere in your code; as Gilles pointed out, the error 
message states that you are calling MPI_WAITALL with a first argument of 3:

------
MPI_Waitall(271): MPI_Waitall(count=3, req_array=0x7445f0, 
status_array=0x744600) failed
------

We can't really help you with problems with Intel MPI; sorry.  You'll need to 
contact their tech support for assistance.
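
Note, too, that the count argument is often passed as a variable rather than a 
literal, so grepping for "MPI_WAITALL(2" can miss the offending call; grep for 
MPI_WAITALL by itself, inspect every call site, and check that nothing writes 
past the end of your REQUEST array before the wait.

For reference, here is a minimal standalone sketch of the same pattern 
(hypothetical names and buffers, not taken from your code): every request is 
pre-set to MPI_REQUEST_NULL, and the count passed to MPI_WAITALL matches the 
size of the request array.

  PROGRAM waitall_sketch
    USE mpi
    IMPLICIT NONE
    INTEGER :: ierr, rank, nproc
    INTEGER :: sendbuf, recvbuf
    INTEGER :: request(2)
    INTEGER :: statuses(MPI_STATUS_SIZE, 2)

    CALL MPI_INIT(ierr)
    CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, ierr)

    ! Pre-set every entry: MPI_WAITALL treats MPI_REQUEST_NULL as
    ! already complete, so ranks that skip the send or the recv are safe.
    request = MPI_REQUEST_NULL

    ! Send to the left neighbor (every rank except rank 0).
    IF (rank > 0) THEN
       sendbuf = rank
       CALL MPI_ISEND(sendbuf, 1, MPI_INTEGER, rank-1, rank, &
                      MPI_COMM_WORLD, request(1), ierr)
    END IF

    ! Receive from the right neighbor (every rank except the last).
    IF (rank < nproc-1) THEN
       CALL MPI_IRECV(recvbuf, 1, MPI_INTEGER, rank+1, rank+1, &
                      MPI_COMM_WORLD, request(2), ierr)
    END IF

    ! The count must match the request array: passing 3 here would make
    ! the library read one handle past the end of REQUEST, which fails
    ! exactly like your error stack ("array element 2 was invalid").
    CALL MPI_WAITALL(2, request, statuses, ierr)

    CALL MPI_FINALIZE(ierr)
  END PROGRAM waitall_sketch

If the MPI_WAITALL in your snippet is the only one, its count of 2 is 
consistent, which again points at a different call site (or at corruption of 
the request array) as the source of the count=3 error.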



> On Jan 29, 2016, at 6:11 AM, Diego Avesani <diego.aves...@gmail.com> wrote:
> 
> Dear all, Dear Gilles,
> 
> I do not understand, I am sorry. 
> I did a "grep" on my code and found only "MPI_WAITALL(2", so I am not able 
> to find the error.
> 
> 
> Thanks a lot
> 
> 
> 
> Diego
> 
> 
> On 29 January 2016 at 11:58, Gilles Gouaillardet 
> <gilles.gouaillar...@gmail.com> wrote:
> Diego, 
> 
> your code snippet calls MPI_Waitall(2,...),
> but the error is about MPI_Waitall(3,...).
> 
> Cheers,
> 
> Gilles
> 
> 
> On Friday, January 29, 2016, Diego Avesani <diego.aves...@gmail.com> wrote:
> Dear all, 
> 
> I have written a program in Fortran with Open MPI; I tested it on my laptop 
> and it works.
> I would like to use it on a cluster that, unfortunately, has Intel MPI.
> 
> The program crashes on the cluster and I get the following error:
> 
> Fatal error in MPI_Waitall: Invalid MPI_Request, error stack:
> MPI_Waitall(271): MPI_Waitall(count=3, req_array=0x7445f0, 
> status_array=0x744600) failed
> MPI_Waitall(119): The supplied request in array element 2 was invalid (kind=0)
> 
> Do Open MPI and Intel MPI have some difference that I do not know about?
> 
> This is my code:
> 
>  REQUEST = MPI_REQUEST_NULL
>  !send shared data to the left CPU
>  IF(MPIdata%rank.NE.0)THEN
>     MsgLength = MPIdata%imaxN
>     DO icount=1,MPIdata%imaxN
>             iNode = MPIdata%nodeFromUp(icount)
>             send_messageL(icount) = R1(iNode)
>     ENDDO
>     CALL MPI_ISEND(send_messageL, MsgLength, MPIdata%AUTO_COMP, &
>                    MPIdata%rank-1, MPIdata%rank, MPI_COMM_WORLD, REQUEST(1), MPIdata%iErr)
>  ENDIF
>  !
>  !receive message FROM RIGHT CPU
>  IF(MPIdata%rank.NE.MPIdata%nCPU-1)THEN
>     MsgLength = MPIdata%imaxN
>     CALL MPI_IRECV(recv_messageR, MsgLength, MPIdata%AUTO_COMP, &
>                    MPIdata%rank+1, MPIdata%rank+1, MPI_COMM_WORLD, REQUEST(2), MPIdata%iErr)
>  ENDIF
>  CALL MPI_WAITALL(2,REQUEST,send_status_list,MPIdata%iErr)
>  IF(MPIdata%rank.NE.MPIdata%nCPU-1)THEN
>     DO i=1,MPIdata%imaxN
>        iNode=MPIdata%nodeList2Up(i)
>        R1(iNode)=recv_messageR(i)
>     ENDDO
>  ENDIF
> 
> Thanks a lot for your help.
> 
> 
> 
> Diego
> 
> 


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/
