Dear all,

I am really sorry for all the time that you have dedicated to me.

This is what I found:

 REQUEST = MPI_REQUEST_NULL
 !send data shared with the UP neighbour
 IF(MPIdata%rank.NE.0)THEN
    MsgLength = MPIdata%imaxN
    DO icount=1,MPIdata%imaxN
       iNode = MPIdata%nodeFromUp(icount)
       send_messageL(icount) = R1(iNode)
    ENDDO
    CALL MPI_ISEND(send_messageL, MsgLength, MPIdata%AUTO_COMP, &
         MPIdata%rank-1, MPIdata%rank, MPI_COMM_WORLD, REQUEST(1), MPIdata%iErr)
 ENDIF
 !
 !receive message FROM the UP CPU
 IF(MPIdata%rank.NE.MPIdata%nCPU-1)THEN
    MsgLength = MPIdata%imaxN
    CALL MPI_IRECV(recv_messageR, MsgLength, MPIdata%AUTO_COMP, &
         MPIdata%rank+1, MPIdata%rank+1, MPI_COMM_WORLD, REQUEST(2), MPIdata%iErr)
 ENDIF
 CALL MPI_WAITALL(nMsg,REQUEST,send_status_list,MPIdata%iErr)
 IF(MPIdata%rank.NE.MPIdata%nCPU-1)THEN
    DO i=1,MPIdata%imaxN
       iNode=MPIdata%nodeList2Up(i)
       R1(iNode)=recv_messageR(i)
    ENDDO
 ENDIF

As you can see, there is an nMsg which is set equal to "3". Since REQUEST
only has two entries, do I have to set it equal to 2 instead? Am I right?
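
Just to be sure that I have understood, here is a minimal sketch of what I
think the corrected call should look like (the declarations are only my
guess, since the real ones live in my modules; MPI_STATUS_SIZE and
MPI_REQUEST_NULL come from the mpi module):

 ! minimal sketch, assuming USE mpi (or INCLUDE 'mpif.h') is in scope
 INTEGER, PARAMETER :: nMsg = 2   ! must match the size of REQUEST
 INTEGER :: REQUEST(nMsg)
 INTEGER :: send_status_list(MPI_STATUS_SIZE,nMsg)
 INTEGER :: iErr
 !
 ! null requests are legal: MPI_WAITALL simply skips them, so a rank
 ! that posts only the send (or only the receive) still works
 REQUEST = MPI_REQUEST_NULL
 ! ... MPI_ISEND fills REQUEST(1), MPI_IRECV fills REQUEST(2) as above ...
 CALL MPI_WAITALL(nMsg,REQUEST,send_status_list,iErr)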





Diego


On 29 January 2016 at 14:09, Gilles Gouaillardet <gilles.gouaillar...@gmail.com> wrote:

> Diego,
>
> First, you can double-check that the program you are running has been
> compiled from your sources.
>
> then you can run your program under a debugger and browse the stack when
> it crashes.
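>
> for example, with Open MPI you could try something like this (just one
> common way to do it, assuming an X display is available):
>
>     mpirun -np 2 xterm -e gdb ./your_program
>
> then type "run" in each gdb window and print the backtrace ("bt") when
> it crashes.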
>
> there could be a bug in Intel MPI that incorrectly translates 2 in Fortran
> into 3 in C, but as far as I am concerned, that is extremely unlikely.
>
> Cheers,
>
> Gilles
>
> On Friday, January 29, 2016, Diego Avesani <diego.aves...@gmail.com>
> wrote:
>
>> Dear all, Dear Jeff, Dear Gilles,
>>
>> I am sorry; probably I am being stubborn.
>>
>> In all my code I have
>>
>> CALL MPI_WAITALL(2,REQUEST,send_status_list,MPIdata%iErr)
>>
>> How can it become "3"?
>>
>> The only thing that I can think of is that MPI indexes the array from
>> "0", while Fortran starts from 1. Indeed, I allocate REQUEST(2).
>>
>> What do you think?
>>
>> Diego
>>
>>
>> On 29 January 2016 at 12:43, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
>> wrote:
>>
>>> You must have an error elsewhere in your code; as Gilles pointed out,
>>> the error message states that you are calling MPI_WAITALL with a first
>>> argument of 3:
>>>
>>> ------
>>> MPI_Waitall(271): MPI_Waitall(count=3, req_array=0x7445f0, status_array=0x744600) failed
>>> ------
>>>
>>> We can't really help you with problems with Intel MPI; sorry.  You'll
>>> need to contact their tech support for assistance.
>>>
>>>
>>>
>>> > On Jan 29, 2016, at 6:11 AM, Diego Avesani <diego.aves...@gmail.com> wrote:
>>> >
>>> > Dear all, Dear Gilles,
>>> >
>>> > I do not understand; I am sorry.
>>> > I did a "grep" on my code and I found only "MPI_WAITALL(2", so I am
>>> > not able to find the error.
>>> >
>>> >
>>> > Thanks a lot
>>> >
>>> >
>>> >
>>> > Diego
>>> >
>>> >
>>> > On 29 January 2016 at 11:58, Gilles Gouaillardet <gilles.gouaillar...@gmail.com> wrote:
>>> > Diego,
>>> >
>>> > your code snippet does MPI_Waitall(2,...)
>>> > but the error is about MPI_Waitall(3,...)
>>> >
>>> > Cheers,
>>> >
>>> > Gilles
>>> >
>>> >
>>> > On Friday, January 29, 2016, Diego Avesani <diego.aves...@gmail.com> wrote:
>>> > Dear all,
>>> >
>>> > I have created a program in Fortran with Open MPI; I tested it on my
>>> > laptop and it works.
>>> > I would like to use it on a cluster that has, unfortunately, Intel MPI.
>>> >
>>> > The program crashes on the cluster and I get the following error:
>>> >
>>> > Fatal error in MPI_Waitall: Invalid MPI_Request, error stack:
>>> > MPI_Waitall(271): MPI_Waitall(count=3, req_array=0x7445f0, status_array=0x744600) failed
>>> > MPI_Waitall(119): The supplied request in array element 2 was invalid (kind=0)
>>> >
>>> > Do Open MPI and Intel MPI have some difference that I do not know about?
>>> >
>>> > This is my code:
>>> >
>>> >  REQUEST = MPI_REQUEST_NULL
>>> >  !send data shared with left
>>> >  IF(MPIdata%rank.NE.0)THEN
>>> >     MsgLength = MPIdata%imaxN
>>> >     DO icount=1,MPIdata%imaxN
>>> >        iNode = MPIdata%nodeFromUp(icount)
>>> >        send_messageL(icount) = R1(iNode)
>>> >     ENDDO
>>> >     CALL MPI_ISEND(send_messageL, MsgLength, MPIdata%AUTO_COMP, &
>>> >          MPIdata%rank-1, MPIdata%rank, MPI_COMM_WORLD, REQUEST(1), MPIdata%iErr)
>>> >  ENDIF
>>> >  !
>>> >  !receive message FROM the RIGHT CPU
>>> >  IF(MPIdata%rank.NE.MPIdata%nCPU-1)THEN
>>> >     MsgLength = MPIdata%imaxN
>>> >     CALL MPI_IRECV(recv_messageR, MsgLength, MPIdata%AUTO_COMP, &
>>> >          MPIdata%rank+1, MPIdata%rank+1, MPI_COMM_WORLD, REQUEST(2), MPIdata%iErr)
>>> >  ENDIF
>>> >  CALL MPI_WAITALL(2,REQUEST,send_status_list,MPIdata%iErr)
>>> >  IF(MPIdata%rank.NE.MPIdata%nCPU-1)THEN
>>> >     DO i=1,MPIdata%imaxN
>>> >        iNode=MPIdata%nodeList2Up(i)
>>> >        R1(iNode)=recv_messageR(i)
>>> >     ENDDO
>>> >  ENDIF
>>> >
>>> > Thanks a lot for your help.
>>> >
>>> >
>>> >
>>> > Diego
>>> >
>>> >
>>>
>>>
>>> --
>>> Jeff Squyres
>>> jsquy...@cisco.com
>>> For corporate legal information go to:
>>> http://www.cisco.com/web/about/doing_business/legal/cri/
>>>
>>>
>>
>>
