Aurélien Bouteiller wrote:

You can't assume that MPI_Send does buffering.

Yes, but I think this is what Eric meant by misinterpreting Enrico's problem. The communication pattern is: send a message, which is received remotely; some computation happens on the remote side; then data is sent back. No buffering is needed for such a pattern, and the code is "apparently" legal. Something else must be going on in the "real" code that is not captured in the example Enrico sent.

Further, if I understand correctly, the remote process actually receives the data! If this is true, the example is as simple as:

process 1:
    MPI_Send()    // this call blocks

process 0:
    MPI_Recv()    // this call actually receives the data sent by MPI_Send!!!

Enrico originally explained that process 0 actually receives the data. So MPI's internal buffering is presumably not the problem at all: MPI_Send effectively delivers the data to the remote process, but simply never returns control to the user program.

Without buffering, you are in a possible deadlock situation. This pathological case is the exact motivation for the existence of MPI_Sendrecv. You can also use Isend/Recv/Wait, in which case the send never blocks even if the destination is not ready to receive, or MPI_Bsend, which adds explicit buffering and therefore returns control to you before the message transmission has actually begun.
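For instance, a rough sketch of those two alternatives (sendbuf, recvbuf, n, other, tag and request are illustrative names, not taken from Enrico's code; status is an integer array of size MPI_STATUS_SIZE):

      ! Option 1: combined send/receive. MPI orders the two transfers
      ! internally, so this cannot deadlock even if nothing is buffered.
      call MPI_SENDRECV(sendbuf, n, MPI_INTEGER, other, tag, &
                        recvbuf, n, MPI_INTEGER, other, tag, &
                        MPI_COMM_WORLD, status, ierr)

      ! Option 2: non-blocking send, blocking receive, then wait.
      ! MPI_ISEND returns immediately, so the receive is always posted.
      call MPI_ISEND(sendbuf, n, MPI_INTEGER, other, tag, MPI_COMM_WORLD, request, ierr)
      call MPI_RECV(recvbuf, n, MPI_INTEGER, other, tag, MPI_COMM_WORLD, status, ierr)
      call MPI_WAIT(request, status, ierr)

Either variant lets both ranks run the same exchange code without relying on MPI to buffer the outgoing message.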

Aurelien


On 15 Sept 2008, at 01:08, Eric Thibodeau wrote:

Sorry about that, I had misinterpreted your original post as being about the send/receive pair. The example you give below does indeed seem correct, which means you might have to show us the code that doesn't work. Note that I am in no way a Fortran expert; I'm more versed in C. The only hint I'd give a C programmer in this case is: make sure your receiving structures are indeed large enough (i.e. you send 3 elements but eventually receive 4... did you allocate for 3 or for 4 when receiving the converted array?).
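To illustrate the kind of mismatch Eric is describing, here is a hypothetical sketch (the size of tonode is assumed, not taken from Enrico's actual declarations): if the receive buffer were still sized for the original 3-element message, the 4-element reply would overrun it.

      double precision :: tonode(3)   ! assumed: still sized for the 3-element message
      ! Receiving 4 doubles into a 3-element array writes past the end of the
      ! buffer: undefined behaviour, and a plausible cause of a segmentation fault.
      call MPI_RECV(tonode, 4, MPI_DOUBLE_PRECISION, root, n, &
                    MPI_COMM_WORLD, status, ierr)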

Eric

Enrico Barausse wrote:

Sorry, I hadn't changed the subject. I'm reposting:

Hi

I think it's correct. What I want to do is to send a 3-element array from process 1 to process 0 (= root):

call MPI_Send(toroot,3,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,ierr)

In some other part of the code, process 0 acts on the 3-element array, turns it into a 4-element one, and sends it back to process 1, which receives it with:

call MPI_RECV(tonode,4,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,status,ierr)

In practice, what I do is basically given by this simple code (which, unfortunately, does not reproduce the segmentation fault):



      implicit none
      include 'mpif.h'

      integer :: a(5), b(4)
      integer :: id, numprocs, k, ierr
      integer :: status(MPI_STATUS_SIZE)

      a=(/1,2,3,4,5/)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)

      if(numprocs/=2) stop

      if(id==0) then
              ! rank 0: send 5 integers, then receive 4 back
              do k=1,5
                      a=a+1
                      call MPI_SEND(a,5,MPI_INTEGER,1,k,MPI_COMM_WORLD,ierr)
                      call MPI_RECV(b,4,MPI_INTEGER,1,k,MPI_COMM_WORLD,status,ierr)
              end do
      else
              ! rank 1: receive 5 integers, send the first 4 back
              do k=1,5
                      call MPI_RECV(a,5,MPI_INTEGER,0,k,MPI_COMM_WORLD,status,ierr)
                      b=a(1:4)
                      call MPI_SEND(b,4,MPI_INTEGER,0,k,MPI_COMM_WORLD,ierr)
              end do
      end if

      call MPI_FINALIZE(ierr)
      end




--
* Dr. Aurélien Bouteiller
* Sr. Research Associate at Innovative Computing Laboratory
* University of Tennessee
* 1122 Volunteer Boulevard, suite 350
* Knoxville, TN 37996
* 865 974 6321





_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



