Hello, I apologize in advance if my question is naive, but I only started using Open MPI a week ago. I have a complicated Fortran 90 code which is giving me a segmentation fault (address not mapped). I have tracked the problem down to the following lines:
  call MPI_Send(toroot,3,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD)
  call MPI_RECV(tonode,4,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,status,ierr)

The MPI_Send is executed by a process (say 1), which sends the array toroot to another process (say 0). Process 0 successfully receives toroot (I print out its components and they are correct), does some calculations on it, and sends an array tonode back to process 1. Nevertheless, the MPI_Send routine above never returns control to process 1 (although the array toroot seems to have been transmitted all right) and gives a segmentation fault (Signal code: Address not mapped (1)).

Now, if I replace the two lines above with

  call MPI_Sendrecv(toroot,3,MPI_DOUBLE_PRECISION,root,n,tonode,4,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,status,ierr)

I get no errors and the code works perfectly (I tested it against the serial version from which I started). But, and here is my question: shouldn't MPI_Sendrecv be equivalent to an MPI_Send followed by an MPI_Recv?

Thank you in advance for your help.

Cheers,
enrico
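P.S. In case the context helps, here is a stripped-down, self-contained sketch of the exchange pattern I am describing, with both sides shown. The tag value stored in n, the data values, and the "calculation" done at the root are just placeholders here, not my real code; only the names toroot, tonode, root, n and the counts 3 and 4 are taken from the snippet above.

program exchange
  use mpi
  implicit none
  integer :: rank, n, ierr
  integer :: status(MPI_STATUS_SIZE)
  integer, parameter :: root = 0
  double precision :: toroot(3), tonode(4)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  n = 1   ! message tag (placeholder value)

  if (rank == root) then
     ! the root receives the three values from process 1, does a
     ! stand-in "calculation", and sends four values back
     call MPI_Recv(toroot, 3, MPI_DOUBLE_PRECISION, 1, n, &
                   MPI_COMM_WORLD, status, ierr)
     tonode = sum(toroot)
     call MPI_Send(tonode, 4, MPI_DOUBLE_PRECISION, 1, n, &
                   MPI_COMM_WORLD, ierr)
  else if (rank == 1) then
     ! process 1 sends its three values and waits for the reply
     toroot = (/ 1.d0, 2.d0, 3.d0 /)
     call MPI_Send(toroot, 3, MPI_DOUBLE_PRECISION, root, n, &
                   MPI_COMM_WORLD, ierr)
     call MPI_Recv(tonode, 4, MPI_DOUBLE_PRECISION, root, n, &
                   MPI_COMM_WORLD, status, ierr)
  end if

  call MPI_Finalize(ierr)
end program exchange

(I compile this with mpif90 and run it with mpirun -np 2; it mimics the pattern of sending three values up to the root and getting four back.)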