[OMPI users] MPI_sendrecv = MPI_Send + MPI_RECV ?

2008-09-13 Thread Enrico Barausse
Hello,

I apologize in advance if my question is naive, but I started using
Open MPI only a week ago.
I have a complicated Fortran 90 code which is giving me a segmentation
fault (address not mapped). I have tracked the problem down to the
following lines:

 call MPI_Send(toroot,3,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,ierr)
 call MPI_Recv(tonode,4,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,status,ierr)

The MPI_Send is executed by a process (say 1), which sends the array
toroot to another process (say 0). Process 0 successfully receives the
array toroot (I print out its components and they are correct), does
some calculations on it and sends back an array tonode to process 1.
Nevertheless, the MPI_Send above never returns control to process 1
(although the array toroot seems to have been transmitted alright) and
gives a segmentation fault (Signal code: Address not mapped (1)).

Now, if I replace the two lines above with

call MPI_Sendrecv(toroot,3,MPI_DOUBLE_PRECISION,root,n,tonode,4,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,status,ierr)

I get no errors and the code works perfectly (I tested it against the
serial version from which I started). But, and here is my question,
shouldn't MPI_Sendrecv be equivalent to an MPI_Send followed by an MPI_Recv?

thank you in advance for helping with this

cheers

enrico
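
For reference, here is a minimal, self-contained sketch of the exchange
described above. It is not the original code: the array contents, the tag n
and the "calculation" on process 0 are placeholders. It spells out the full
argument lists and declares status as an integer array of size
MPI_STATUS_SIZE, which the Fortran bindings require. Run it with
"mpirun -np 2 ./exchange".

  program exchange
    use mpi
    implicit none
    integer :: ierr, rank, n
    integer :: status(MPI_STATUS_SIZE)     ! must be an array, not a scalar
    double precision :: toroot(3), tonode(4)
    integer, parameter :: root = 0

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    n = 1                                  ! message tag (placeholder)

    if (rank == 1) then
       toroot = (/ 1.d0, 2.d0, 3.d0 /)
       call MPI_Send(toroot, 3, MPI_DOUBLE_PRECISION, root, n, MPI_COMM_WORLD, ierr)
       call MPI_Recv(tonode, 4, MPI_DOUBLE_PRECISION, root, n, MPI_COMM_WORLD, status, ierr)
       ! the two calls above could equally be written as a single MPI_Sendrecv
    else if (rank == root) then
       call MPI_Recv(toroot, 3, MPI_DOUBLE_PRECISION, 1, n, MPI_COMM_WORLD, status, ierr)
       tonode = (/ toroot, sum(toroot) /)  ! stand-in for the real calculation
       call MPI_Send(tonode, 4, MPI_DOUBLE_PRECISION, 1, n, MPI_COMM_WORLD, ierr)
    end if

    call MPI_Finalize(ierr)
  end program exchange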


Re: [OMPI users] MPI_sendrecv = MPI_Send + MPI_RECV ?

2008-09-13 Thread Eric Thibodeau

Enrico Barausse wrote:

 call MPI_Send(toroot,3,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,ierr)
 call MPI_Recv(tonode,4,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,status,ierr)
  
Well, for starters, your receive count doesn't match the send count
(4 vs. 3). Is this a typo?
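
That said, a receive count larger than what was actually sent is legal in
MPI: the count passed to MPI_Recv is only an upper bound, and MPI_Get_count
on the returned status reports how many elements really arrived. A rough
sketch, reusing root and the tag n from the code above, with nrecv as a
placeholder name:

  double precision :: tonode(4)
  integer :: status(MPI_STATUS_SIZE), ierr, nrecv

  ! receive at most 4 elements, then ask the status how many arrived
  call MPI_Recv(tonode, 4, MPI_DOUBLE_PRECISION, root, n, MPI_COMM_WORLD, status, ierr)
  call MPI_Get_count(status, MPI_DOUBLE_PRECISION, nrecv, ierr)
  print *, 'received', nrecv, 'elements'   ! anything from 0 up to 4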





Re: [OMPI users] MPI_sendrecv = MPI_Send + MPI_RECV ?

2008-09-13 Thread rahmani

- Original Message -
From: "Enrico Barausse" 
To: us...@open-mpi.org
Sent: Saturday, September 13, 2008 8:50:50 AM (GMT-0500) America/New_York
Subject: [OMPI users] MPI_sendrecv = MPI_Send + MPI_RECV ?



Hi,
I think if you use MPI_Isend it will work correctly.
Try this and let me know what happens!
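
For example, something along these lines on the sending side (only a sketch,
reusing the variable names, root and the tag n from the original post; the
send request must be completed with MPI_Wait before toroot is reused):

  integer :: request, ierr
  integer :: rstatus(MPI_STATUS_SIZE), sstatus(MPI_STATUS_SIZE)
  double precision :: toroot(3), tonode(4)

  ! non-blocking send: returns immediately and hands back a request handle
  call MPI_Isend(toroot, 3, MPI_DOUBLE_PRECISION, root, n, MPI_COMM_WORLD, request, ierr)
  ! the matching receive can be posted without waiting for the send to complete
  call MPI_Recv(tonode, 4, MPI_DOUBLE_PRECISION, root, n, MPI_COMM_WORLD, rstatus, ierr)
  ! complete the send before toroot is modified again
  call MPI_Wait(request, sstatus, ierr)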


[OMPI users] dumping checkpoint at customized locations

2008-09-13 Thread arun dhakne
Hi,

I have BLCR installed and I am able to dump checkpoints in $HOME using
ompi-checkpoint. I was wondering whether there is an option (or something
similar) that would let me dump the checkpoints at a customized location,
say /tmp?

-- 
Thanks and Regards,
Arun