Hello,

Please find attached a minimal example - hopefully it does what you intended.

Regards
Christoph

--

Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
Nobelstrasse 19
70569 Stuttgart

Tel: ++49(0)711-685-87203
email: nietham...@hlrs.de
http://www.hlrs.de/people/niethammer



----- Original Message -----
From: "Pradeep Jha" <prad...@ccs.engg.nagoya-u.ac.jp>
To: "Open MPI Users" <us...@open-mpi.org>
Sent: Friday, 10 January 2014 10:23:40
Subject: Re: [OMPI users] Calling a variable from another processor



Thanks for your responses. I am still not able to figure it out. I will further 
simplify my problem statement. Can someone please help me with a Fortran 90 code 
for it? 


1) I have N processors each with an array A of size S 
2) On any random processor (say rank X), I calculate two integer values, Y 
and Z (0<=Y<N and 0<Z<=S). 
3) On processor X, I want to get the value of A(Z) on processor Y. 


This operation will happen in parallel on each processor. Can anyone please help 
me with this? 
2014/1/9 Jeff Hammond <jeff.scie...@gmail.com> 


One-sided is quite simple to understand. It is like file I/O: you read/write 
(get/put) to a memory object. If you want to make it hard to screw up, use 
passive target and wrap your calls in lock/unlock so every operation is globally 
visible where it's called. 

I've never deadlocked RMA, while p2p is easy to hang for nontrivial patterns 
unless you only do nonblocking plus waitall. 
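A hedged sketch of the passive-target pattern Jeff describes, in the style of the attached example (assuming a window `win` that exposes an integer array, and `Y`, `Z` as in Pradeep's problem statement; a sketch, not a definitive implementation):

```fortran
! Passive-target access: no matching call is needed on rank Y.
! The Get is guaranteed complete when MPI_Win_unlock returns.
integer :: val, ierr
integer (kind=MPI_ADDRESS_KIND) :: disp

disp = Z - 1   ! zero-based displacement of A(Z) in the window
call MPI_Win_lock(MPI_LOCK_SHARED, Y, 0, win, ierr)
call MPI_Get(val, 1, MPI_INTEGER, Y, disp, 1, MPI_INTEGER, win, ierr)
call MPI_Win_unlock(Y, win, ierr)
```

Since each rank locks only the target it reads from, every rank can do this concurrently, which matches the "happens in parallel on each processor" requirement.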

If one finds MPI too hard to learn, there are both GA/ARMCI and OpenSHMEM 
implementations over MPI-3 already (I wrote both...). 

The bigger issue is that Open MPI doesn't support MPI-3 RMA, just the MPI-2 RMA 
stuff, and even then, datatypes are broken with RMA. Both ARMCI-MPI3 and OSHMPI 
(OpenSHMEM over MPI-3) require a late-model MPICH-derivative to work, but these 
are readily available on every platform normal people use (BGQ is the only 
system missing, and that will be resolved soon). I've run MPI-3 on my Mac 
(MPICH), clusters (MVAPICH), Cray (CrayMPI), and SGI (MPICH). 

Best, 

Jeff 

Sent from my iPhone 



> On Jan 9, 2014, at 5:39 AM, "Jeff Squyres (jsquyres)" < jsquy...@cisco.com > 
> wrote: 
> 
> MPI one-sided stuff is actually pretty complicated; I wouldn't suggest it for 
> a beginner (I don't even recommend it for many MPI experts ;-) ). 
> 
> Why not look at the MPI_SOURCE in the status that you got back from the 
> MPI_RECV? In fortran, it would look something like (typed off the top of my 
> head; forgive typos): 
> 
> ----- 
> integer, dimension(MPI_STATUS_SIZE) :: status 
> ... 
> call MPI_Recv(buffer, ..., status, ierr) 
> ----- 
> 
> The rank of the sender will be in status(MPI_SOURCE). 
> 
> 
>> On Jan 9, 2014, at 6:29 AM, Christoph Niethammer < nietham...@hlrs.de > 
>> wrote: 
>> 
>> Hello, 
>> 
>> I suggest you have a look at the MPI one-sided functionality (Chapter 11 
>> of the MPI 3.0 spec). 
>> Create a window to allow the other processes to access the array A directly 
>> via MPI_Get/MPI_Put. 
>> Be aware of synchronization, which you have to implement via MPI_Win_fence or 
>> manual locking. 
>> 
>> Regards 
>> Christoph 
>> 
>> -- 
>> 
>> Christoph Niethammer 
>> High Performance Computing Center Stuttgart (HLRS) 
>> Nobelstrasse 19 
>> 70569 Stuttgart 
>> 
>> Tel: ++49(0)711-685-87203 
>> email: nietham...@hlrs.de 
>> http://www.hlrs.de/people/niethammer 
>> 
>> 
>> 
>> ----- Original Message ----- 
>> From: "Pradeep Jha" <prad...@ccs.engg.nagoya-u.ac.jp> 
>> To: "Open MPI Users" <us...@open-mpi.org> 
>> Sent: Thursday, 9 January 2014 12:10:51 
>> Subject: [OMPI users] Calling a variable from another processor 
>> 
>> 
>> 
>> 
>> 
>> I am writing a parallel program in Fortran77. I have the following problem: 
>> 1) I have N number of processors. 
>> 2) Each processor contains an array A of size S. 
>> 3) Using some function, on every processor (say rank X), I calculate the 
>> value of two integers Y and Z, where Z<S. (the values of Y and Z are 
>> different on every processor) 
>> 4) I want to get the value of A(Z) on processor Y to processor X. 
>> 
>> I thought of first sending the numerical value X to processor Y from 
>> processor X and then sending A(Z) from processor Y back to processor X. But 
>> that is not possible, as processor Y does not know the value X and so it 
>> won't know which processor to receive from. 
>> 
>> I tried but I haven't been able to come up with any code which can implement 
>> this action. So I am not posting any codes. 
>> 
>> Any suggestions? 
>> 
>> _______________________________________________ 
>> users mailing list 
>> us...@open-mpi.org 
>> http://www.open-mpi.org/mailman/listinfo.cgi/users 
> 
> 
> -- 
> Jeff Squyres 
> jsquy...@cisco.com 
> For corporate legal information go to: 
> http://www.cisco.com/web/about/doing_business/legal/cri/ 
> 

program var_access
!use mpi

implicit none
include 'mpif.h'

integer ierr
integer i
integer rank
integer disp_int
integer X, Y, Z
integer S
parameter (S = 10)
integer A(S)
integer AYS
integer win, NP
integer (kind=MPI_ADDRESS_KIND) lowerbound, size, realextent, disp_aint
integer n
integer, allocatable :: seed(:)
real rnd(3)

call MPI_Init(ierr)
call MPI_Comm_size(MPI_COMM_WORLD,NP,ierr)
call MPI_Comm_rank(MPI_COMM_WORLD,rank,ierr)

! Produce random numbers (identical seed on every rank, so all ranks agree on X, Y, Z)
call random_seed(size = n)
allocate(seed(n))
do i=1, n
    seed(i) = i
end do
call random_seed(put = seed)
call random_number(rnd)
X = floor(NP*rnd(1))
Y = floor(NP*rnd(2))
Z = floor(S*rnd(3)) + 1  ! 1 <= Z <= S (ceiling(S*rnd(3)) could yield 0 for rnd(3) = 0)

! Determine the size of one data element in the array in bytes
! (MPI_INTEGER is the Fortran predefined datatype; MPI_INT is the C one)
call MPI_Type_get_extent(MPI_INTEGER, lowerbound, realextent, ierr)
disp_int = realextent
! Determine the size of the entire data array in bytes
size = S * realextent
! Create the actual memory window over A
call MPI_Win_create(A, size, disp_int, MPI_INFO_NULL, MPI_COMM_WORLD, win, ierr)

! Fill array A with some data
do i = 1, S
    A(i) = S * rank + i
    write (*,*) rank, i, A(i)
end do


! Synchronize window
call MPI_Win_fence(0, win, ierr)
if (rank .eq. X) then
    disp_aint = Z - 1  ! zero-based displacement of A(Z) in the window
    call MPI_Get(AYS, 1, MPI_INTEGER, Y, disp_aint, 1, MPI_INTEGER, win, ierr)
endif

! Synchronize window, completing all accesses to it
call MPI_Win_fence(0, win, ierr)
if(rank .eq. X) then
    write (*,*) Y,Z,"# ", AYS
endif

call MPI_Win_free(win, ierr)
call MPI_Finalize(ierr)

end program var_access
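
A minimal sketch of how one might build and run the example with Open MPI (assuming the source is saved as var_access.f90; wrapper and launcher names may vary by installation):

```shell
# Compile with the Fortran MPI compiler wrapper, then launch e.g. 4 processes
mpif90 var_access.f90 -o var_access
mpirun -np 4 ./var_access
```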
