Hi Priyesh,

The output of your program is pretty much what one would expect. 
140736841025492 is 0x7FFFD96A87D4, which corresponds to a location on the 
stack; that is to be expected, as a and b are scalar variables and most 
likely end up on the stack. As c is an array, its location is 
compiler-dependent: some compilers put small arrays on the stack, while 
others make them global or allocate them on the heap. In your case 0x6ABAD0 
could be either somewhere in the BSS (where uninitialised global variables 
reside) or on the heap, which starts right after the BSS (I would say it is 
the BSS). If the array is placed in the BSS, its location is fixed with 
respect to the image base.
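
By the way, if you want to inspect such values yourself, the addresses 
returned by MPI_GET_ADDRESS can be printed in hexadecimal with the Z edit 
descriptor. A tiny illustration, reusing the add(1) variable from your 
program:

      ! prints the address as 16 hex digits, e.g. 00007FFFD96A87D4
      write(*,'(A,Z16.16)') ' address of a = 0x', add(1)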

Linux by default implements partial Address Space Layout Randomisation 
(ASLR) by placing the program stack at a slightly different location on each 
run (this is meant to make remote stack-based exploits harder). That is why 
you see different addresses for variables on the stack. But things in the 
BSS will have pretty much the same addresses when the code is executed 
multiple times, or on different machines with the same architecture and a 
similar OS with similar settings, since executable images are still loaded 
at the same base virtual address.
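
If you want to see the effect, something along these lines should do. This 
is only a rough, untested sketch (the program and variable names are mine); 
whether a main-program local really lands on the stack and a SAVEd variable 
in static storage depends on the compiler, as discussed above:

      program aslr_demo
      include 'mpif.h'
      integer ierr
      integer local_var                   ! most likely on the stack
      integer, save :: static_var         ! SAVE forces static storage (BSS)
      integer(kind=MPI_ADDRESS_KIND) addr_local, addr_static

      call MPI_INIT(ierr)
      call MPI_GET_ADDRESS(local_var, addr_local, ierr)
      call MPI_GET_ADDRESS(static_var, addr_static, ierr)
      ! Run it several times: with ASLR enabled the first address usually
      ! changes from run to run, while the second one usually stays fixed.
      write(*,'(A,Z16.16)') ' stack  variable at 0x', addr_local
      write(*,'(A,Z16.16)') ' static variable at 0x', addr_static
      call MPI_FINALIZE(ierr)
      end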

Having different addresses is not an issue for MPI, as it only operates with 
pointers that are local to the process and with relative offsets. You pass 
MPI_Send or MPI_Recv the address of the data buffer in the current process, 
and that has nothing to do with where those buffers are located in the other 
processes. In your program each rank builds the datatype from its own local 
addresses, and the displacements are always applied relative to the buffer 
address passed on that rank, which is why the transfer works even though the 
absolute addresses differ. Note also that MPI supports heterogeneous 
computing, e.g. the sending process might be 32-bit and the receiving one 
64-bit. In that scenario it is quite probable that the addresses will differ 
by a very large margin (e.g. the stack address in your 64-bit output is not 
even a valid address on a 32-bit system).
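
For completeness: if you ever want to avoid subtracting a base address 
altogether, the MPI standard also lets you build the struct datatype from 
the absolute addresses and pass MPI_BOTTOM as the buffer. A rough, untested 
sketch along the lines of your program (names such as send_with_bottom and 
newtype are just mine); each rank still builds the datatype from its own 
local addresses:

      program send_with_bottom
      include 'mpif.h'
      integer a, b, c(10), i, id, ierr, newtype
      integer blocklen(3), types(3), status(MPI_STATUS_SIZE)
      integer(kind=MPI_ADDRESS_KIND) disp(3)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)

      ! absolute addresses of the three pieces in this process
      call MPI_GET_ADDRESS(a, disp(1), ierr)
      call MPI_GET_ADDRESS(b, disp(2), ierr)
      call MPI_GET_ADDRESS(c, disp(3), ierr)

      blocklen(1) = 1
      blocklen(2) = 1
      blocklen(3) = 10
      types(1) = MPI_INTEGER
      types(2) = MPI_INTEGER
      types(3) = MPI_INTEGER
      call MPI_TYPE_CREATE_STRUCT(3,blocklen,disp,types,newtype,ierr)
      call MPI_TYPE_COMMIT(newtype, ierr)

      if (id == 0) then
         a = 1000
         b = 2000
         do i = 1, 10
            c(i) = i
         end do
         ! with absolute displacements the buffer argument is MPI_BOTTOM
         call MPI_SEND(MPI_BOTTOM,1,newtype,1,8,MPI_COMM_WORLD,ierr)
      else if (id == 1) then
         call MPI_RECV(MPI_BOTTOM,1,newtype,0,8,MPI_COMM_WORLD,
     &                 status,ierr)
         write(*,*) 'a =', a, ' b =', b, ' c =', c
      end if

      call MPI_TYPE_FREE(newtype, ierr)
      call MPI_FINALIZE(ierr)
      end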

Hope that helps more :)

Kind regards,
Hristo

On 24.07.2012, at 02:02, Priyesh Srivastava wrote:

> Hello Hristo,
> 
> Thank you for your reply. I was able to understand some parts of your 
> response, but I still have some doubts due to my lack of knowledge about 
> the way memory is allocated.
> 
> I have created a small sample program; the resulting output will help me 
> pinpoint my question.
> The program is:
> 
>       program test
>       include 'mpif.h'
> 
>       integer a,b,c(10),ierr,id,datatype,size(3),type(3),i
>       integer status(MPI_STATUS_SIZE)
>       integer(kind=MPI_ADDRESS_KIND) add(3)
> 
>       call MPI_INIT(ierr)
>       call MPI_COMM_RANK(MPI_COMM_WORLD,id,ierr)
> 
>       call MPI_GET_ADDRESS(a,add(1),ierr)
>       write(*,*) 'address of a ,id ', add(1), id
>       call MPI_GET_ADDRESS(b,add(2),ierr)
>       write(*,*) 'address of b,id ', add(2), id
>       call MPI_GET_ADDRESS(c,add(3),ierr)
>       write(*,*) 'address of c,id ', add(3), id
> 
>       add(3)=add(3)-add(1)
>       add(2)=add(2)-add(1)
>       add(1)=add(1)-add(1)
> 
>       size(1)=1
>       size(2)=1
>       size(3)=10
>       type(1)=MPI_INTEGER
>       type(2)=MPI_INTEGER
>       type(3)=MPI_INTEGER
>       call MPI_TYPE_CREATE_STRUCT(3,size,add,type,datatype,ierr)
>       call MPI_TYPE_COMMIT(datatype,ierr)
> 
>       write(*,*) 'datatype ,id', datatype , id
>       write(*,*) ' relative add1 ',add(1), 'id',id
>       write(*,*) ' relative add2 ',add(2), 'id',id
>       write(*,*) ' relative add3 ',add(3), 'id',id
> 
>       if(id==0) then
>          a = 1000
>          b = 2000
>          do i=1,10
>             c(i)=i
>          end do
>          c(10)=700
>          c(1)=600
>       end if
> 
>       if(id==0) then
>          call MPI_SEND(a,1,datatype,1,8,MPI_COMM_WORLD,ierr)
>       end if
> 
>       if(id==1) then
>          call MPI_RECV(a,1,datatype,0,8,MPI_COMM_WORLD,status,ierr)
>          write(*,*) 'id =',id
>          write(*,*) 'a=' , a
>          write(*,*) 'b=' , b
>          do i=1,10
>             write(*,*) 'c(',i,')=',c(i)
>          end do
>       end if
> 
>       call MPI_FINALIZE(ierr)
>       end
>    
> 
>  
> The output is:
> 
>  address of a ,id    140736841025492           0
>  address of b,id     140736841025496           0
>  address of c,id             6994640           0
>  datatype ,id                     58           0
>   relative add1                    0  id       0
>   relative add2                    4  id       0
>   relative add3   -140736834030852  id         0
>  address of a ,id    140736078234324           1
>  address of b,id     140736078234328           1
>  address of c,id             6994640           1
>  datatype ,id                     58           1
>   relative add1                    0  id       1
>   relative add2                    4  id       1
>   relative add3   -140736071239684  id         1
>  id =           1
>  a=        1000
>  b=        2000
>  c(  1 )=         600
>  c(  2 )=           2
>  c(  3 )=           3
>  c(  4 )=           4
>  c(  5 )=           5
>  c(  6 )=           6
>  c(  7 )=           7
>  c(  8 )=           8
>  c(  9 )=           9
>  c( 10 )=         700
> 
> 
> 
> As I had mentioned, the smaller address (of array c) is the same on both 
> processes, while the larger ones (of 'a' and 'b') are different. This is 
> explained by what you mentioned.
> 
> So the relative address of the array 'c' with respect to 'a' is different 
> on the two processes. The way I am passing the data should not work 
> (specifically the passing of array 'c'), yet everything is correctly sent 
> from process 0 to 1. I have noticed that this way of sending non-contiguous 
> data is common, but I am confused about why it works.
> 
> thanks
> priyesh
> On Mon, Jul 23, 2012 at 12:00 PM, <users-requ...@open-mpi.org> wrote:
> Date: Mon, 23 Jul 2012 11:18:32 +0000
> From: "Iliev, Hristo" <il...@rz.rwth-aachen.de>
> Subject: Re: [OMPI users] issue with addresses
> To: Open MPI Users <us...@open-mpi.org>
> 
> Hello,
> 
> Placement of data in memory is highly implementation dependent. I assume
> you are running on Linux. This OS' libc (glibc) provides two different
> methods for dynamic allocation of memory: heap allocation and anonymous
> mappings. Heap allocation is used for small data up to MMAP_THRESHOLD bytes
> in length (128 KiB by default, controllable by calls to mallopt(3)). Such
> allocations end up at predictable memory addresses as long as all processes
> in your MPI job allocate memory following exactly the same pattern. For
> larger memory blocks malloc() uses private anonymous mappings, which might
> end up at different locations in the virtual address space depending on how
> it is being used.
> 
> What does this have to do with your Fortran code? Fortran runtimes use
> malloc() behind the scenes to allocate automatic heap arrays as well as
> ALLOCATABLE ones. Small arrays are usually allocated on the stack and will
> mostly have the same addresses unless some stack placement randomisation is
> in effect.
> 
> Hope that helps.
> 
> Kind regards,
> Hristo
> 
> > From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
> > On Behalf Of Priyesh Srivastava
> > Sent: Saturday, July 21, 2012 10:00 PM
> > To: us...@open-mpi.org
> > Subject: [OMPI users] issue with addresses
> >
> > Hello,
> >
> > I am working on an MPI program. I have been printing the addresses of
> > different variables and arrays using the MPI_GET_ADDRESS command. What I
> > have noticed is that all the processors are giving the same address for
> > a particular variable as long as the address is below 2 GB. When the
> > address of a variable/array is above 2 GB, different processors are
> > giving different addresses for the same variable. (I am working on a
> > 64-bit system and am using the new MPI functions and MPI_ADDRESS_KIND
> > integers for getting the addresses.)
> >
> > My question is: should all the processors give the same address for the
> > same variables? If so, then why is this not happening for variables with
> > larger addresses?
> >
> >
> > thanks
> > priyesh
> 

--
Hristo Iliev, Ph.D. -- High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23,  D 52074  Aachen (Germany)
Tel: +49 241 80 24367 -- Fax/UMS: +49 241 80 624367

