P.S. I just found out that you have to recompile/relink the MPI code with -g
in order for the File/Address field to show something other than garbage.
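
For reference, here's roughly what that looks like for an app linked against
mpiP (the source file and binary names below are just placeholders, not the
actual NetPIPE build line):

# Recompile and relink with debugging symbols (-g) so mpiP can resolve
# callsite addresses to file/line/function information.
mpicc -g -c my_mpi_app.c
mpicc -g -o my_mpi_app my_mpi_app.o -lmpiP -lunwind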


On 3/30/07 2:43 PM, "Heywood, Todd" <heyw...@cshl.edu> wrote:

> George,
> 
> It turns out I didn't have libunwind either, but I didn't notice since mpiP
> compiled and linked without it (OK, so I should have checked the config log).
> However, once I downloaded libunwind, it wouldn't compile on my RHEL system.
> 
> So, following this thread:
> 
> http://www.mail-archive.com/libunwind-devel@nongnu.org/msg00067.html
> 
> I had to download an alpha version of libunwind:
> 
> http://download.savannah.nongnu.org/releases/libunwind/libunwind-snap-070224.tar.gz
> 
> ... and build it with:
> 
> CFLAGS=-fPIC ./configure
> make CFLAGS=-fPIC LDFLAGS=-fPIC shared
> make CFLAGS=-fPIC LDFLAGS=-fPIC install
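> 
> (If it helps anyone on RHEL: a quick sanity check that the -fPIC shared
> library actually landed where mpiP's configure will look. The install
> prefix here is an assumption; adjust it to wherever you pointed configure.)
> 
> # List the installed shared library (default /usr/local prefix assumed):
> ls /usr/local/lib/libunwind.so*
> # Refresh the dynamic linker cache if the library went into a new directory.
> /sbin/ldconfig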
> 
> After that, everything went as you described. The "strange readings" in the
> output did list the Parent_Funct values, though:
> 
> ---------------------------------------------------------------------------
> @--- Callsites: 5 ---------------------------------------------------------
> ---------------------------------------------------------------------------
>  ID Lev File/Address        Line Parent_Funct             MPI_Call
>   1   0 0x000000000041341d       RecvData                 Recv
>   2   0 0x00000000004133c7       SendData                 Send
>   3   0 0x00000000004134b9       SendRepeat               Send
>   4   0 0x0000000000413315       Sync                     Barrier
>   5   0 0x00000000004134ef       RecvRepeat               Recv
> 
> 
> Thanks for the help!
> 
> Todd
> 
> 
> On 3/29/07 5:48 PM, "George Bosilca" <bosi...@cs.utk.edu> wrote:
> 
>> I used it on an IA64 platform, so I assumed x86_64 was supported, but
>> I have never used it on an AMD64. On the mpiP web page they claim to
>> support the Cray XT3, which as far as I know is based on 64-bit AMD
>> Opterons. So, there is at least a spark of hope in the dark ...
>> 
>> I decided to give it a try on my x86_64 AMD box (a Debian-based
>> system). First problem: my box didn't have libunwind. Not a big deal,
>> it's freely available on the HP website
>> (http://www.hpl.hp.com/research/linux/libunwind/download.php4). A few
>> minutes later, libunwind was installed in /lib64. Now, time to focus
>> on mpiP ... For some obscure reason the configure script was unable to
>> detect my g77 compiler (whatever!!!) or the libunwind installation.
>> Moreover, it kept trying to use the clock_gettime call. Fortunately
>> (which makes me think I'm not the only one having trouble with this),
>> mpiP provides configure options for all of these. The final configure
>> line was: ./configure --prefix=/opt/ --without-f77 --with-wtime
>> --with-include=-I/include --with-lib=-L/lib64. Then a quick "make
>> shared" followed by "make install" completed the work. So, at least
>> mpiP can compile on an x86_64 box.
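>> 
>> Recapping the above as a copy-and-paste sequence (the include/lib paths
>> are the ones from my box, so treat them as assumptions for yours):
>> 
>> # Configure mpiP against a libunwind with headers in /include and 64-bit
>> # libraries in /lib64, skipping Fortran and using MPI_Wtime as the timer.
>> ./configure --prefix=/opt/ --without-f77 --with-wtime \
>>             --with-include=-I/include --with-lib=-L/lib64
>> make shared
>> make install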
>> 
>> Next, I modified the NetPIPE makefile to add "-lmpiP -lunwind",
>> compiled NetPIPE, and ran it. The mpiP header showed up, the
>> application ran to completion, and my human-readable output was there.
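>> 
>> The makefile change itself was trivial; something along these lines (the
>> variable name is an assumption, NetPIPE's makefile may call it something
>> else; just append the two libraries to whatever feeds the link step):
>> 
>> # Link the mpiP profiling library and libunwind into the NetPIPE binary.
>> LIBS += -lmpiP -lunwind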
>> 
>> @ mpiP
>> @ Command : ./NPmpi
>> @ Version                  : 3.1.0
>> @ MPIP Build date          : Mar 29 2007, 13:35:47
>> @ Start time               : 2007 03 29 13:43:40
>> @ Stop time                : 2007 03 29 13:44:42
>> @ Timer Used               : PMPI_Wtime
>> @ MPIP env var             : [null]
>> @ Collector Rank           : 0
>> @ Collector PID            : 22838
>> @ Final Output Dir         : .
>> @ Report generation        : Single collector task
>> @ MPI Task Assignment      : 0 dancer
>> @ MPI Task Assignment      : 1 dancer
>> 
>> However, I got some strange readings in the output.
>> ---------------------------------------------------------------------------
>> @--- Callsites: 5 ---------------------------------------------------------
>> ---------------------------------------------------------------------------
>>  ID Lev File/Address        Line Parent_Funct             MPI_Call
>>   1   0 0x0000000000402ffb       [unknown]                Barrier
>>   2   0 0x0000000000403103       [unknown]                Recv
>>   3   0 0x00000000004030ad       [unknown]                Send
>>   4   0 0x000000000040319f       [unknown]                Send
>>   5   0 0x00000000004031d5       [unknown]                Recv
>> 
>> I didn't dig further to see why. But this proves that, at least for
>> basic usage (general statistics gathering), mpiP works on x86_64
>> platforms.
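>> 
>> One more note from the header above: the "MPIP env var" field was
>> [null], meaning I ran with the default settings. mpiP reads its runtime
>> options from an MPIP environment variable; the specific flags below are
>> from memory, so double-check them against the mpiP documentation before
>> relying on them:
>> 
>> # -k <n> sets the callsite stack-walk depth, -t <n> sets a reporting
>> # threshold (both flag names are assumptions; verify in the mpiP docs).
>> export MPIP="-k 2 -t 10"
>> mpirun -np 2 ./NPmpi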
>> 
>>    Have fun,
>>      george.
>> 
>> On Mar 29, 2007, at 11:32 AM, Heywood, Todd wrote:
>> 
>>> George,
>>> 
>>> Any other simple, small, text-based (!) suggestions? mpiP segfaults
>>> on x86_64, and indeed its web page doesn't list x86_64 Linux as a
>>> supported platform.
>>> 
>>> Todd
>>> 
>>> 
>>> On 3/28/07 10:39 AM, "George Bosilca" <bosi...@cs.utk.edu> wrote:
>>> 
>>>> Stephen,
>>>> 
>>>> There are a huge number of MPI profiling tools out there. My
>>>> preference is for something small, fast, and with human-readable
>>>> text output (not fancy graphics). The tool I'm talking about is
>>>> called mpiP (http://mpip.sourceforge.net/). It's not Open MPI
>>>> specific, but it's really simple to use.
>>>> 
>>>>    george.
>>>> 
>>>> On Mar 28, 2007, at 10:10 AM, stephen mulcahy wrote:
>>>> 
>>>>> Hi,
>>>>> 
>>>>> What is the best way of getting statistics on the size of the MPI
>>>>> messages being sent/received by my Open MPI-based application? I'm
>>>>> guessing MPE is one route, but is there anything built into Open MPI
>>>>> that will give me this specific statistic?
>>>>> 
>>>>> Thanks,
>>>>> 
>>>>> -stephen
>>>>> 
>>>>> -- 
>>>>> Stephen Mulcahy, Applepie Solutions Ltd, Innovation in Business
>>>>> Center,
>>>>>     GMIT, Dublin Rd, Galway, Ireland.      http://www.aplpi.com
>>>> 
>>> 
>> 
>> "Half of what I say is meaningless; but I say it so that the other
>> half may reach you"
>>                                    Kahlil Gibran