[OMPI users] MPI_Recv operation time

2012-11-05 Thread huydanlin
Hi,
   My objective is to measure the time taken by MPI_Send and MPI_Recv. For
MPI_Send, I can place timers immediately before and after the call, like this:

t1 = MPI_Wtime();
MPI_Send(...);
t2 = MPI_Wtime();
tsend = t2 - t1;

Once the message has gone to the system buffer, control returns to the sending
process, so this measures MPI_Send.
   But my problem is with MPI_Recv. If I do the same as for MPI_Send (put
timers before and after MPI_Recv), I think the result is wrong, because we
don't know exactly when the message reaches the system buffer on the
receiving side.
   So how can we measure the MPI_Recv operation time, i.e. the time during
which the message is copied from the system buffer to the receive buffer?
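
For concreteness, a minimal self-contained sketch of this timing approach
(two ranks; the buffer size, tag, and payload below are arbitrary placeholders):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    double buf[1024];          /* arbitrary payload */
    double t1, t2;
    int rank, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (i = 0; i < 1024; i++)
        buf[i] = i;

    if (rank == 0) {
        t1 = MPI_Wtime();
        MPI_Send(buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        t2 = MPI_Wtime();
        /* time until the send buffer may be reused */
        printf("MPI_Send: %f s\n", t2 - t1);
    } else if (rank == 1) {
        t1 = MPI_Wtime();
        MPI_Recv(buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        t2 = MPI_Wtime();
        /* includes any time spent waiting for the message to arrive */
        printf("MPI_Recv: %f s\n", t2 - t1);
    }

    MPI_Finalize();
    return 0;
}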

Thanks


Re: [OMPI users] MPI_Recv operation time

2012-11-05 Thread Ralph Castain
You might take a look at the other profiling tools:

http://www.open-mpi.org/faq/?category=perftools#OMPI-support






Re: [OMPI users] MPI_Recv operation time

2012-11-05 Thread Eugene Loh

On 11/5/2012 1:07 AM, huydanlin wrote:

> Hi,
>    My objective is to measure the time taken by MPI_Send and MPI_Recv. For
> MPI_Send, I can place timers immediately before and after the call, like this:
>
> t1 = MPI_Wtime();
> MPI_Send(...);
> t2 = MPI_Wtime();
> tsend = t2 - t1;
>
> Once the message has gone to the system buffer, control returns to the
> sending process.

It means that the message is out of the user's send buffer.  The time 
could include a rendezvous with the receiving process.  Depending on 
what mechanism is used, a send (e.g., of a long message) might not be 
able to complete until most of the message is already in the receiver's 
buffer.

> So I can measure MPI_Send.
>    My problem is with MPI_Recv. If I do the same as for MPI_Send (put timers
> before and after MPI_Recv), I think the result is wrong, because we don't
> know exactly when the message reaches the system buffer on the receiving
> side.
>    So how can we measure the MPI_Recv operation time, i.e. the time during
> which the message is copied from the system buffer to the receive buffer?

You cannot if you're instrumenting the user's MPI program.  If you want 
to time the various phases of how the message was passed, you would have 
to introduce timers into the underlying MPI implementation.
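
If the goal is only the total time spent inside MPI_Recv (waiting plus copying,
rather than the internal copy phase alone), the standard PMPI profiling
interface lets you intercept the call without modifying the application source.
A minimal sketch of such a wrapper (linked into the application, or built as a
shared library and preloaded):

#include <stdio.h>
#include <mpi.h>

/* Intercepts MPI_Recv and forwards to the real implementation via PMPI_Recv. */
int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source,
             int tag, MPI_Comm comm, MPI_Status *status)
{
    double t1 = MPI_Wtime();
    int ret = PMPI_Recv(buf, count, datatype, source, tag, comm, status);
    double t2 = MPI_Wtime();
    /* Whole-call time: wait for arrival plus copy, not the copy alone. */
    fprintf(stderr, "MPI_Recv took %f s\n", t2 - t1);
    return ret;
}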


Re: [OMPI users] OpenMPI 1.7rc5 fails to build with CUDA support when CUDA is in a non-standard location

2012-11-05 Thread Matthias Jurenz
Hello Adam,

thanks for the hint! The upcoming v1.7 release (or candidate) will include a 
fix for this problem.

With regards,
Matthias


[OMPI users] gathering problem

2012-11-05 Thread Hodge, Gary C
I continue to have a problem where 2 processes are sending to the same process 
and one of the sending processes hangs for 150 to 550 ms in the call to 
MPI_Send.

Each process runs on a different node and the receiving process has posted an 
MPI_Irecv 17 ms before the hanging send.
The posted receives use 172K buffers and the sending processes are sending 81K
messages.
I have set mpi_leave_pinned to 1 and have increased the 
btl_openib_receive_queues to ...:S,65536,512,256,64
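
For reference, settings like these are typically passed as MCA parameters on
the mpirun command line; "<existing queues>", <N>, and ./app below are
placeholders:

  mpirun --mca mpi_leave_pinned 1 \
         --mca btl_openib_receive_queues "<existing queues>:S,65536,512,256,64" \
         -np <N> ./app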

How do I trace the various phases of message passing to diagnose where the send 
is hanging up?




Attachment: ompi-output.tar.bz2


Re: [OMPI users] [Open MPI Announce] Open MPI v1.6.3 released

2012-11-05 Thread Orion Poplawski

On 11/03/2012 02:38 PM, Jeff Squyres wrote:

> Crud.  You are correct.
>
> The VERSION string for the f90 library was incorrectly updated to 4:0:1; it
> should have been updated to 4:0:3.
>
> I have fixed this for v1.6.4.  I'm *anticipating* that there aren't many
> people who will be bitten by this, so for the time being, at least, I'm
> publishing this workaround:
>
> 1. Download Open MPI v1.6.3
> 2. Untar it, configure it
> 3. BEFORE you build it (but AFTER you have run configure!), edit
>    ompi/mpi/f90/Makefile
> 4. Change line 1212 from
>
>    libmpi_f90_so_version = 4:0:1
>
>    to
>
>    libmpi_f90_so_version = 4:0:3
>
> 5. Then make the "all" and "install" targets as usual.
>
> If this workaround suffices for those affected, I'd prefer to release v1.6.4
> with this fix after Supercomputing (i.e., early/mid December).  Please let me
> know.


This plan works for me, although the attached change applied before configure 
seems more straightforward (it's what I'm doing in the Fedora package).



--
Orion Poplawski
Technical Manager 303-415-9701 x222
NWRA, Boulder Office  FAX: 303-415-9702
3380 Mitchell Lane   or...@nwra.com
Boulder, CO 80301   http://www.nwra.com
diff -up openmpi-1.6.3/VERSION.f90sover openmpi-1.6.3/VERSION
--- openmpi-1.6.3/VERSION.f90sover	2012-10-24 09:37:48.0 -0600
+++ openmpi-1.6.3/VERSION	2012-11-05 10:36:14.904136788 -0700
@@ -82,7 +82,7 @@ date="Oct 24, 2012"
 libmpi_so_version=1:6:0
 libmpi_cxx_so_version=1:1:0
 libmpi_f77_so_version=1:6:0
-libmpi_f90_so_version=4:0:1
+libmpi_f90_so_version=4:0:3
 libopen_rte_so_version=4:3:0
 libopen_pal_so_version=4:3:0
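
For anyone scripting the build, the same change can be applied to the unpacked
source tree before running configure with, for example:

  sed -i 's/^libmpi_f90_so_version=4:0:1$/libmpi_f90_so_version=4:0:3/' openmpi-1.6.3/VERSION

(This is just a scripted equivalent of the diff above.)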