Dear Eugene,
Thanks a lot for the answer; you were right about the eager mode.
I have one more question. I am looking for an official tool to measure
the ping time: sending a message of 1 byte or more and measuring the
duration of the MPI_Send call on rank 0 and the duration of the
MPI_Recv call on rank 1. I would like to know of a formal tool because I am
also using SkaMPI, and the results depend strongly on whether a
synchronization is performed before the measurement starts.
So, for example, when synchronizing the processes before sending 1 byte, I get:
rank 0, MPI_Send: ~7 ms
rank 1, MPI_Recv: ~52 ms
where 52 ms is almost half of the ping-pong time, which is fine.
Without synchronization I get:
rank 0, MPI_Send: ~7 ms
rank 1, MPI_Recv: ~7 ms
However, I developed a simple application where rank 0 sends 1000
messages of 1 byte to rank 1, and I get roughly the second set of
timings (~7 ms). If in the same application I add the matching MPI_Recv
and MPI_Send calls so that it becomes a ping-pong application, then the
ping-pong duration is 100 ms (like SkaMPI). Can someone explain to me why
this happens? The ping-pong takes 100 ms, yet the ping without
synchronization takes only 7 ms.
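To make it clear what I mean by the two measurements, here is a rough sketch of the ping case (illustrative only, assuming MPI_Wtime for timing and an MPI_Barrier as the synchronization step; this is not the actual benchmark or SkaMPI code):

```c
#include <mpi.h>
#include <stdio.h>

/* Sketch: rank 0 times its MPI_Send, rank 1 times its MPI_Recv.
 * Enabling the commented-out reply turns it into a ping-pong. */
int main(int argc, char **argv)
{
    int rank;
    char byte = 0;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);   /* the synchronization step */

    if (rank == 0) {
        t0 = MPI_Wtime();
        MPI_Send(&byte, 1, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
        /* ping-pong variant: also MPI_Recv the reply here */
        t1 = MPI_Wtime();
        printf("rank 0, MPI_Send: %.3f ms\n", (t1 - t0) * 1e3);
    } else if (rank == 1) {
        t0 = MPI_Wtime();
        MPI_Recv(&byte, 1, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        /* ping-pong variant: also MPI_Send the reply here */
        t1 = MPI_Wtime();
        printf("rank 1, MPI_Recv: %.3f ms\n", (t1 - t0) * 1e3);
    }

    MPI_Finalize();
    return 0;
}
```

Run with two ranks, e.g. `mpirun -np 2 ./ping`.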
Thanks a lot,
Best regards,
George Markomanolis
Date: Thu, 18 Nov 2010 10:31:40 -0800
From: Eugene Loh <eugene....@oracle.com>
Subject: Re: [OMPI users] Making MPI_Send to behave as blocking for
all the sizes of the messages
To: Open MPI Users <us...@open-mpi.org>
Try lowering the eager threshold more gradually -- e.g., 4K, 2K, 1K,
512, etc. -- and watch what happens. I think you will see what you
expect, except that once you go too small, the value is ignored
entirely. So the setting just won't work at the extreme value (0) you
want.
Maybe the thing to do is convert your MPI_Send calls to MPI_Ssend
calls. Or, compile in a wrapper that intercepts MPI_Send calls and
implements them by calling PMPI_Ssend.
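For example, a minimal sketch of such a wrapper (using the standard PMPI profiling interface; the signature below matches the pre-MPI-3, non-const prototype of that era):

```c
#include <mpi.h>

/* Intercept every MPI_Send and implement it with the synchronous
 * PMPI_Ssend, so the send does not complete until the matching
 * receive has started -- i.e., it behaves synchronously for all
 * message sizes, regardless of the eager limit. */
int MPI_Send(void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    return PMPI_Ssend(buf, count, datatype, dest, tag, comm);
}
```

Compile this file and link it into your application ahead of the MPI library (or build it as a shared object and preload it), and every MPI_Send in the unmodified application will go through PMPI_Ssend.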
George Markomanolis wrote:
Dear all,
I am trying to disable the eager mode in Open MPI 1.3.3 and I don't see
a real difference in the timings.
I would like to execute a ping (rank 0 sends a message to rank 1) and
to measure the duration of the MPI_Send on rank 0 and the duration of
MPI_Recv on rank 1. I have the following results.
Without changing the eager mode:
bytes   MPI_Send (msec)   MPI_Recv (msec)
    1       5.8               52.2
    2       5.6               51.0
    4       5.4               51.1
    8       5.6               51.6
   16       5.5               49.7
   32       5.4               52.1
   64       5.3               53.3
With the eager mode disabled:
ompi_info --param btl tcp | grep eager
MCA btl: parameter "btl_tcp_eager_limit" (current value: "0", data
source: environment)
bytes   MPI_Send (msec)   MPI_Recv (msec)
    1       5.4               52.3
    2       5.4               51.0
    4       5.4               52.1
    8       5.4               50.7
   16       5.0               50.2
   32       5.1               50.1
   64       5.4               52.8
However, I was expecting that with the eager mode disabled, the
MPI_Send duration would be longer. Am I wrong? Is there any option to
make MPI_Send behave as a blocking command for all message sizes?
Thanks a lot,
Best regards,
George Markomanolis
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users