Sorry, the figures do not display. They are attached to this message.
On Wed, May 18, 2016 at 3:24 PM, Xiaolong Cui wrote:
Hi Nathan,
I got one more question. I am measuring the number of messages that can be
eagerly sent with a given SRQ. Again, as illustrated below, my program has
two ranks: rank 0 sends a variable number (*n*) of messages to rank 1,
which is not ready to receive.
[image: Inline image 1]
I measured t
Thanks a lot!
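The two-rank setup described above can be sketched roughly as follows. This is an illustrative reconstruction, not the original program: the message count, payload size, and delay are assumptions.

```c
/* Sketch of the experiment: rank 0 issues n blocking sends of small
 * (eager-sized) messages while rank 1 deliberately delays before
 * posting any receives.  n, the buffer size, and the sleep duration
 * are illustrative assumptions, not the original program's values. */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank;
    const int n = 100;        /* number of eager messages to attempt */
    char buf[64] = {0};       /* small payload, below the eager limit */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        for (int i = 0; i < n; i++) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            printf("sent message %d\n", i + 1);  /* shows how far the sender got */
        }
    } else if (rank == 1) {
        sleep(30);            /* receiver is deliberately not ready */
        for (int i = 0; i < n; i++)
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```

If the sender blocks, the last "sent message" line printed indicates how many messages were accepted eagerly before blocking.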
On Tue, May 17, 2016 at 11:49 AM, Nathan Hjelm wrote:
I don't know of any documentation on the connection manager other than
what is in the code and in my head. I rewrote a lot of the code in 2.x
so you might want to try out the latest 2.x tarball from
https://www.open-mpi.org/software/ompi/v2.x/
I know the per-peer queue pair will prevent totally a
I think it is the connection manager that blocks the first message. If I
add a pair of send/recv at the very beginning, the problem is gone. But
removing the per-peer queue pair does not help.
Do you know of any document that discusses Open MPI internals, especially
as related to this problem?
If it is blocking on the first message then it might be blocked by the
connection manager. Removing the per-peer queue pair might help in that
case.
-Nathan
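For reference, the per-peer queue pair can be removed through the btl_openib_receive_queues MCA parameter by listing only shared-receive-queue ("S") specifications and no per-peer ("P") entry. The queue sizes below are illustrative, not tuned recommendations; check ompi_info for the default value in your build.

```shell
# Run with only SRQ-based receive queues (no per-peer "P" entry).
# Queue-size numbers here are illustrative assumptions.
mpirun --mca btl openib,self \
       --mca btl_openib_receive_queues S,12288,1024,1008,64:S,65536,1024,1008,64 \
       -n 2 ./a.out
```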
On Mon, May 16, 2016 at 10:11:29PM -0400, Xiaolong Cui wrote:
Hi Nathan,
Thanks for your answer.
The "credits" make sense for the purpose of flow control. However, the
sender in my case will be blocked even for the first message. This doesn't
seem to be the symptom of running out of credits. Is there any reason for
this? Also, is there an MCA parameter for t
When using eager_rdma the sender will block once it runs out of
"credits". If the receiver enters MPI for any reason the incoming
messages will be placed in the ob1 unexpected queue and the credits will
be returned to the sender. If you turn off eager_rdma you will probably
get different results.
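Eager RDMA can be switched off with an MCA parameter on the openib BTL; it may be worth verifying the parameter name against your build with ompi_info before relying on it.

```shell
# Confirm the parameter exists in this build:
#   ompi_info --param btl openib --level 9 | grep eager_rdma
# Then rerun the test with eager RDMA disabled:
mpirun --mca btl_openib_use_eager_rdma 0 -n 2 ./a.out
```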
Hi,
I am using Open MPI 1.8.6. I guess my question is related to the flow
control algorithm for small messages. The question is how to avoid the
sender being blocked by the receiver when using *openib* module for small
messages and using *blocking send*. I have looked through this FAQ (
https://www