First of all, this whole argument should not even exist for the
following reason:
Page registrations are supposed to be *rare* - once a page is registered, it
is registered for life. There is nothing in the design that says a page must
be "unregistered" and I do not believe anybody is proposing that.
Second, this means that my previous analysis showing that performance
was reduced was also incorrect, because most of the RDMA transfers were
against pages that were still being registered for the first time during
the bulk-phase round, which unfairly makes dynamic page registration
look bad.
I should have done more testing *after* the bulk phase round,
and I apologize for not doing that.
Indeed, when I do such a test (with the 'stress' command), the cost of
page registration disappears, because most of the registrations were
completed long ago.
Thanks, Paolo, for reminding us about the bulk-phase behavior to begin with.
Third, this means that optimizing this protocol would not be helpful,
and that we should follow the "keep it simple" approach, because during
the steady-state phase of the migration most of the pages should already
have been registered.
- Michael
On 04/11/2013 10:37 AM, Michael S. Tsirkin wrote:
Answer above.
Here's how things are supposed to work in a pipeline:
req -> registration request
res -> response
done -> rdma done notification (remote can unregister)
pgX -> page, or chunk, or whatever unit is used
for registration
rdma -> one or more rdma write requests
pg1 -> pin -> req -> res -> rdma -> done
pg2 -> pin -> req -> res -> rdma -> done
pg3 -> pin -> req -> res -> rdma -> done
pg4 -> pin -> req -> res -> rdma -> done
pg5 -> pin -> req -> res -> rdma -> done
It's like an assembly line, see? So while software does the
registration round-trip dance, the hardware is processing rdma requests
for previous chunks.
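A rough sketch of what that assembly line could look like in code
(hypothetical helper names standing in for the real protocol plumbing,
and an assumed pipeline depth; this is not the actual migration code):

#include <stdbool.h>
#include <stddef.h>

/* Assumed helpers, matching the legend above. */
void send_register_request(size_t chunk);          /* "req"  */
bool poll_register_response(size_t *chunk_out);    /* "res", non-blocking */
void post_rdma_write(size_t chunk);                /* "rdma" */

#define MAX_IN_FLIGHT 4   /* assumed pipeline depth */

void pipelined_transfer(size_t num_chunks)
{
    size_t next_to_request = 0;
    size_t completed = 0;
    size_t in_flight = 0;

    while (completed < num_chunks) {
        size_t ready_chunk;

        /* Keep a few registration round trips in the air ... */
        while (in_flight < MAX_IN_FLIGHT && next_to_request < num_chunks) {
            send_register_request(next_to_request++);
            in_flight++;
        }

        /* ... and while they travel, write out any chunk whose
         * response has already come back: the assembly line. */
        if (poll_register_response(&ready_chunk)) {
            post_rdma_write(ready_chunk);
            in_flight--;
            completed++;
        }
    }
}

The point of the window is exactly the overlap in the diagram: the
round trip for pgN+1 hides behind the rdma writes for pgN.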
....
When do you have to stall? When you run out of rx buffer credits, so
you cannot start a new req. Your protocol has 2 outstanding buffers, so
you can only have one req in the air. Allow more and you will not need
to stall as often, possibly not at all.
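To spell out the credit argument, here is a tiny sketch (hypothetical
names, assumed buffer count): the sender may only issue a new req while
it holds a credit, i.e. while the remote still has a posted rx buffer
for it, so posting more buffers is what removes the stall.

#include <stdbool.h>

#define RX_BUFFERS_POSTED 16   /* assumed: more than the current 2 */

static int credits = RX_BUFFERS_POSTED;   /* rx buffers the peer posted */

/* Assumed helpers standing in for the real send/reply plumbing. */
void send_register_request_msg(void);
bool peer_reposted_a_buffer(void);

bool try_send_request(void)
{
    if (credits == 0) {
        return false;           /* this is the stall */
    }
    credits--;                  /* one remote rx buffer will be consumed */
    send_register_request_msg();
    return true;
}

void on_peer_reply(void)
{
    if (peer_reposted_a_buffer()) {
        credits++;              /* the peer made another rx buffer available */
    }
}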
One other minor point is that your protocol requires extra explicit
ready commands. You can pass the number of rx buffers as extra payload
in the traffic you are sending anyway, and reduce that overhead.
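One way the piggybacking could look on the wire (a hypothetical header
layout, not the actual protocol): every control message carries the
count of rx buffers reposted since the last message, so credits flow as
a side effect of traffic that is being sent anyway, with no separate
"ready" command.

#include <stdint.h>

struct control_header {
    uint16_t type;          /* req / res / done / ... */
    uint16_t len;           /* payload length */
    uint32_t rx_credits;    /* rx buffers reposted since the last message */
} __attribute__((packed));

/* Receiver side: harvest the piggybacked credits from any message. */
static inline void account_credits(const struct control_header *hdr,
                                   int *credits)
{
    *credits += hdr->rx_credits;   /* no explicit "ready" round trip */
}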