So in this old thread
<https://groups.google.com/d/topic/capnproto/4SLfibQPWFE/discussion>, it's
stated that the "call is received" event requires calling into application
code.  From an implementation standpoint, this declares that receiving a
call in the RPC system is a critical section that crosses into application
code, and that application code may try to acquire the same mutex (by
making a call on the connection).  Queueing can postpone the problem, but
I'm already a little nervous about how much queueing the protocol
requires.

I'd like to suggest considering the E model: each capability is a single
serial queue.  Instead of "call A is received happens before call B is
received", the guarantee becomes "call A returns happens before call B
starts".  This simplifies implementation because it prescribes exactly
what happens in the critical section (enqueue the call or throw an
overload exception), so no application code ever runs inside it.  This
might not be a problem for the C++ implementation right now, but once
fibers are involved, I think it would become one.
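To make the point concrete, here is a minimal sketch of what the critical
section could look like under that model.  This is not Cap'n Proto's actual
API; SerialQueue and the depth limit are invented for illustration:

```cpp
// Sketch only: a per-capability serial queue.  The lock is held just long
// enough to enqueue or reject a call; application code runs only after
// the lock is released, so the critical section never re-enters the
// application (which might call back on the same connection).
#include <deque>
#include <functional>
#include <mutex>
#include <stdexcept>

class SerialQueue {
public:
  explicit SerialQueue(size_t maxDepth) : maxDepth_(maxDepth) {}

  // Runs inside the RPC critical section: O(1), no application code.
  void enqueue(std::function<void()> call) {
    std::lock_guard<std::mutex> lock(mutex_);
    if (pending_.size() >= maxDepth_) {
      throw std::runtime_error("overloaded");  // bounded queueing
    }
    pending_.push_back(std::move(call));
  }

  // Runs outside the critical section.  Each call completes before the
  // next starts: "call A returns happens before call B starts".
  void drain() {
    for (;;) {
      std::function<void()> next;
      {
        std::lock_guard<std::mutex> lock(mutex_);
        if (pending_.empty()) return;
        next = std::move(pending_.front());
        pending_.pop_front();
      }
      next();  // application code runs with no lock held
    }
  }

private:
  std::mutex mutex_;
  std::deque<std::function<void()>> pending_;
  const size_t maxDepth_;
};
```

The key property is that the mutex only ever guards an O(1) enqueue or an
exception; callbacks in drain() hold no lock, so they can freely make new
calls on the connection.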

I believe that the same properties can be obtained by pushing this into
interface definitions: if an interface really wants to declare that
operations can happen in parallel, then there can be a root capability that
creates a capability for each operation.  Then the RPC subsystem can know
much more about how much work is being scheduled.
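As a sketch of that pattern (interface and method names here are
hypothetical, not from any real schema), the root capability would act as
a factory:

```capnp
# Hypothetical schema sketch: the root hands out one capability per
# stream of operations, so each capability's queue stays strictly
# serial while the service as a whole still admits parallelism.
interface Frobber {
  # Calls on one Frobber are processed one at a time, in order.
  frob @0 (input :Data) -> (output :Data);
}

interface FrobberFactory {
  # Each call creates an independent Frobber; calls on distinct
  # Frobbers may proceed in parallel.
  newFrobber @0 () -> (frobber :Frobber);
}
```

Because each new capability is created through an explicit call, the RPC
subsystem sees exactly how many parallel streams exist.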

I realize this would be a big change, but I can't see a good way to avoid
the problem in any implementation of the RPC protocol that tries to use a
connection concurrently.  Effectively, the current ordering guarantee
forces all implementations to be single-threaded, AFAICT.  Let me know
what you think.

-Ross

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.