On Fri, Jul 21, 2017 at 1:52 PM, Ross Light <[email protected]> wrote:
> (Sorry for the long response. I did not have time to make it shorter.)
>
> I see your point about how "a call returns happens before the next call
> starts" reaches an undesirable state for applications. I had an inkling
> that this could be the case, but hadn't fully experimented to see the
> results. However, just on terminology, I'm not sure I agree with your
> assessment that objects are single-threaded: because returns can come back
> in a different order than the calls arrived, this implies some level of
> concurrent (but perhaps not parallel) execution.

To clarify, what I meant was that Cap'n Proto is based on the actor model. In the basic actor model, an actor repeatedly receives messages and, in response to each message, may update its state and may send additional messages to other actors (perhaps a reply, perhaps not). These message handlers are sequential (only one runs at a time) and non-blocking. Conceptually, each actor has an internal event loop and only does one thing at a time. But this doesn't mean the actor model is single-threaded: multiple actors can execute simultaneously. Since each has access only to its own state, there's no need for complex synchronization.

Cap'n Proto extends the model by making request/response pairings explicit, but it doesn't require that a response be sent before a new request arrives.

Technically, it's common in Cap'n Proto implementations for one actor to implement multiple objects (where an object is an endpoint for messages, i.e. what a capability points to) -- or, put another way, multiple objects may share the same event loop, and thus their event handlers are serialized.

In C++ in particular, currently most (all?) Cap'n Proto applications are single-threaded, hence the entire process acts like one actor. But what I'd like to do (in C++) in the future is make it easy to have multiple actors (and thus multiple threads) in a process, each possibly handling multiple objects.
> As for your idea of mapping Cap'n Proto methods to messages on Go
> channels: it shuffles the problem around a bit but doesn't escape this
> deadlock issue. In fact, the first draft I had of the RPC system used a
> lot more channels, but I found it made the control flow hard to reason
> about (but it could still be implemented this way). Let me give you enough
> background on how Go concurrency works so that we're talking with each
> other.
>
> *Background*

Thanks, I understand the issue better now. Let me know if this is correct: Go's channels don't really map to the actor model in the way I imagined, because channels in Go are really used for one-way messages, whereas when you have request/response, it's considered more idiomatic to use a blocking function call.

If you were trying to match the actor model, you would send a call message over a channel, and include in that call message a new channel to which the response is to be sent. You'd then use a select block or a goroutine to wait for those responses at the same time as waiting for further calls. But that's not how people usually write things in Go, perhaps because it makes for difficult-to-follow code.

But indeed, it seems awkward to support a threaded model with concurrent calls while also supporting e-order, since you now need to convince people to explicitly acknowledge calls. If you make it explicitly illegal to call back into the RPC system before acknowledging the current call -- e.g. panicking if they do -- then programmers ought to notice the mistake quickly. Alternatively, what if making a new call implicitly acknowledged the current call? That would avoid the cognitive overhead and would probably produce the desired behavior.

On Sat, Jul 22, 2017 at 8:58 PM, Ross Light <[email protected]> wrote:

> I have been thinking about this more, and I think I have a solution.
> Instead of making the call to the capability directly in the critical
> section, the connection could have a goroutine that receives calls on a
> buffered channel. Importantly, the send would abort if the queue is full,
> so that it never blocks. The effect would be that any calls would be
> deferred until after the critical section, but they would have the same
> order. While it still introduces a queue, it's only one per connection,
> which is less troublesome to me.

Hmm, do calls on separate objects potentially block each other? Note that normally E-order only applies within a single object; calls on independent objects need not be ordered. (In practice, though, I do think there are some use cases for defining "object groups" that have mutual e-order, but this is not the original guarantee that E gave.)

What happens when the queue is full? Do you start rejecting calls? Or do you stop reading from the connection? The latter could, of course, also lead to deadlock if a string of dependent calls is bouncing back and forth. I wonder if a missing piece here is some way to apply backpressure on a single object without applying it to the whole connection.

-Kenton

--
You received this message because you are subscribed to the Google Groups "Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
Visit this group at https://groups.google.com/group/capnproto.
