Correct. There's no happens-before relationship between goroutines without
an explicit synchronization point.

I have been thinking about this more, and I think I have a solution.
Instead of making the call to the capability directly in the critical
section, the connection could have a goroutine that receives calls on a
buffered channel. Importantly, the send would abort if the queue is full,
so that it never blocks. The effect is that any calls made inside the
critical section would be deferred until after it, while retaining their
original order. This still introduces a queue, but only one per
connection, which is less troublesome to me.

I'll investigate a bit more.

On Sat, Jul 22, 2017, 8:01 PM David Renshaw <[email protected]> wrote:

> On Sat, Jul 22, 2017 at 10:49 PM, David Renshaw <[email protected]>
> wrote:
>
>> Would it work for the go-capnproto2 RPC system to spawn a new goroutine
>> for each method invocation? Then the user-supplied method implementation
>> code would always get executed on a separate goroutine from the object's
>> main loop, avoiding the deadlock. (There would be no "start the work"
>> phase.) Admittedly, one downside to this approach is that method
>> implementations would no longer get direct access to user-defined object
>> state (also known as "this" or "self"). However, method implementations
>> could probably still get a mutex-protected reference to the object state,
>> and maybe that's good enough.
>>
>>
> I suppose that another problem with this is that Go's scheduler might
> scramble the order of method calls between the time when they are
> dispatched to separate goroutines and the time when they are actually
> executed.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
Visit this group at https://groups.google.com/group/capnproto.