On 07/04/16 14:37, Simone Bordet wrote:
Hi,

> On Thu, Apr 7, 2016 at 3:27 PM, Michael McMahon
> <michael.x.mcma...@oracle.com> wrote:
>> At the lowest level the implementation uses two threads per TCP
>> connection, one for reading and one for writing.
>
> Ugh.
>
>> I do plan to change that to dispatch from the selector and execute
>> the reads and writes asynchronously. It's not a particularly big
>> change, but I would prefer to do it after the first integration of
>> this code.
>
> Well, in my experience this kind of change has huge implications.
> Perhaps you are right that in your case it is not big, but I would
> not take it lightly, not even for a moment.
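For context, the two-threads-per-connection model under discussion can be sketched over a loopback socket. This is illustrative code only, not the JDK implementation; the class and thread names are mine. One thread owns all writes on the connection, the other owns all (blocking) reads:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TwoThreadConnection {
    public static void main(String[] args) throws Exception {
        // Loopback socket pair standing in for one TCP connection.
        ServerSocket server = new ServerSocket(0, 1, InetAddress.getLoopbackAddress());
        Socket client = new Socket(server.getInetAddress(), server.getLocalPort());
        Socket peer = server.accept();

        // Writer thread: owns all writes on the connection.
        Thread writer = new Thread(() -> {
            try {
                OutputStream out = client.getOutputStream();
                out.write("ping".getBytes(StandardCharsets.US_ASCII));
                client.shutdownOutput();      // signal end of stream
            } catch (IOException e) { throw new UncheckedIOException(e); }
        }, "connection-writer");

        // Reader thread: owns all blocking reads on the connection.
        BlockingQueue<String> received = new LinkedBlockingQueue<>();
        Thread reader = new Thread(() -> {
            try {
                byte[] buf = peer.getInputStream().readAllBytes();
                received.add(new String(buf, StandardCharsets.US_ASCII));
            } catch (IOException e) { throw new UncheckedIOException(e); }
        }, "connection-reader");

        writer.start(); reader.start();
        System.out.println("read: " + received.take());
        writer.join(); reader.join();
        client.close(); peer.close(); server.close();
    }
}
```

The cost Simone is objecting to is visible here: every connection pins two threads, each parked in a blocking call most of the time.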


Well, the lowest layer on the read side doesn't do anything blocking
while processing incoming frames. When frames are passed up to the
stream layer, there is a possibility of blocking, and that is handled
there.

So, the current structure is something like:

    while (!closed) {
        read a frame
        process it
    }

and will change to:

    dispatch from selector ---> read a frame, then process it
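The change described above — replacing the dedicated read loop with selector-driven dispatch — might look roughly like this loopback sketch using java.nio (illustrative only, not the actual client code): no thread is parked in a blocking read; the selector thread waits for readiness and only then reads and processes the frame.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class SelectorDispatch {
    public static void main(String[] args) throws IOException {
        // Loopback connection standing in for one HTTP connection.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        SocketChannel sender = SocketChannel.open(server.getLocalAddress());
        SocketChannel receiver = server.accept();
        receiver.configureBlocking(false);

        Selector selector = Selector.open();
        receiver.register(selector, SelectionKey.OP_READ);

        byte[] frame = "frame".getBytes(StandardCharsets.US_ASCII);
        sender.write(ByteBuffer.wrap(frame));

        // Instead of "while (!closed) { read a frame; process it; }"
        // with a thread parked in a blocking read, the selector thread
        // waits for readiness and dispatches the non-blocking read.
        ByteBuffer buf = ByteBuffer.allocate(64);
        while (buf.position() < frame.length) {
            selector.select();              // readiness notification
            selector.selectedKeys().clear();
            receiver.read(buf);             // non-blocking read
        }
        buf.flip();
        System.out.println("dispatched: " + StandardCharsets.US_ASCII.decode(buf));

        selector.close(); sender.close(); receiver.close(); server.close();
    }
}
```

In a real client the "process it" step would run in the dispatch, or be handed off to an executor, rather than printing.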

>> The Stream layer above this behaves much the same as in HTTP/1.1,
>> where it uses a single thread invocation for sending each request
>> body and receiving each response body. So, it's not exactly
>> blocking. Nor is it entirely asynchronous/non-blocking.
>
> And it's really confusing to review :/
>
> Maybe you can clarify why it has been implemented in this hybrid way,
> to then rewrite it asynchronously?
> It will help reviewers (at least me) to understand the implementation
> choices (which now appear as "wtf" moments).

I originally wrote it attempting to be fully asynchronous, but it was
too complex and hard to understand. It seems a reasonable compromise
to assume that once you start reading the first byte of a
frame/response or response body, it's ok to dedicate a thread to
completing the operation, even if in theory it could block while
doing that.
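That compromise — stay asynchronous until the first byte arrives, then dedicate a thread to finish the possibly-blocking read — might be sketched like this (again a loopback toy of my own, not the real implementation): the selector detects the first readable event, the channel is deregistered and switched back to blocking mode, and an executor thread completes the read.

```java
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class HybridRead {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        SocketChannel sender = SocketChannel.open(server.getLocalAddress());
        SocketChannel receiver = server.accept();
        receiver.configureBlocking(false);

        Selector selector = Selector.open();
        SelectionKey key = receiver.register(selector, SelectionKey.OP_READ);

        sender.write(ByteBuffer.wrap(
                "response-body".getBytes(StandardCharsets.US_ASCII)));
        sender.shutdownOutput();          // marks the end of the body

        selector.select();                // the first byte has arrived
        key.cancel();
        selector.selectNow();             // flush the cancelled registration
        receiver.configureBlocking(true); // back to blocking mode

        // From here on, a dedicated thread completes the operation,
        // blocking if it must, as described above.
        ExecutorService exec = Executors.newSingleThreadExecutor();
        Future<String> body = exec.submit(() -> {
            ByteBuffer buf = ByteBuffer.allocate(1024);
            while (receiver.read(buf) != -1) { }   // blocking reads to EOF
            buf.flip();
            return StandardCharsets.US_ASCII.decode(buf).toString();
        });
        System.out.println("body: " + body.get());

        exec.shutdown();
        selector.close(); sender.close(); receiver.close(); server.close();
    }
}
```

Note the cancel/selectNow step: a channel cannot be put back into blocking mode while it is still registered with a selector.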

Michael
