> On 6 Apr 2016, at 21:28, Roger Riggs <roger.ri...@oracle.com> wrote:
> 
>>> Reader.java:
>>> - line 137:  Why should the reader force a new task?  Should it be left to 
>>> the subscriber to decide whether its processing is lightweight and can be 
>>> done immediately, or to re-submit to its own executor?
>>> 
>> I was thinking about getting Reader back to reading ASAP, and not 
>> _potentially_
>> wasting its time in the subscriber. I might rethink this. It is a decoupling 
>> obsession :-)
>> Thanks for pointing this out.
>> 
> I don't think there's an issue with throughput in the Reader; the OS is 
> buffering, and the work of the reader is copying from the system's internal 
> buffers to the direct byte buffer, a CPU-bound task.
> 
> Also consider that in delivering a buffer to a subscriber, the caller has no 
> idea how much work the subscriber needs to do; it may be trivial or a lot. 
> In either case, the subscriber is in the best position to decide if a 
> separate task is warranted (at least in the general case).
> Except for the callout to the Listener, all the tasks are wholly known to 
> the WS implementation. I think the listener should be given the choice about 
> whether to create a new task.
> The recommendation can be to return from listener callbacks promptly; but 
> it's their app if they don't take the advice.

Good point. Pretty much the same thing was mentioned by Simone earlier.
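
For illustration, something along these lines would let the receiving side
make that call for itself (a rough sketch with the JDK 9
java.util.concurrent.Flow types rather than the actual WebSocket API;
OffloadingSubscriber and processHeavily are made-up names):

    import java.nio.ByteBuffer;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Flow;

    // A subscriber that decides for itself whether the work is cheap enough
    // to do inline or should be handed off to its own executor, leaving the
    // caller (e.g. the Reader) free to go back to reading immediately.
    class OffloadingSubscriber implements Flow.Subscriber<ByteBuffer> {

        private final ExecutorService executor =
                Executors.newSingleThreadExecutor();
        private Flow.Subscription subscription;

        @Override
        public void onSubscribe(Flow.Subscription subscription) {
            this.subscription = subscription;
            subscription.request(1);
        }

        @Override
        public void onNext(ByteBuffer buffer) {
            // Heavy processing is re-submitted to the subscriber's own
            // executor; trivial processing could simply be done inline.
            executor.submit(() -> {
                processHeavily(buffer);   // hypothetical application logic
                subscription.request(1);  // ask for the next buffer when done
            });
        }

        @Override
        public void onError(Throwable t) { executor.shutdown(); }

        @Override
        public void onComplete() { executor.shutdown(); }

        private void processHeavily(ByteBuffer buffer) {
            // placeholder for the application's own work
        }
    }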

>>> java.net.Utils:
>>> - line 231:  SATURATING_INCREMENT (and its uses) is wasted code; long is 
>>> not going to roll over.
>>> 
>> It's in order to go with the flow :-) I mean the Flow/Reactive Streams API. I
>> believe it makes the implementation more robust. As [1] mentions, total demand
>> may exceed Long.MAX_VALUE, in which case it is to be treated as effectively
>> unbounded. It's a corner case anyway.
>> 
>> Would you suggest throwing an IAE in the case where the implementation detects
>> overflow?
>> 
> No, it is NOT going to overflow;  it can handle 1Gb/sec for 292 years.  
> Imagine one connection staying in use for that long.

Roger, I totally understand this reasoning. I'm not worried about the physical
ability of any modern system to exhaust this range in a reasonable time. The
point I'm trying to make is this: it's better if the user never has to think
about overflow at all. So the following sequence of calls would be perfectly
fine, silently levelling off at Long.MAX_VALUE:

   ws.request(Long.MAX_VALUE);
   ws.request(Long.MAX_VALUE);
   ws.request(Long.MAX_VALUE);
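
All the implementation needs for that is a tiny saturating-add helper,
something along these lines (just a sketch, not the actual java.net.Utils
code):

    // Adds two non-negative demand values, clamping at Long.MAX_VALUE
    // instead of letting the sum wrap around into the negative range.
    static long saturatingAdd(long current, long increment) {
        long sum = current + increment;
        return sum < current ? Long.MAX_VALUE : sum;  // overflow => saturate
    }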

Interestingly enough, java.util.concurrent.SubmissionPublisher has the same
approach:

        /**
         * Adds to demand and possibly starts task.
         */
        public void request(long n) {
            if (n > 0L) {
                for (;;) {
                    long prev = demand, d;
                    if ((d = prev + n) < prev) // saturate
                        d = Long.MAX_VALUE;
                    if (U.compareAndSwapLong(this, DEMAND, prev, d)) {
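                // ... (rest of the method elided)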


I wonder what the main reason was for the JSR-166 folks to do it like this?
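
For what it's worth, the caller-visible behaviour is easy to observe with the
JDK 9 Flow API (a throwaway demo; DemandSaturationDemo is just a made-up
name):

    import java.util.concurrent.Flow;
    import java.util.concurrent.SubmissionPublisher;
    import java.util.concurrent.TimeUnit;

    public class DemandSaturationDemo {
        public static void main(String[] args) throws InterruptedException {
            try (SubmissionPublisher<String> publisher =
                         new SubmissionPublisher<>()) {
                publisher.subscribe(new Flow.Subscriber<String>() {
                    @Override public void onSubscribe(Flow.Subscription s) {
                        // The total requested demand exceeds Long.MAX_VALUE;
                        // the publisher saturates instead of overflowing.
                        s.request(Long.MAX_VALUE);
                        s.request(Long.MAX_VALUE);
                    }
                    @Override public void onNext(String item) {
                        System.out.println("got " + item);
                    }
                    @Override public void onError(Throwable t) {
                        t.printStackTrace();
                    }
                    @Override public void onComplete() {
                        System.out.println("done");
                    }
                });
                publisher.submit("a");
                publisher.submit("b");
            } // close() lets the subscriber drain the buffer, then completes it
            TimeUnit.SECONDS.sleep(1);  // crude wait for asynchronous delivery
        }
    }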

>>> - line 236: NULL_BYTE_BUFFER isn't null - perhaps rename to 
>>> EMPTY_BYTE_BUFFER
>>> 
>> Thanks. It's already been mentioned by Andrej Golovnin. The name is bad, sorry
>> about that (though I was referring to the Null Object pattern, rather than to
>> the null literal).
>> 
>> I don't think EMPTY is a perfect name. What is an *empty* ByteBuffer? In my
>> opinion it's not necessarily one with a capacity of 0; it's one that reports
>> hasRemaining() == false. I want the buffer's name to reflect both qualities.
>> But maybe I'm overcomplicating this naming thing.
>> 
> Empty = no bytes; either way, I think you can do without it by clean use of 
> the offset into the array of ByteBuffers.
> It's internal, so it doesn't matter so much.

okay
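
Just to spell out the distinction I had in mind (a throwaway snippet, not from
the webrev; EmptyBufferExample is a made-up name):

    import java.nio.ByteBuffer;

    public class EmptyBufferExample {
        public static void main(String[] args) {
            // Two different notions of "empty":
            ByteBuffer zeroCapacity = ByteBuffer.allocate(0);  // capacity() == 0
            ByteBuffer drained = ByteBuffer.allocate(16);
            drained.position(drained.limit());  // capacity() == 16, position == limit

            // Both report no remaining bytes, which is the property the
            // constant was really meant to capture:
            System.out.println(zeroCapacity.hasRemaining());   // false
            System.out.println(drained.hasRemaining());        // false
        }
    }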

Thanks for your time, Roger!
