On 28/09/15 11:23, Simone Bordet wrote:
Hi,

On Mon, Sep 28, 2015 at 11:38 AM, Michael McMahon
<michael.x.mcma...@oracle.com> wrote:
if the units are ByteBuffers, say, then how do you convert a notification
of n units into an HTTP/2 window update (which has to be in bytes)?
The implementation always has this information because it's the one
that passed the ByteBuffer to the application, so it would be trivial
for the implementation to make this conversion.
Trivial for n = 1, perhaps. For n > 1 the implementation must keep track
of the size of each ByteBuffer (the number of bytes it contains) passed
to the application, and as each buffer is acknowledged the window can be
updated by the corresponding number of bytes. So, while the buffers
themselves are typically of fixed size, the amount of data they contain
might not be, as each probably corresponds to a packet on the network
(which can obviously vary in size).
I think it's a little odd, but it is doable.
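
For what it's worth, the kind of bookkeeping I have in mind would look
roughly like this (just a sketch; the class and method names are made up
for illustration, not taken from any API draft):

import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch: remember the byte count of each buffer handed to
// the application, so that an acknowledgement of n buffers can be turned
// into an HTTP/2 WINDOW_UPDATE expressed in bytes.
class WindowUpdateTracker
{
    private final Queue<Integer> outstandingSizes = new ArrayDeque<>();

    // Called by the implementation just before passing a buffer up.
    synchronized void onBufferDelivered(ByteBuffer buffer)
    {
        outstandingSizes.add(buffer.remaining());
    }

    // Called when the application acknowledges n buffers; returns the
    // number of bytes by which the flow control window can be enlarged.
    synchronized int onBuffersAcknowledged(int n)
    {
        int bytes = 0;
        for (int i = 0; i < n && !outstandingSizes.isEmpty(); i++)
            bytes += outstandingSizes.remove();
        return bytes;
    }
}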

Another question is what value would the application/response processor use
as the initial window size? Without knowing something about the likely buffer
size, it is hard to know what to use (except the value 1).

Perhaps, if n is always required to be equal to 1, then you could do it,
because you would always be referring to the last buffer received,
but it seems trickier if n > 1.
Consider onRequestBodyChunk(ByteBuffer) as Flow.Subscriber.onNext(ByteBuffer).
What an application wants to say here is "give me another buffer", not
"give me 42 bytes".

The 42 bytes are irrelevant because the implementation certainly won't
read *only* 42 bytes from the network: it typically has buffers of a
certain size that it reuses for network reads.

The application will always be in control if it really needs 42 bytes,
whatever the length of the buffer the implementation passes to it.
It can buffer the bytes if there are fewer than 42, it can slice the
buffer if there are more than 42, it can do whatever it needs to do to
comply with the 42-byte requirement.
What it needs from the implementation is just another ByteBuffer.
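
To make it concrete, the application side could look roughly like this
(a sketch only, assuming the Flow.Subscriber interface mentioned above;
the 42-byte requirement and the class name are just for illustration):

import java.nio.ByteBuffer;
import java.util.concurrent.Flow;

// Sketch: an application that needs exactly 42 bytes, whatever the size
// of the buffers the implementation passes to it.
class FortyTwoBytesSubscriber implements Flow.Subscriber<ByteBuffer>
{
    private final ByteBuffer wanted = ByteBuffer.allocate(42);
    private Flow.Subscription subscription;

    @Override
    public void onSubscribe(Flow.Subscription subscription)
    {
        this.subscription = subscription;
        subscription.request(1); // "give me another buffer"
    }

    @Override
    public void onNext(ByteBuffer buffer)
    {
        // Copy at most what is still missing, slicing if the buffer is larger.
        int take = Math.min(buffer.remaining(), wanted.remaining());
        ByteBuffer slice = buffer.slice();
        slice.limit(take);
        wanted.put(slice);

        if (wanted.hasRemaining())
        {
            subscription.request(1); // still short of 42 bytes, ask for another buffer
        }
        else
        {
            wanted.flip();
            process(wanted); // got the 42 bytes, do the application work
        }
    }

    private void process(ByteBuffer bytes) { /* application logic */ }

    @Override
    public void onError(Throwable failure) { }

    @Override
    public void onComplete() { }
}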

What I was saying was that completion of the response also implies
completion of the request (i.e. that it isn't notified until after both
have happened).
Okay.

Why would it break the contract? Note, I am referring to
HttpRequest.cancel(), not CompletableFuture.cancel().
Okay.
Still, there is no way to specify an async timeout (and the others I mentioned).

Consider this:

try
{
    response = responseFuture.get(5, SECONDS);
}
catch (TimeoutException x)
{
    // The response may have arrived after 5.1 s, but before we
    // execute the code below.

    // Aborting a request that may instead be successful!
    boolean cancelled = request.cancel();
    if (!cancelled)
    {
        // May have succeeded or failed, try again.
        try
        {
            response = responseFuture.get();
        }
        catch (Exception xx)
        {
            // Yuck
        }
    }
}

Note that HttpRequest.cancel() does not return a boolean saying whether
the request was really cancelled, or whether the response arrived in the
meantime.
I think it should (must).

Now compare to this:

try
{
    response = client.newRequest(uri)
            .timeout(5, SECONDS)
            .method("GET")
            .response();
}
catch (TimeoutException x)
{
    // Request already atomically aborted by the implementation.
}

Note that the latter solution will also work for async timeouts:

client.newRequest(uri)
        .timeout(5, SECONDS)
        .method("GET")
        .responseAsync()
        .handle((r, x) ->
        {
            if (x instanceof TimeoutException)
            {
                // handle timeout
            }
            return null; // handle() requires a result
        });

The API to specify timeouts is the same for both sync and async cases.

The idiom an application writer has to use with Future.get(timeout) is
just cumbersome; I would advise against any usage of the Future methods
in the API.
CompletableFuture is fine, as long as you don't use the Future methods.
If an application wants to use Future methods, they are on their own,
but at least they are not forced to.
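
For example, the whole exchange can be consumed with CompletionStage-style
methods only, never touching get() (again a sketch, reusing the hypothetical
responseAsync() method from the snippets above):

client.newRequest(uri)
        .timeout(5, SECONDS)
        .method("GET")
        .responseAsync()
        .thenAccept(response -> { /* use the response */ })
        .exceptionally(x ->
        {
            // Timeouts and other failures arrive here, without blocking.
            return null;
        });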


We might reconsider this aspect of the API. Currently there is no timeout
for blocking calls, which I guess is problematic.

Thanks
Michael
