Hi Pavel,

I was wondering if you could address the concerns I raised about it not
actually using Reactive Streams. It would be great if, having created a
WebSocket, a user could simply plumb it into another Reactive Streams
source/sink without writing any glue code, but with this API they will
first have to write a Publisher/Subscriber that wraps it. I understand
why the current API exists: there are different types of messages
(binary, text, ping/pong), and I think it makes sense to keep that API
for simple use cases. But it would be nice if a Publisher/Subscriber
API were provided in addition; otherwise application developers using
Reactive Streams are going to reimplement the same thing every time
they need it.
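
To illustrate the kind of glue I mean, here is a rough sketch. It is
only a sketch: the listener callback is reduced to a hypothetical
stand-in interface (not the real WebSocket.Listener signature), and
SubmissionPublisher stands in for a hand-rolled Publisher.

```java
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Hypothetical stand-in for the listener callback shape; the real
// WebSocket.Listener has more message types and a WebSocket argument.
interface TextListener {
    void onText(CharSequence data, boolean isLast);
}

// The glue every application would otherwise rewrite: buffer partial
// frames and republish each whole message as a Flow.Publisher element.
class PublishingListener implements TextListener {
    // SubmissionPublisher implements Flow.Publisher<String>, so this
    // plugs straight into any Reactive Streams subscriber.
    private final SubmissionPublisher<String> publisher =
            new SubmissionPublisher<>();
    private final StringBuilder partial = new StringBuilder();

    SubmissionPublisher<String> messages() {
        return publisher;
    }

    @Override
    public void onText(CharSequence data, boolean isLast) {
        partial.append(data);                 // buffer a partial frame
        if (isLast) {                         // emit only whole messages
            publisher.submit(partial.toString());
            partial.setLength(0);
        }
    }

    void close() {
        publisher.close();                    // completes the stream
    }
}
```

Note that SubmissionPublisher.submit blocks while every subscriber's
buffer is full, so a natural place to call webSocket.request(1) would be
right after submit returns: subscriber demand then indirectly throttles
the socket.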

Regards,

James



On 11 February 2018 at 20:24, Pavel Rappo <pavel.ra...@oracle.com> wrote:

>
> > On 9 Feb 2018, at 18:16, Chuck Davis <cjgun...@gmail.com> wrote:
> >
> > I've been using jdk8 websockets to develop my desktop java
> > applications.  Now that jdk9 is on my machine I started looking at
> > websockets and I'm not at all sure I like what I see.  Can someone
> > familiar with this feature please explain the rationale for what is
> > happening?
> >
> > I'm concerned, at this initial stage, primarily by
> > WebSocket.request(long).  This "feature" seems to have at least two
> > very negative impacts:
> >
> > 1)  It appears to destroy the asynchronous nature of websockets;
> > 2)  It appears to place programmers in the impossible position of
> > guessing how many messages the server side might send.
> >
> > 1)  If everything has to stop while the client asks for more messages,
> > asynchronous communication is nullified.  The jdk8 implementation is,
> > therefore, much more functional.
> >
> > 2)  It would appear the only logical use of WebSocket.request() would
> > be to pass Long.MAX_VALUE, since there is no way to know how many
> > messages may be received.  And what if the programmer asks for 1
> > message and the next message has 3 parts?  We're screwed.
> > Additionally, the documentation specifically states that the
> > WebSocket.Listener does not distinguish between partial and whole
> > messages.  The jdk8 implementation's decoders/encoders accumulate
> > message parts and assemble them until the message is complete, then
> > pass it to the Endpoint -- a much more satisfactory arrangement.
> >
> > I cannot fathom the meaning or improvement of this new wrinkle.
> >
> > Thanks for any insight
>
> Hello Chuck,
>
> James is right: this API is heavily influenced by Reactive Streams
> [1]. It is not about how many messages the peer is willing to send; it
> is about how many messages you are ready to receive. Otherwise you
> leave things to chance, because without specific measures one should
> not expect a consumer and a producer to have the same throughput.
>
> Correct me if I'm wrong, but when you say that WebSocket has an
> asynchronous nature, I guess what you actually mean is that each peer
> can send data at will, and that a peer can be sending and receiving
> data simultaneously. That would more precisely be called bidirectional,
> full-duplex communication. It is the API that can be asynchronous, and
> the API in question is fully asynchronous. A method that sends a
> message does not wait for the message to be sent; instead it returns a
> pending result of the operation. The same is true for requesting
> messages: WebSocket.request(long) acts asynchronously. It simply
> records how many extra messages the client is ready to receive.
> Finally, each of the listener's methods asks the user to signal when
> the user has processed the message it bears [2].
>
> The rationale behind this is to make it possible to avoid unnecessary
> copying of data between buffers when such copying is considered
> costly, and, as usual with asynchrony, not to tie up too many threads
> of execution.
>
> Consider the following example of a toy echoing client:
>
>     WebSocket.Listener echo = new WebSocket.Listener() {
>
>         @Override
>         public void onOpen(WebSocket webSocket) {
>             webSocket.request(1);
>         }
>
>         @Override
>         public CompletionStage<?> onBinary(WebSocket webSocket,
>                                            ByteBuffer message,
>                                            MessagePart part)
>         {
>             boolean isLast = (part == WHOLE || part == LAST);
>             CompletableFuture<WebSocket> sent =
>                 webSocket.sendBinary(message, isLast);
>             sent.thenAcceptAsync(ws -> ws.request(1));
>             return sent;
>         }
>     };
>
> The next message will be requested after the previous one has been
> echoed back to the server. In this example there is zero copying from
> the API user's perspective.
>
> Practically speaking, could it be any more asynchronous?
>
> Mind you, going one-by-one (i.e. request(1) after consumption) is not
> the only possibility. Back-pressure provides quite flexible control
> over consumption. Have a look at java.util.concurrent.Flow. The spec
> for that type provides a very nice example (SampleSubscriber) of
> buffering.
>
> Chuck, if you still think this API is missing some crucial capability,
> I will kindly ask you to provide an example of an application that can
> be implemented in a non-back-pressured model but cannot be implemented
> with the proposed WebSocket API.
>
> Thanks,
> -Pavel
>
> ---------------------------------------------------
> [1] https://github.com/reactive-streams/reactive-streams-jvm/blob/3716ba57f17e55ec414aed7ccb1530251e193b61/README.md#goals-design-and-scope
> [2] One does not have to signal that the message has been processed in
>     order to receive the next message, if there are any. The signal is
>     about handing over the message contents, not about back-pressure.
>
>
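
For anyone who wants to experiment, the one-by-one discipline in
Pavel's echo example can be reproduced with java.util.concurrent.Flow
alone, no network required. A minimal sketch of my own (the same idea
as, though simpler than, the SampleSubscriber in the Flow spec):

```java
import java.util.concurrent.Flow;
import java.util.function.Consumer;

// A subscriber that signals demand one item at a time: request(1) is
// issued only after the previous item has been processed, which is the
// same back-pressure discipline as webSocket.request(1) after
// consumption in Pavel's echo listener.
class OneByOneSubscriber<T> implements Flow.Subscriber<T> {
    private final Consumer<? super T> handler;
    private Flow.Subscription subscription;

    OneByOneSubscriber(Consumer<? super T> handler) {
        this.handler = handler;
    }

    @Override
    public void onSubscribe(Flow.Subscription subscription) {
        this.subscription = subscription;
        subscription.request(1);      // initial demand: a single item
    }

    @Override
    public void onNext(T item) {
        handler.accept(item);         // consume first...
        subscription.request(1);      // ...then ask for the next one
    }

    @Override
    public void onError(Throwable t) {
        t.printStackTrace();
    }

    @Override
    public void onComplete() {
    }
}
```

Feeding it from a SubmissionPublisher shows the publisher never runs
ahead of the subscriber's demand, which is exactly the property the
request(long) design is after.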


-- 
*James Roper*
*Senior Octonaut*

Lightbend <https://www.lightbend.com/> – Build reactive apps!
Twitter: @jroper <https://twitter.com/jroper>
