Hi Simone,

> On 5 Apr 2016, at 21:16, Simone Bordet <simone.bor...@gmail.com> wrote:
> 
> Hi,
> 
> Sure, the caller must not block.
> But there is no need to dispatch to achieve that when all code is
> non-blocking already.

Sorry, could you please explain this to me in more detail? I'm not sure I'm
following. 

Let's suppose we have a ByteBuffer to send. This ByteBuffer contains 1 MB of
data, the socket send buffer is 16 KB, and the network is not particularly
fast. Suppose the first write fills the 16 KB send buffer completely. So
WebSocket has 1,048,576 - 16,384 bytes left to write, but the buffer is still
full.

What should I do next in order to be nonblocking within this invocation?
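To make the numbers concrete, here is a minimal, self-contained sketch of the situation. The `nonBlockingWrite` method is hypothetical: it only simulates what a `SocketChannel.write()` against a full 16 KB send buffer might return on the first pass.

```java
import java.nio.ByteBuffer;

public class PartialWriteDemo {
    static final int SEND_BUFFER = 16 * 1024;

    // Hypothetical stand-in for a non-blocking SocketChannel.write():
    // consumes at most SEND_BUFFER bytes (what the kernel buffer can take)
    // and returns the number of bytes "written".
    static int nonBlockingWrite(ByteBuffer src) {
        int n = Math.min(src.remaining(), SEND_BUFFER);
        src.position(src.position() + n);
        return n;
    }

    public static void main(String[] args) {
        ByteBuffer message = ByteBuffer.allocate(1024 * 1024); // 1 MB
        int written = nonBlockingWrite(message);
        // written=16384, remaining=1032192 -- the caller cannot spin here
        // without effectively blocking; the leftover bytes have to be
        // parked until the channel signals writability again.
        System.out.println("written=" + written + " remaining=" + message.remaining());
    }
}
```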

>> Are you specifically saying that TEXT decoding, masking of the
>> payload, and sending should all happen in a single task?
> 
> Yes. These are all in-memory operations that do not block (even I/O is
> non-blocking).
> It's not only text decoding, it's everything: the binary variant of
> send(), and on the read side too.
> 
> No dispatches, no thread pools, no queues, no synchronization, no
> deadlocks, huge simplification of the implementation and much more CPU
> friendly.
> 
>> What the implementation does is to split these into separate tasks,
>> so that they can be performed in parallel for large messages.
> 
> For a single thread calling send(), they are not performed in
> parallel. They are performed sequentially across 3 dispatches.

What Chris probably meant (Chris surely knows better) is that while the first
message is being sent, the implementation can start decoding and masking the
second one. We don't have to wait for the current send to complete before
starting to prepare the next one.

Moreover, with a Text message a similar scenario might take place even within a
single message. If one gives the implementation a huge CharSequence, it will be
encoded in fragments no larger than the size of the ByteBuffers in the internal
pool.
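A rough sketch of that fragmenting encode (names and the fixed 16 KB buffer size are hypothetical, not the actual pool implementation):

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CoderResult;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class FragmentingEncoder {

    // Encodes text into UTF-8 fragments, each at most bufSize bytes --
    // bufSize standing in for the size of the pooled ByteBuffers.
    static List<ByteBuffer> encode(CharSequence text, int bufSize) {
        CharsetEncoder enc = StandardCharsets.UTF_8.newEncoder();
        CharBuffer in = CharBuffer.wrap(text);
        List<ByteBuffer> fragments = new ArrayList<>();
        while (true) {
            ByteBuffer out = ByteBuffer.allocate(bufSize);
            CoderResult r = enc.encode(in, out, true); // all input is present
            if (r.isError()) throw new IllegalArgumentException(r.toString());
            if (r.isUnderflow()) enc.flush(out);       // input exhausted
            out.flip();
            if (out.hasRemaining()) fragments.add(out);
            if (r.isUnderflow()) break;                // otherwise out overflowed
        }
        return fragments;
    }

    public static void main(String[] args) {
        // 100,000 ASCII chars with 16 KB buffers -> 7 fragments
        List<ByteBuffer> frags = encode("a".repeat(100_000), 16 * 1024);
        System.out.println(frags.size() + " fragments");
    }
}
```

Each fragment becomes available for masking and sending as soon as it is filled, without waiting for the rest of the CharSequence to be encoded.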

> If you mean different threads calling send(), then you already have 
> parallelism.
> 
> If you mean the same thread calling send() in a tight loop, what you
> want is exactly to push back the sender by performing the tasks in the
> caller thread, rather than queuing and returning immediately.
> Imagine 8 cores and 8 tight loops: the threads in the pool will have
> little chance to run, causing the thread pool queue to grow
> indefinitely.

Sorry Simone, which thread pool are you talking about? If you mean
SignalHandler, it doesn't have a queue; it's built on "opportunistic" repeating
of the same task over and over again.
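To illustrate the shape of that idea (this is a hypothetical sketch, not the actual SignalHandler code): a caller that raises an atomic counter from zero owns the run loop and re-runs the task until every signal raised in the meantime is consumed; everyone else just bumps the counter and returns.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class Signaller {
    private final AtomicInteger signals = new AtomicInteger();
    private final Runnable task;

    Signaller(Runnable task) { this.task = task; }

    // Queue-less "opportunistic" execution: no queue, no blocking,
    // no extra threads -- just a counter and a run loop.
    void signal() {
        if (signals.getAndIncrement() == 0) {   // we won the race: we run
            do {
                task.run();
            } while (signals.decrementAndGet() != 0); // re-run for signals
        }                                             // that arrived meanwhile
    }

    public static void main(String[] args) {
        AtomicInteger runs = new AtomicInteger();
        Signaller s = new Signaller(runs::incrementAndGet);
        s.signal();
        s.signal();
        s.signal();
        System.out.println("task ran " + runs.get() + " times"); // 3 times
    }
}
```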

>> Are you suggesting that this is not really worth it? At least for small
>> messages, or maybe not at all.  It would simplify the implementation
>> somewhat.
> 
> I don't see any reason to perform 3 dispatches for a single send() -
> not even 1 dispatch.
> I am suggesting to do zero dispatches, independently from the size of
> the message.
> 
> The only reason to perform a dispatch is to protect the implementation
> from application code that may block.
> This happens on the read side only, when the implementation is calling
> WebSocket.Listener implementations - application code can block
> forever.

May I ask on which thread you think WebSocket.Listener invocations should be
dispatched?

On another topic. Let's consider many different WebSocket connections that have
been established simultaneously, but whose nature is such that they all run at
different speeds. How would you suggest one organize and scale this?

Thanks a lot! I appreciate your comments, they give a lot of food for thought.
