Yes, that is true, but it is also tricky. If the impl can guarantee that a
Pong frame is sent for the *most recent* Ping frame, then it is ok. But not
responding to the most recent Ping frame would cause problems.

On Thu, May 5, 2016 at 4:47 AM, Pavel Rappo <pavel.ra...@oracle.com> wrote:
>
>> On 5 May 2016, at 00:20, Jitendra Kotamraju <jit...@gmail.com> wrote:
>>
>> * I see that there is an issue for autoponging. Maybe this falls under
>> it. The default impl of onPing() doesn't send PONG for *every* received
>> PING.
>
> Yes, that's correct. The default implementation sends a Pong in response
> to a received Ping only if all the previous Pongs have been sent.
>
>> Since it is against the protocol semantics, it forces all applications
>> to override the default onPing() impl. Is that right?
>
> Strictly speaking, not sending a Pong for every Ping is not against the
> protocol semantics [1]:
>
>     If an endpoint receives a Ping frame and has not yet sent Pong
>     frame(s) in response to previous Ping frame(s), the endpoint MAY
>     elect to send a Pong frame for only the most recently processed
>     Ping frame.
>
>> It seems to me that the basic issue here is that the API allows only
>> one outstanding send operation. That is also limiting for applications,
>> as applications need to do more work.
>
> Yes, you're right saying that one outstanding write is an issue here,
> and we're planning to go away from it [2], for these same reasons you've
> stated. But that doesn't mean the default "autoponging" behaviour will
> change. It has its purpose. It protects the user from possible bloating
> of the outgoing queue.
>
> Imagine if the implementation would send a pong for every ping it
> receives and the rate at which pongs are sent is lower than the rate at
> which pings are received. That means that your outgoing queue will grow
> uncontrollably. And what's worse is that you won't even know about it.
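[Editor's note: the queue-growth scenario can be made concrete with a toy model. This is entirely hypothetical illustration code, not the JDK implementation; the class name, tick loop, and rates are invented. It shows that when pings arrive faster than the transport drains pongs, a pong-per-ping policy makes the backlog grow linearly.]

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PongQueueGrowth {

    // Returns how many pongs are still queued after `ticks` rounds,
    // given an inbound ping rate and an outbound drain rate per tick.
    static int queuedAfter(int ticks, int pingsPerTick, int pongsSentPerTick) {
        Deque<byte[]> outgoing = new ArrayDeque<>();
        for (int t = 0; t < ticks; t++) {
            // Enqueue a pong for every ping received this tick.
            for (int i = 0; i < pingsPerTick; i++) {
                outgoing.add(new byte[0]);
            }
            // The transport only manages to write out a few of them.
            for (int i = 0; i < pongsSentPerTick && !outgoing.isEmpty(); i++) {
                outgoing.poll();
            }
        }
        return outgoing.size();
    }

    public static void main(String[] args) {
        // 3 pings in, 1 pong out per tick: backlog grows by 2 each tick.
        System.out.println("queued pongs: " + queuedAfter(100, 3, 1));
    }
}
```

With 3 pings arriving and 1 pong draining per tick, 100 ticks leave 200 unsent pongs queued; with matched rates the queue stays empty.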
> Because pings are not communicated to you (as that was the whole point
> of doing it automatically).
>
> Now, that could only happen if you don't correlate message requests with
> completion of send operations. But it is possible, and could be done
> easily.
>
> The API is designed to provide a reactive way of working. In other
> words, you get messages only if you ask for them. This way you fully
> control resource exhaustion on your side. But replying to every single
> ping opens a bypass to that mechanism.
>
> I think we should put a limit on the maximum number of queued automatic
> pongs. I'm not saying the limit should be strictly 1, but it should be a
> reasonable value. Think of it as a pressure-relief safety valve.
>
>> * Since the API allows these callback methods to be called by different
>> threads, it is the application's responsibility to manage memory
>> visibility, thread safety, etc., right?
>
> No, you don't need to manage memory visibility, and there's no need to
> worry about concurrent executions, as the listener's methods are never
> run concurrently. The API guarantees you all that:
>
>     * <p> All methods are invoked in a sequential (and
>     * <a href="../../../java/util/concurrent/package-summary.html#MemoryVisibility">
>     * happens-before</a>) order, one after another, possibly by different
>     * threads.
>
> That's, of course, unless you decide to execute some task asynchronously
> from your listener. In which case you'll have to take care of all of the
> above by yourself.
>
> --------------------------------------------------------------------------------
> [1] https://tools.ietf.org/html/rfc6455#section-5.5.3
> [2] https://bugs.openjdk.java.net/browse/JDK-8155621 (WebSocket API tasks, item 12)
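[Editor's note: the "pressure-relief safety valve" proposed in the message above could be sketched roughly as follows. This is a hypothetical sketch, not anything specified by the API; the class, method names, and the drop-oldest policy are all assumptions. Dropping older pending pongs is protocol-legal because RFC 6455, 5.5.3 permits ponging only the most recently processed ping.]

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Caps how many automatic pongs may be queued at once. When the cap is
// reached, the oldest pending pong is dropped in favour of the newest,
// which RFC 6455 allows: only the most recent ping needs a reply.
public class BoundedAutoPonger {

    private final int maxQueuedPongs;              // the "reasonable value"
    private final Deque<ByteBuffer> pending = new ArrayDeque<>();

    BoundedAutoPonger(int maxQueuedPongs) {
        this.maxQueuedPongs = maxQueuedPongs;
    }

    // Invoked for every received ping; queues a pong carrying its payload.
    void onPing(ByteBuffer payload) {
        if (pending.size() == maxQueuedPongs) {
            pending.pollFirst();                   // relieve pressure: drop oldest
        }
        pending.addLast(payload);
    }

    // Invoked when the transport is ready to write the next frame;
    // returns null when no pong is pending.
    ByteBuffer nextPongToSend() {
        return pending.pollFirst();
    }

    int queued() {
        return pending.size();
    }
}
```

The point of the cap is exactly the valve described above: the queue can never grow past `maxQueuedPongs`, no matter how fast pings arrive, and the newest ping is always the one answered.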