Thanks, Pavel, for the additional information. Text payloads will be easy to handle; ByteBuffers, apparently, not so much. The major issues I see now: they can't grow, and the methods I'd want to use are abstract, optional, or both. So each implementation I run on will have to be tested to see whether the JVM even implements the methods...such basic things as get() and put(). I'm researching a good buffering mechanism now. If you (or anyone else lurking here) have suggestions for how best to buffer ByteBuffers so that serialized objects can be deserialized, I'm wide open to them. I didn't have this issue with the jdk8 WebSocket, since I only got complete messages to decode. Backpressure was not an issue in my development environment, but I have no problem understanding why it might be needed.
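For what it's worth, one common approach (just a sketch, not anything from the API itself; the class and method names here are made up for illustration) is to copy each partial payload's readable bytes into a ByteArrayOutputStream, which grows on its own, and only materialize the whole message when the last fragment arrives:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

// Hypothetical helper: accumulates partial ByteBuffer payloads into one
// growing byte stream until the final fragment of the message arrives.
class MessageAssembler {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    // Copy the readable bytes of one partial payload into the stream.
    // Bulk get(byte[]) is a concrete method on ByteBuffer, so it is
    // available on every buffer implementation.
    void append(ByteBuffer part) {
        byte[] chunk = new byte[part.remaining()];
        part.get(chunk);
        buffer.write(chunk, 0, chunk.length);
    }

    // Call when the last fragment has been appended; the result can be
    // fed to an ObjectInputStream.
    byte[] complete() {
        return buffer.toByteArray();
    }
}
```

The copy is unavoidable anyway (the fragments don't share contiguous memory), so letting a plain byte stream do the growing keeps the bookkeeping out of your listener.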
With the test database I'm currently working with, a typical payload will be about 10k in size (many smaller, but some larger). I'm going to have to devise a GOOD way to buffer these partial messages into a whole buffer/stream before I submit it to an ObjectInputStream to deserialize the objects. Of course, when I connect to a real, live database, a payload of 100k or more will not be an unusual event. I am well aware I'm not the sharpest knife in the drawer, so I welcome strategies for which objects to use, and how, to accomplish the task of putting Humpty Dumpty back together again after he has been broken into many pieces of ByteBuffer.

CD

On Wed, Feb 14, 2018 at 10:38 AM, Pavel Rappo <pavel.ra...@oracle.com> wrote:
>
> It does however seem that optimization-wise there is not much to be gained
> from doing this. As different parts of a whole message (i.e. payloads of
> WebSocket frames) do not occupy contiguous parts of the underlying memory.
> In order to be concatenated, payload pieces would need to be cut out of
> their frames first. Which means copying. In this context it might have been
> better if we decided to use ByteBuffer[] instead of ByteBuffer in the first
> place. But that seemed like an overkill back then.
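Once the fragments have been stitched into one byte[], the deserialization step itself is straightforward; a minimal sketch (the helper name is hypothetical, and this assumes the peer wrote the payload with an ObjectOutputStream):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;

// Hypothetical helper: deserialize a fully reassembled message.
class Deserializer {
    static Object fromBytes(byte[] whole) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(whole))) {
            return in.readObject();
        }
    }
}
```

The try-with-resources block makes sure the stream is closed even if readObject() throws; a 10k or 100k payload is no problem for an in-memory stream like this.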