On 08.02.2016 19:41, Jason Ricles wrote:
I have an application that sends binary websocket messages between a
Java class and the web application, using a websocket server written
in Java.
The data sent from the Java class is encoded in a binary buffer with
the bytes in ISO8859_1. However, when I receive the bytes on the
websocket server and on the web application end, they come out as junk
(such as -121, -116, etc.) and are not decoded the way they need to
be.
I was reading that this might be caused by my websocket server and
web application defaulting to UTF-8 instead of ISO8859_1.
Is there any way I can change my websocket server and my web
application (which uses JavaScript) to use ISO8859_1 instead of UTF-8?
Now, is it Java or JavaScript? (Earlier you say "sent from the java class"...)
For a proper "correct" solution, the client sending text data to the server should tell
the server what character set/encoding is used for that data (via some kind of "header",
for example). That way, the server can always read the text data and decode it the
proper way.
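A minimal sketch of that idea in Java (the one-byte charset marker and its values below
are made up for illustration, not part of any standard; client and server just have to
agree on them):

    import java.nio.ByteBuffer;
    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class CharsetFraming {

        // Hypothetical marker values; agree on these between client and server.
        private static final byte CS_ISO_8859_1 = 1;
        private static final byte CS_UTF_8      = 2;

        // Sending side: prepend the marker byte to the encoded text.
        static ByteBuffer encode(String text, byte marker, Charset cs) {
            byte[] payload = text.getBytes(cs);
            ByteBuffer buf = ByteBuffer.allocate(1 + payload.length);
            buf.put(marker).put(payload).flip();
            return buf;
        }

        // Receiving side: read the marker first, then decode the rest accordingly.
        static String decode(ByteBuffer buf) {
            byte marker = buf.get();
            Charset cs = (marker == CS_ISO_8859_1)
                    ? StandardCharsets.ISO_8859_1
                    : StandardCharsets.UTF_8;
            byte[] payload = new byte[buf.remaining()];
            buf.get(payload);
            return new String(payload, cs);
        }
    }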
If you are /sure/ that this server socket, now and in the future, will only ever receive
text data from this particular version of your Java/JavaScript client code, and that the
text will always be encoded as iso-8859-1, then you should at least make sure that the
server code which reads and decodes this data does so as iso-8859-1, which is usually not
the default character set for Java.
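For example, with a JSR-356 endpoint on the server, decoding explicitly as iso-8859-1
would look roughly like this (the endpoint path and method body are only illustrative):

    import java.nio.charset.StandardCharsets;
    import javax.websocket.OnMessage;
    import javax.websocket.server.ServerEndpoint;

    @ServerEndpoint("/data")   // path is just an example
    public class Iso8859Endpoint {

        @OnMessage
        public void onBinaryMessage(byte[] payload) {
            // Decode explicitly as ISO-8859-1 instead of relying on
            // whatever default charset the platform/container uses.
            String text = new String(payload, StandardCharsets.ISO_8859_1);
            // ... handle text ...
        }
    }

On the JavaScript side, the matching step would be decoding the received ArrayBuffer with
a TextDecoder created for "iso-8859-1" (which browsers treat as windows-1252, a superset)
rather than the default "utf-8".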
But by doing so, you are only pushing the problem further into the future, because as far
as one can tell right now, the use of Unicode/UTF-8 will keep increasing, and the use of
the iso-8859-x character sets will keep decreasing over time.