Hopefully someone else can answer why the output of the str function differs. I suspect that in ClojureScript's case it is simply the default behavior, inherited from JavaScript, to display a character whose code point is in the range 128 through 255 as \x followed by two hex digits, whereas Clojure/JVM uses the character set encoding currently configured for the underlying JVM.
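If you need Clojure to read a string containing such escapes today, one possible workaround (just a sketch of mine, assuming the escapes are always of the two-digit \xNN form; the name x-escapes->u-escapes is my own) is to rewrite them into the \uXXXX form, which the Clojure reader already accepts, before handing the string to read-string:

(require '[clojure.string :as string])

;; Sketch only: rewrite two-digit \xNN escape sequences as the
;; equivalent \u00NN sequences that read-string understands.
(defn x-escapes->u-escapes [s]
  (string/replace s
                  #"\\x([0-9a-fA-F]{2})"
                  (fn [[_ hex]] (str "\\u00" hex))))

user=> (read-string (x-escapes->u-escapes "\"\\xF8\""))
"ø"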
As far as Clojure/JVM being able to read strings encoded in this way, there is an enhancement request ticket CLJ-1025 open, and there is recent discussion on the Clojure Dev group about whether this enhancement should be included in the yet-to-be-released Clojure 1.5: http://dev.clojure.org/jira/browse/CLJ-1025

Andy

On Oct 18, 2012, at 4:12 AM, Henrik Mohr wrote:

> Hi there!
>
> I'm wondering why ClojureScript seems to handle international characters
> differently from Clojure.
>
> Simple example in Clojure (= my preferred behaviour):
> user=> (str "ø")
> "ø"
>
> The same example in ClojureScript:
> ClojureScript:cljs.user> #_=> (str 'ø')
> "\xF8'"
>
> Can anyone explain to me why ClojureScript behaves like that?
>
> I need to send strings from ClojureScript to a remote service, so I need the
> output from ClojureScript to be straight UTF-8 encoded strings.
>
> Because when the (Clojure based) remote service receives the string from
> ClojureScript it doesn't decode it correctly with read-string:
> Exception: java.lang.RuntimeException: Unsupported escape character: \x
>
> Anyone?
>
> Thanks.
>
> Best regards,
> Henrik