On Sun, 12 Feb 2012 15:38:37 +1100, Chris Angelico wrote:

> Everything that displays text to a human needs to translate bytes into
> glyphs, and the usual way to do this conceptually is to go via
> characters. Pretending that it's all the same thing really means
> pretending that one byte represents one character and that each
> character is depicted by one glyph. And that's doomed to failure, unless
> everyone speaks English with no foreign symbols - so, no mathematical
> notations.
Pardon me, but you can't even write *English* in ASCII. You can't say that it cost you £10 to courier your résumé to the head office of Encyclopædia Britannica to apply for the position of Staff Coördinator. (Admittedly, the umlaut on the second "o" looks a bit stuffy and old-fashioned, but it is traditional English.)

Hell, you can't even write in *American*: you can't say that the recipe for the 20¢ WobblyBurger™ is © 2012 WobblyBurgerWorld Inc.

ASCII truly is a blight on the world, and the sooner it fades into obscurity, like EBCDIC, the better.

Even if everyone did change to speak ASCII, you still have all the historical records and documents and files to deal with. Encodings are not going away.

-- 
Steven
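For concreteness, here is a minimal Python 3 sketch of the point (the sample strings and the assumption of a UTF-8-capable terminal are purely for illustration): only the plain-ASCII word survives an ASCII round-trip, while UTF-8 handles every one of them.

# Minimal sketch: try each string in ASCII, fall back to showing its UTF-8 bytes.
samples = ["hamburger",            # plain ASCII, for contrast
           "£10", "résumé", "Encyclopædia", "Coördinator",
           "20¢", "WobblyBurger™", "© 2012"]

for text in samples:
    try:
        text.encode("ascii")
        print(text, "-> fits in ASCII")
    except UnicodeEncodeError:
        print(text, "-> no ASCII encoding; UTF-8 gives", text.encode("utf-8"))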