On 08/29/2012 07:40 AM, wxjmfa...@gmail.com wrote:
> <snip>

> Forget Python and all these benchmarks. The problem is on another
> level: coding schemes, typography, usage of characters, ... For a
> given coding scheme, all code points/characters are equivalent.
> Expecting to handle a sub-range of a coding scheme without breaking
> that coding scheme is impossible. If a coding scheme does not give
> satisfaction, the only valid solution is to create a new coding
> scheme: cp1252, mac-roman, EBCDIC, ... or the interesting "TeX" case,
> where the "internal" coding depends on the fonts! Unicode (utf***),
> as just another coding scheme, is no exception to this rule. This
> "Flexible String Representation" fails. Not only is it unable to
> stick to one coding scheme, it is a mixture of coding schemes, the
> worst of all possible implementations. jmf 

Nonsense.  The discussion was not about an encoding scheme, but about an
internal representation.  That representation does not change the
programmer's interface in any way other than performance (CPU and memory
usage).  Most of the rest of your babble is unsupported opinion.
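
For anyone following along, the distinction is easy to demonstrate. Under
PEP 393 (CPython 3.3+), a string is stored internally with 1, 2, or 4 bytes
per code point depending on the widest character it contains, but the
Python-level interface is identical for all three. A minimal sketch (the
exact byte counts below are CPython implementation details, so only the
ordering is checked):

```python
import sys

# Three strings of equal length whose widest code points differ:
ascii_s = "a" * 1000           # fits in 1 byte per code point
latin_s = "\u00e9" * 1000      # needs 2 bytes per code point
astral_s = "\U0001F600" * 1000 # needs 4 bytes per code point

# The programmer's interface is unchanged: length is always counted
# in code points, indexing and slicing behave identically.
assert len(ascii_s) == len(latin_s) == len(astral_s) == 1000

# Only the memory footprint differs with the internal representation.
print(sys.getsizeof(ascii_s) < sys.getsizeof(latin_s) < sys.getsizeof(astral_s))
```

No encoding scheme is involved anywhere above; encodings only enter the
picture when you call `str.encode()` to produce bytes.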

Plonk.



-- 

DaveA

-- 
http://mail.python.org/mailman/listinfo/python-list
