On Sun, 09 Jun 2013 03:44:57 -0700, Νικόλαος Κούρας wrote:

>>> Since 1 byte can hold up to 256 chars, why not utf-8 use 1-byte for
>>> values up to 256?
>
>> Because then how do you tell when you need one byte, and when you
>> need two? If you read two bytes, and see 0x4C 0xFA, does that mean
>> two characters, with ordinal values 0x4C and 0xFA, or one character
>> with ordinal value 0x4CFA?
>
> I mean utf-8 could use 1 byte for storing the 1st 256 characters. I
> meant up to 256, not above 256.
But then you've used up all 256 possible bytes for storing the first
256 characters, and there aren't any left for use in multi-byte
sequences. You need some means to distinguish between a single-byte
character and an individual byte within a multi-byte sequence.

UTF-8 does that by allocating specific ranges to specific purposes:
0x00-0x7F are single-byte characters, 0x80-0xBF are continuation bytes
of multi-byte sequences, and 0xC0-0xFF are leading bytes of multi-byte
sequences. This scheme has the advantage of making UTF-8
self-synchronising: if a byte is corrupted, added or removed, it will
only affect the character containing that particular byte; the decoder
can re-synchronise at the beginning of the following character. OTOH,
with encodings such as UTF-16, UTF-32 or ISO-2022, adding or removing
a byte will result in desynchronisation, with all subsequent
characters being corrupted.

> A surrogate pair is like hitting for example Ctrl-A, which means is a
> combination character that consists of 2 different characters? Is
> this what a surrogate is? a pair of 2 chars?

A surrogate pair is a pair of 16-bit codes used to represent a single
Unicode character whose code is greater than 0xFFFF. The 2048
codepoints from 0xD800 to 0xDFFF inclusive aren't used to represent
characters, but "surrogates".

Unicode characters with codes in the range 0x10000-0x10FFFF are
represented in UTF-16 as a pair of surrogates. First, 0x10000 is
subtracted from the code, giving a value in the range 0-0xFFFFF (20
bits). The top ten bits are added to 0xD800 to give a value in the
range 0xD800-0xDBFF, while the bottom ten bits are added to 0xDC00 to
give a value in the range 0xDC00-0xDFFF.

Because the codes used for surrogates aren't valid as individual
characters, scanning a string for a particular character won't
accidentally match part of a multi-word character.

> 'a' to be utf8 encoded needs 1 byte to be stored? (since ordinal = 65)
> 'α΄' to be utf8 encoded needs 2 bytes to be stored? (since ordinal
> is > 127) 'a chinese ideogramm' to be utf8 encoded needs 4 bytes to
> be stored? (since ordinal > 65000)

Most Chinese, Japanese and Korean (CJK) characters have codepoints
within the BMP (i.e. <= 0xFFFF), so they only require 3 bytes in
UTF-8. The codepoints above the BMP are mostly for archaic ideographs
(those no longer in normal use), mathematical symbols, dead languages,
etc.

> The amount of bytes needed to store a character solely depends on the
> character's ordinal value in the Unicode table?

Yes. UTF-8 is essentially a mechanism for representing 31-bit unsigned
integers such that smaller integers require fewer bytes than larger
integers (subsequent revisions of Unicode cap the range of possible
codepoints at 0x10FFFF, as that's all that UTF-16 can handle).
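To make the byte ranges described above concrete, here's a rough
sketch at the Python 3 interactive prompt ('a', 'α' and '中' are just
arbitrary example characters); it classifies each byte of a UTF-8
encoded string using those ranges:

    >>> for b in 'aα中'.encode('utf-8'):
    ...     if b < 0x80:
    ...         kind = 'single-byte character'
    ...     elif b < 0xC0:
    ...         kind = 'continuation byte'
    ...     else:
    ...         kind = 'leading byte'
    ...     print(hex(b), kind)
    ...
    0x61 single-byte character
    0xce leading byte
    0xb1 continuation byte
    0xe4 leading byte
    0xb8 continuation byte
    0xad continuation byte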
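The surrogate-pair arithmetic can be checked the same way. Taking
U+1F600 as an arbitrary example of a character above 0xFFFF, the hand
calculation and Python's own UTF-16 encoder agree (binascii is only
used here to print the result in hex):

    >>> code = 0x1F600 - 0x10000          # 20-bit value
    >>> hex(0xD800 + (code >> 10))        # top ten bits -> high surrogate
    '0xd83d'
    >>> hex(0xDC00 + (code & 0x3FF))      # bottom ten bits -> low surrogate
    '0xde00'
    >>> import binascii
    >>> binascii.hexlify('\U0001F600'.encode('utf-16-be'))
    b'd83dde00'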
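Finally, a minimal check of the bytes-per-character point, again with
characters I picked purely as examples (one each from the 1-, 2-, 3-
and 4-byte ranges):

    >>> for c in ('a', 'α', '中', '\U0001F600'):
    ...     n = len(c.encode('utf-8'))
    ...     print('U+%04X -> %d byte(s) in UTF-8' % (ord(c), n))
    ...
    U+0061 -> 1 byte(s) in UTF-8
    U+03B1 -> 2 byte(s) in UTF-8
    U+4E2D -> 3 byte(s) in UTF-8
    U+1F600 -> 4 byte(s) in UTF-8

The length jumps at 0x80, 0x800 and 0x10000, which is why 'α' needs
two bytes and most CJK characters need three.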