Tom Lane wrote:
> Andrew Dunstan <and...@dunslane.net> writes:
>> This isn't about the number of bytes, but about whether we should count characters encoded as two or more combined code points as a single char.

> It's really about whether we should support non-canonical encodings.
> AFAIK that's a hack to cope with implementations that are restricted
> to UTF-16, and we should Just Say No.  Clients that are sending these
> things converted to UTF-8 are in violation of the standard.
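
Just to be concrete about which sequences are at issue, here is what the
two forms of "e with acute" look like (a quick Python sketch, purely for
illustration; the counts are the same at the code-point level whatever
language you use):

    # Same user-perceived character, two distinct code point sequences.
    precomposed = "\u00e9"        # U+00E9 LATIN SMALL LETTER E WITH ACUTE
    decomposed  = "\u0065\u0301"  # U+0065 'e' + U+0301 COMBINING ACUTE ACCENT

    print(len(precomposed))            # 1 code point
    print(len(decomposed))             # 2 code points
    print(precomposed.encode("utf-8")) # b'\xc3\xa9'   -> 2 bytes
    print(decomposed.encode("utf-8"))  # b'e\xcc\x81'  -> 3 bytes

A code-point count gives 1 vs. 2, and the UTF-8 byte counts differ again,
which is exactly the ambiguity being discussed.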

I don't believe that the standard forbids the use of combining chars at all. RFC 3629 says:

  Security may also be impacted by a characteristic of several
  character encodings, including UTF-8: the "same thing" (as far as a
  user can tell) can be represented by several distinct character
  sequences.  For instance, an e with acute accent can be represented
  by the precomposed U+00E9 E ACUTE character or by the canonically
  equivalent sequence U+0065 U+0301 (E + COMBINING ACUTE).  Even though
  UTF-8 provides a single byte sequence for each character sequence,
  the existence of multiple character sequences for "the same thing"
  may have security consequences whenever string matching, indexing,
  searching, sorting, regular expression matching and selection are
  involved.  An example would be string matching of an identifier
  appearing in a credential and in access control list entries.  This
  issue is amenable to solutions based on Unicode Normalization Forms,
  see [UAX15].
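
And that is the case normalization addresses: NFC maps the combining
sequence onto the precomposed character, so the two forms compare equal
after normalizing. A minimal sketch, again in Python, using the standard
unicodedata module just to illustrate the idea:

    import unicodedata

    precomposed = "\u00e9"        # U+00E9
    decomposed  = "\u0065\u0301"  # U+0065 + U+0301

    # The raw code point sequences are distinct ...
    print(precomposed == decomposed)                               # False
    # ... but they are canonically equivalent, so NFC (or NFD) makes
    # string comparison behave the way a user expects.
    print(unicodedata.normalize("NFC", decomposed) == precomposed) # True
    print(unicodedata.normalize("NFD", precomposed) == decomposed) # True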


cheers

andrew



