> Isn't this essentially what the MULE internal encoding is?

No. MULE is not powerful enough, and it is overly complicated for
dealing with different encodings (character sets).

>> Currently no such universal encoding exists, so I think the only
>> way is to invent one ourselves.
> 
> This sounds like a terrible idea. In the future people are only going
> to want more advanced text functions, regular expressions, and
> indexing, and inventing encodings that exist nowhere else seems like a
> way to create a lot of work for little benefit.

That is probably a misunderstanding. We don't need to modify existing
text handling modules such as text functions, regular expressions,
indexing, and so on. We just convert from the "universal" encoding X
back to the original encoding before calling them. The conversion is
easy and fast because it only requires skipping the "encoding
identifier" and "encoding length" parts.

Basically, encoding X should be used only by the lower layer modules
of PostgreSQL; higher layer modules, such as those living in
src/backend/utils/adt, should not be aware of it. A sketch of that
boundary follows.
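
Reusing the hypothetical EncXDatum sketch above, and with an invented
stand-in name for a higher-layer function (not the real adt API), the
boundary could look like this; the existing function body never sees
encoding X:

/* existing higher-layer function, unchanged (name invented here) */
extern int adt_text_length(const char *s, size_t len);

/* lower-layer shim: strip the X header, then call the old code */
static int
lower_layer_text_length(const EncXDatum *d)
{
	size_t		len;
	const char *s = encx_payload(d, &len);

	return adt_text_length(s, len);
}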

> A better idea, it seems to me, is to (if postgres is configured
> appropriately) embed the non-round-trippable characters in the custom
> character part of the Unicode character set. In other words, adjust
> the mapping tables on demand, and voila.

Using Unicode requires encoding conversion overhead, because it needs
to look up mapping tables. That would be a huge handicap for large
data sets, and it is exactly what I want to avoid in the first place.
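
For comparison, here is a rough sketch of the per-character cost that
table-based conversion implies. The table layout and lookup strategy
are invented for illustration, but any mapping-table approach pays
something per character, whereas the header skip above is constant
time:

#include <stdlib.h>
#include <stdint.h>
#include <stddef.h>

typedef struct
{
	uint32_t	from;	/* code point in the source encoding */
	uint32_t	to;		/* corresponding Unicode code point */
} MapEntry;

static int
map_cmp(const void *key, const void *elem)
{
	uint32_t	k = *(const uint32_t *) key;
	uint32_t	f = ((const MapEntry *) elem)->from;

	return (k > f) - (k < f);
}

/* O(n log m): one binary-search table lookup per input character */
static void
convert_via_table(const uint32_t *in, uint32_t *out, size_t n,
				  const MapEntry *table, size_t m)
{
	for (size_t i = 0; i < n; i++)
	{
		const MapEntry *e = bsearch(&in[i], table, m,
									sizeof(MapEntry), map_cmp);

		out[i] = e ? e->to : 0xFFFD;	/* U+FFFD on a missing mapping */
	}
}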
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

