I have a small database (PgSQL 8.0, database encoding UTF8) that folks
are inserting into via a web form. The form itself is declared
ISO-8859-1 and, prior to inserting any data, pg_client_encoding is
set to LATIN1.
Wouldn't it be simpler to have the browser submit the form in UTF-8?
Most of the high-bit characters are correctly translated from LATIN1 to
UTF8. So for e-accent-aigu (é) I see the two-byte UTF-8 value in the database.
Sometimes, in their wisdom, people cut'n'paste information out of MSWord
and put that in the form.

Argh.

Instead of being mapped to two-byte UTF-8 high-bit equivalents, those
characters go into the database directly as one-byte values > 127. That
is, as illegal UTF-8 values.
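The difference is easy to see by encoding the same character both ways; a
minimal Python illustration, using é (U+00E9) as the example character:

```python
# 'é' (U+00E9) as Latin-1 is the single byte 0xE9;
# as UTF-8 it is the two-byte sequence 0xC3 0xA9.
latin1 = "é".encode("latin-1")
utf8 = "é".encode("utf-8")
print(latin1)  # b'\xe9'
print(utf8)    # b'\xc3\xa9'

# The lone Latin-1 byte is not a valid UTF-8 sequence:
try:
    latin1.decode("utf-8")
except UnicodeDecodeError as exc:
    print("illegal UTF-8:", exc.reason)
```

So any row containing a bare 0xE9 (or similar high byte) is exactly the
kind of data a strict UTF-8 decoder should reject.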
Sometimes you also get HTML entities in the mix. Who knows.
All my web forms are UTF-8 end to end; it just works. Was I lucky?
Normally Postgres rejects illegal UTF8 values; you shouldn't be able to
insert them...
When I try to dump'n'restore this database into PgSQL 8.2, my data can't
make the transit.
Firstly, is this "kinda sorta" encoding handling expected in 8.0, or did
I do something wrong?
Duh? pg isn't supposed to accept bad Unicode data... something
suspicious is going on.
Besides, if it was dumped, it should be reloadable... did pg_dump use a
funky encoding?
Secondly, anyone know any useful tools to pipe a stream through to strip
out illegal UTF8 bytes, so I can pipe my dump through that rather than
hand editing it?
Yes, use iconv (see man page), it can do this for you quite easily. It's
probably already installed on your system.
Be warned, though, that illegal multibyte characters eat quotes at night
while you aren't looking... unterminated strings are a pain.
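With iconv, the usual incantation is something like
`iconv -f UTF-8 -t UTF-8 -c dump.sql > clean.sql` (the `-c` flag drops
characters that can't be converted). As a sketch, the same filtering can
be done in Python by decoding leniently and re-encoding; the quoted
string below shows the "eaten quote" hazard, since the stray Latin-1
byte sits right next to a closing quote:

```python
def strip_illegal_utf8(raw: bytes) -> bytes:
    # Decode as UTF-8, silently dropping bytes that are not valid
    # UTF-8 (roughly what `iconv -c` does), then re-encode the
    # surviving text.
    return raw.decode("utf-8", errors="ignore").encode("utf-8")

# 0xE9 is illegal UTF-8; it gets dropped, the quotes survive.
dirty = b"'caf\xe9'"
print(strip_illegal_utf8(dirty))  # b"'caf'"
```

Note this throws the bad characters away rather than fixing them; if the
stray bytes are really Latin-1, converting them is usually preferable to
stripping them.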
You could also load your database with the C locale, and have a script
select the records you wish to convert and update the rows.
Python has very good Unicode support, should be easy to make such a
script.
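A sketch of the repair logic such a script might use, assuming any value
that fails strict UTF-8 decoding is really stray Latin-1:

```python
def repair(raw: bytes) -> str:
    """Decode a value that may be UTF-8 or stray Latin-1.

    Try strict UTF-8 first; on failure, fall back to Latin-1
    (every byte 0x00-0xFF is defined there, so the fallback
    cannot raise)."""
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        return raw.decode("latin-1")

print(repair(b"caf\xc3\xa9"))  # café (already valid UTF-8)
print(repair(b"caf\xe9"))      # café (was Latin-1)
```

Re-encoding the returned strings as UTF-8 before the UPDATE gives you
uniformly valid data for the 8.2 restore.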