On Mar 20, 2006, at 12:16 PM, [LoN]Kamikaze wrote:
If you make sure that your data goes into the database in a binary-safe
form (look for escape methods supplied by your favourite programming
language), it doesn't matter how the database is encoded, because you
will always get the data back the way you put it in.
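
For what it's worth, the usual binary-safe route is a parameterized query
rather than hand-rolled escaping. A minimal sketch with psycopg2; the table
and connection string below are made up for illustration:

import psycopg2

# Connection string and table name are hypothetical examples.
conn = psycopg2.connect("dbname=test")
cur = conn.cursor()
cur.execute("CREATE TABLE blobs (id serial PRIMARY KEY, data bytea)")

# Bytes that would break naive string interpolation.
payload = b"\x00\xff arbitrary bytes \xc3\xa9"

# The driver does the escaping/binary encoding; no manual quoting needed.
cur.execute("INSERT INTO blobs (data) VALUES (%s) RETURNING id",
            (psycopg2.Binary(payload),))
row_id = cur.fetchone()[0]

cur.execute("SELECT data FROM blobs WHERE id = %s", (row_id,))
assert bytes(cur.fetchone()[0]) == payload  # comes back exactly as it went in
conn.commit()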
I expect that to happen. What I'm more curious about is the
collating speed, i.e. how fast the sorting and string comparison
functions are. The claim here is that on *BSD these are somehow not
fast. I'm not sure if that is a BSD issue or a Postgres issue of
not taking advantage of the BSD functions properly.
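
One way to see whether libc collation itself is the slow part, independent
of Postgres, is to time locale-aware sorting directly. A rough sketch in
Python; the locale name is an assumption, use whatever your platform
actually provides:

import locale
import random
import time

# Generate some strings containing non-ASCII characters and shuffle them.
words = ["str%06d\u00e9" % i for i in range(200000)]
random.shuffle(words)

# Plain code-point comparison, no locale involved.
t0 = time.time()
sorted(words)
t1 = time.time()

# Locale-aware comparison goes through libc's strxfrm/strcoll.
# "en_US.UTF-8" is an assumed locale name; adjust for your system.
locale.setlocale(locale.LC_COLLATE, "en_US.UTF-8")
t2 = time.time()
sorted(words, key=locale.strxfrm)
t3 = time.time()

print("bytewise: %.2fs  locale-aware: %.2fs" % (t1 - t0, t3 - t2))

If the locale-aware sort turns out drastically slower on FreeBSD than on a
comparable Linux box, that would point at libc rather than Postgres.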
Vivek Khera wrote:
Reading through one of the Postgres mailing lists regarding which
character encoding to use for a database, someone chimed in and
claimed this:
Umm, you should choose an encoding supported by your platform and the
locales you use. For example, UTF-8 is a bad choice on *BSD because
there is no collation support for UTF-8 on those platforms. On
Linux/Glibc UTF-8 is well supported, but you need to make sure the
locale you initdb with is a UTF-8 locale. By and large postgres
correctly autodetects the encoding from the locale.
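
As a quick sanity check, you can ask the server what initdb actually ended
up with; a small sketch (psycopg2 again, connection string hypothetical):

import psycopg2

# Report the server-side encoding and collation/ctype locales,
# i.e. what initdb derived from the environment at cluster creation.
conn = psycopg2.connect("dbname=test")
cur = conn.cursor()
for setting in ("server_encoding", "lc_collate", "lc_ctype"):
    cur.execute("SHOW " + setting)
    print(setting, "=", cur.fetchone()[0])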
Is this an accurate claim for FreeBSD? I need to have a UTF-8-encoded
database in an upcoming project, and performance is always a concern.

Thanks.