> IMHO basic encoding information like name and id is not a problem.
> PQmblen() is the big problem. Strange question: is PQmblen() really
> necessary? I see it's used for result printing, but why doesn't the
> backend mark the size of each field (word) in the result? If the
> backend already knows the size of the data, why not se
On Thu, Jul 11, 2002 at 06:30:48PM +0900, Tatsuo Ishii wrote:
> > No, it's not a libpq problem, but more common "client/server" problem
> > IMO. It's very hard to share dynamically created object (info)
> > effectively between client and server.
>
> IMHO the server will keep dynamic objects and the client must ask the
> server for the wanted information.
I agree with
On Thu, Jul 11, 2002 at 05:52:18PM +0900, Tatsuo Ishii wrote:
> > pg_char_to_encoding() is already in libpq. Or am I missing something?
>
> It works with the encoding table (pg_enc2name_tbl), which is compiled
> into the backend and the client too. It means the number of encodings
> cannot change after compilation and you (the user) can't add a new encoding without
On Thu, Jul 11, 2002 at 05:26:01PM +0900, Tatsuo Ishii wrote:
> Where/how is the conversion between encoding id and encoding name
> described? (Maybe I'm overlooking something :-) I expect the new
> encoding system will be extendable and the encoding list will not be
> hardcoded like it is now. (extendable = add a new encoding without a PostgreSQL rebuild)
User defined charsets(encodin
On Thu, Jul 11, 2002 at 03:37:49PM +0900, Tatsuo Ishii wrote:
> > > CREATE FUNCTION function_for_LATIN1_to_UTF-8(opaque, opaque, integer)
> > > RETURNS integer;
> > > CREATE CONVERSION myconversion FOR 'LATIN1' TO 'UNICODE' FROM
> > > function_for_LATIN1_to_UTF-8;
> >
> > Hmm, but it requires defi
> > For example, if you want to define a function for LATIN1 to UNICODE
> > conversion, the function would look like:
> >
> > function_for_LATIN1_to_UTF-8(from_string opaque, to_string opaque, length
> > integer)
> > {
> > :
> > :
> > generic_function_using_iconv(from_str, to_str, "ISO-8859-1"
On Wed, 10 Jul 2002, Peter Eisentraut wrote:
> Sure. However, Tatsuo maintains that the customary Japanese character
> sets don't map very well with Unicode. Personally, I believe that this is
> an issue that should be fixed, not avoided, but I don't understand the
> issues well enough.
I hear
On Wed, 10 Jul 2002 08:21, Peter Eisentraut wrote:
> Hannu Krosing writes:
...
> > I would even recommend going a step further and storing all 'national'
> > character sets in unicode.
>
> Sure. However, Tatsuo maintains that the customary Japanese character
> sets don't map very well with Unicode.
Hannu Krosing writes:
> Can't we do all collating in unicode and convert charsets A and B to and
> from it?
>
> I would even recommend going a step further and storing all 'national'
> character sets in unicode.
Sure. However, Tatsuo maintains that the customary Japanese character
sets don't map very well with Unicode.
Thomas Lockhart writes:
> An aside: I was thinking about this some, from the PoV of using our
> existing type system to handle this (as you might remember, this is an
> inclination I've had for quite a while). I think that most things line
> up fairly well to allow this (and having transaction-en
On Tue, Jul 09, 2002 at 10:07:11AM +0900, Tatsuo Ishii wrote:
> > > Use a simple wrap function.
> >
> > How does this function know the to/from encoding?
>
> For example, if you want to define a function for LATIN1 to UNICODE
> conversion, the function would look like:
>
> function_for_LATIN1_to_UTF-8(from_st
> An aside: I was thinking about this some, from the PoV of using our
> existing type system to handle this (as you might remember, this is an
> inclination I've had for quite a while). I think that most things line
> up fairly well to allow this (and having transaction-enabled features
> may requ
On Tue, 2002-07-09 at 03:47, Tatsuo Ishii wrote:
> > An aside: I was thinking about this some, from the PoV of using our
> > existing type system to handle this (as you might remember, this is an
> > inclination I've had for quite a while). I think that most things line
> > up fairly well to allow
SQL99 allows on the fly encoding conversion:
CONVERT('aaa' USING myconv)
So there could be more than one conversion for a particular encoding
pair. This leads to an ambiguity about the "default" conversion used for
the frontend/backend automatic encoding conversion. Can we add a flag
indicating th
> > If so, what about the "coercibility" property?
> > The standard defines four distinct coercibility properties. So in
> > my example above, actually you are going to define 80 new types?
> > (also a collation could be either "PAD SPACE" or "NO PAD". So you
> > might have 160 new types).
>
> We
> I've been thinking about this for a while too. What about collation? If we add
> new charsets A and B, and each has 10 collations, then we are going to
> have 20 new types? That seems like overkill to me.
Well, afaict all of the operations we would ask of a type we will be
required to provide for character sets
> > (1) a CONVERSION can only be dropped by the superuser or its owner.
>
> Okay ...
>
> > (2) a grant syntax for CONVERSION is:
>
> > GRANT USAGE ON CONVERSION <conversion name> TO
> >     { <username> | GROUP <groupname> | PUBLIC } [, ...]
>
> No, I don't think a conversion has any privileges of its own at all.
> You either ha
> When you say "We do not yet implement the SQL99 forms of character
> support", I think you mean the ability to specify per column (or even
> per string) charset. I don't think this would happen for 7.3 (or 8.0,
> whatever), but sometime later I would like to make it a reality.
Right.
An aside: I w
> > Here is a proposal for new pg_conversion system table. Comments?
>
> I wonder if the encodings themselves shouldn't be represented in some
> system table, too. Admittedly, this is nearly orthogonal to the proposed
> system table, except perhaps the data type of the two encoding fields.
That
> > Use a simple wrap function.
>
> How does this function know the to/from encoding?
For example, if you want to define a function for LATIN1 to UNICODE
conversion, the function would look like:
function_for_LATIN1_to_UTF-8(from_string opaque, to_string opaque, length
integer)
{
:
:
generic_function_using_iconv(from_str, to_str, "ISO-8859-1"
> If so, what about the "coercibility" property?
> The standard defines four distinct coercibility properties. So in
> my example above, actually you are going to define 80 new types?
> (also a collation could be either "PAD SPACE" or "NO PAD". So you
> might have 160 new types).
Well, yes I supp
Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> I believe the spec just demands USAGE on the underlying function for
> the TRANSLATE case, and I don't see why it should be different for
> CONVERT. (In principle, if we didn't use a C-only API, you could
> just call the underlying function directly; so there's little point
> in having protecti
> Tatsuo, it seems that we should use SQL99 terminology and commands where
> appropriate. We do not yet implement the SQL99 forms of character
> support, and I'm not sure if our current system is modeled to fit the
> SQL99 framework. Are you suggesting CREATE CONVERSION to avoid
> infringing on SQ
Tatsuo Ishii writes:
> Here is a proposal for new pg_conversion system table. Comments?
I wonder if the encodings themselves shouldn't be represented in some
system table, too. Admittedly, this is nearly orthogonal to the proposed
system table, except perhaps the data type of the two encoding f
Thomas Lockhart writes:
> Tatsuo, it seems that we should use SQL99 terminology and commands where
> appropriate. We do not yet implement the SQL99 forms of character
> support, and I'm not sure if our current system is modeled to fit the
> SQL99 framework. Are you suggesting CREATE CONVERSION to
...
> So I withdraw my earlier comment. But perhaps the syntax of the proposed
> command could be aligned with the CREATE TRANSLATION command.
Tatsuo, it seems that we should use SQL99 terminology and commands where
appropriate. We do not yet implement the SQL99 forms of character
support, and I
Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> BTW, I wonder if we should invent new access privilege for conversion.
I believe the spec just demands USAGE on the underlying function for
the TRANSLATE case, and I don't see why it should be different for
CONVERT. (In principle, if we didn't use a C-o
Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> CATALOG(pg_conversion)
> {
>     NameData    conname;
>     Oid         connamespace;
>     int4        conowner;
>     int4        conforencoding;
>     int4        contoencoding;
>     Oid         conp
On Mon, Jul 08, 2002 at 09:59:44PM +0900, Tatsuo Ishii wrote:
> On Sun, Jul 07, 2002 at 12:58:07PM +0200, Peter Eisentraut wrote:
> > What would be really cool is if we could somehow reuse the conversion
> > modules provided by the C library and/or the iconv library. For example,
> ^^^
>
> Very good point.
On Sun, Jul 07, 2002 at 12:58:07PM +0200, Peter Eisentraut wrote:
> What would be really cool is if we could somehow reuse the conversion
> modules provided by the C library and/or the iconv library. For example,
^^^
Very good point. Why use own
Here is a proposal for new pg_conversion system table. Comments?
/*-------------------------------------------------------------------------
 *
 * pg_conversion.h
 *    definition of the system "conversion" relation (pg_conversion)
 *    along with the relation's initial contents.
 *
 *
> So I withdraw my earlier comment. But perhaps the syntax of the proposed
> command could be aligned with the CREATE TRANSLATION command.
Ok. What about this?
CREATE CONVERSION <conversion name>
    FOR <source encoding name>
    TO <destination encoding name>
    FROM <conversion function name>
DROP CONVERSION <conversion name>
BTW, I wonder if we should invent new access privilege f
Tom Lane writes:
> One thing that's really unclear to me is what's the difference between
> a <form-of-use conversion> and a <translation>, other than
> that they didn't provide a syntax for defining new conversions.
The standard has this messed up. In part 1, a form-of-use and an encoding
are two distinct things that can be appli
Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> I guess you mix up SQL99's "translate" and "convert".
No, I believe Peter has read the spec correctly. Further down they have
is a function for changing each character
of a given string according to some many-to-one or one-to-one
> Tatsuo Ishii writes:
>
> > > Also, is there anything in SQL99 that we ought to try to be
> > > compatible with?
> >
> > As far as I know there's no such equivalent in SQL99.
>
> Sure:
>
> 11.34
I guess you mix up SQL99's "translate" and "convert".
As far as I know, SQL99's "tr
Tatsuo Ishii writes:
> > Also, is there anything in SQL99 that we ought to try to be
> > compatible with?
>
> As far as I know there's no such equivalent in SQL99.
Sure:
11.34
Function
Define a character translation.
Format
<translation definition> ::=
Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> I am worried about that too. But if we stick to a C-level API, how can we
> define an argument data type suitable for a C string? I don't see such
> data types. Maybe you are suggesting that we should not use CREATE
> FUNCTION?
Well, you'd have to use the sa
> I see two different functions linked to from each pg_wchar_table
> entry... although perhaps those are associated with encodings
> not with conversions.
Yes, those are not directly associated with conversions.
> IIRC the existing conversion functions deal in C string pointers and
> lengths. I
Tatsuo Ishii wrote:
> Here is my proposal for new CREATE CONVERSION which makes it possible
> to define new encoding conversion mapping between two encodings on the
> fly.
>
> The background:
>
> We are getting more and more encoding conversion tables. Up to
> now, they reach 385352 so
> > CREATE CONVERSION <conversion name>
> >    SOURCE <source encoding name>
> >    DESTINATION <destination encoding name>
> >    FROM <conversion function name>
>
> Doesn't a conversion currently require several support functions?
> How much overhead will you be adding to funnel them all through
> one function?
No, only one function is sufficient. What else do you think of?
Tatsuo Ishii <[EMAIL PROTECTED]> writes:
>> Doesn't a conversion currently require several support functions?
>> How much overhead will you be adding to funnel them all through
>> one function?
> No, only one function is sufficient. What else do you think of?
I see two different functions linked
Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> Syntax proposal:
> CREATE CONVERSION <conversion name>
>    SOURCE <source encoding name>
>    DESTINATION <destination encoding name>
>    FROM <conversion function name>
Doesn't a conversion currently require several support functions?
How much overhead will you be adding to funnel them all through
one function?
Basically I'd lik