On Tue, May 1, 2012 at 12:22 PM, Andrew Dunstan wrote:
> Second, RFC 4627 is absolutely clear: a valid JSON value can only be an
> object or an array, so this thing about converting arbitrary datum values to
> JSON is a fantasy. If anything, we should adjust the JSON input routines to
> disallow anything that is not an object or an array.
On Tue, May 1, 2012 at 8:02 AM, Hannu Krosing wrote:
> Hi hackers
>
> After playing around with array_to_json() and row_to_json() functions a
> bit, I have a question - why do we even have 2 variants of *_to_json()?
Here's the discussion where that decision was made:
http://archives.postgresql.org
libpq has functions for escaping values in SQL commands
(PQescapeStringConn, PQescapeByteaConn, and the new PQescapeLiteral),
and it supports parameterizing queries with PQexecParams. But it does
not (to my knowledge) have functions for escaping values for COPY
FROM.
COPY FROM is useful for inserting rows in bulk.
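The escaping rules for COPY's default text format are documented (tab delimiter, newline row terminator, backslash escapes, `\N` for NULL), so a client-side escaper is straightforward to sketch. The function names below are my own, not libpq's; this is a minimal illustration of the rules, not a proposed API:

```python
# Sketch of client-side escaping for PostgreSQL's COPY ... FROM text format.
# Escape rules (backslash, tab, newline, carriage return; \N for NULL) are
# from the COPY documentation; the helper names here are hypothetical.

def copy_escape_field(value):
    """Escape one value for the tab-separated COPY text format."""
    if value is None:
        return r'\N'                      # unquoted \N marks SQL NULL
    return (value.replace('\\', '\\\\')   # backslash first, so the later
                 .replace('\t', '\\t')    # escapes are not double-escaped
                 .replace('\n', '\\n')
                 .replace('\r', '\\r'))

def copy_row(fields):
    """Join escaped fields into one COPY text-format line."""
    return '\t'.join(copy_escape_field(f) for f in fields) + '\n'

print(copy_row(['a\tb', None, 'line1\nline2']))
```

Note the order of replacements: the backslash must be escaped first, or the backslashes introduced by the other substitutions would themselves get doubled.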
On Tue, Jan 31, 2012 at 1:29 PM, Abhijit Menon-Sen wrote:
> At 2012-01-31 12:04:31 -0500, robertmh...@gmail.com wrote:
>>
>> That fails to answer the question of what we ought to do if we get an
>> invalid sequence there.
>
> I think it's best to categorically reject invalid surrogates as early as possible.
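The validation being argued for can be sketched outside the patch's C code. A high surrogate (U+D800..U+DBFF) is only valid when immediately followed by a low surrogate (U+DC00..U+DFFF), and a lone low surrogate is never valid; rejecting violations at parse time looks like this (illustrative Python, my own function name):

```python
# Early validation of surrogate sequences in \uXXXX escapes: a high
# surrogate must be followed by a low surrogate, and a lone low
# surrogate is rejected outright.

def decode_u_escapes(units):
    """Combine 16-bit code units into code points, rejecting invalid
    surrogate sequences as early as possible."""
    out, i = [], 0
    while i < len(units):
        u = units[i]
        if 0xD800 <= u <= 0xDBFF:                  # high surrogate
            if i + 1 >= len(units) or not (0xDC00 <= units[i + 1] <= 0xDFFF):
                raise ValueError('unpaired high surrogate')
            lo = units[i + 1]
            out.append(0x10000 + ((u - 0xD800) << 10) + (lo - 0xDC00))
            i += 2
        elif 0xDC00 <= u <= 0xDFFF:                # lone low surrogate
            raise ValueError('unpaired low surrogate')
        else:
            out.append(u)
            i += 1
    return out

# \uD834\uDD1E is the pair for U+1D11E (MUSICAL SYMBOL G CLEF)
print(hex(decode_u_escapes([0xD834, 0xDD1E])[0]))  # 0x1d11e
```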
On Sat, Jan 14, 2012 at 3:06 PM, Andrew Dunstan wrote:
> Second, what should be do when the database encoding isn't UTF8? I'm
> inclined to emit a \u escape for any non-ASCII character (assuming it
> has a unicode code point - are there any code points in the non-unicode
> encodings that don't have one?)
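Emitting a \u escape for every non-ASCII character is exactly what Python's stdlib json module does by default, which makes the trade-off easy to see: the escaped form survives any ASCII-compatible transcoding, at the cost of readability and size:

```python
import json

# With ensure_ascii=True (the default), every non-ASCII character is
# emitted as a \uXXXX escape, so the output is pure ASCII and survives
# transcoding to any ASCII-compatible encoding.
s = 'café \u2013 naïve'
print(json.dumps(s))                      # all non-ASCII escaped
print(json.dumps(s, ensure_ascii=False))  # raw characters instead
```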
I wrote an array_to_json function during GSoC 2010:
http://git.postgresql.org/gitweb/?p=json-datatype.git;a=blob;f=json_io.c#l289
It's not exposed as a procedure called array_to_json: it's part of the
to_json function, which decides what to do based on the argument type.
- Joey
This may be ambitious, but it'd be neat if PostgreSQL supported
parameterizable types. For example, suppose a contrib module defines
a "pair" type. It could be used as follows:
CREATE TABLE my_table (
    coord pair(float, float)
);
The "pair" module could define functions like these: …
On Fri, Dec 16, 2011 at 8:52 AM, Robert Haas wrote:
> But I think the important point is that this is an obscure corner case. Let
> me say that one
more time: obscure corner case!
+1
> The only reason JSON needs to care about this at all is that it allows
> \u1234 to mean Unicode code point 0x1234.
On Mon, Dec 5, 2011 at 3:12 PM, Bruce Momjian wrote:
> Where are we with adding JSON for Postgres 9.2? We got bogged down in
> the data representation last time we discussed this.
We should probably have a wiki page titled "JSON datatype status" to
help break the cycle we're in:
* Someone asks …
On Mon, Jul 25, 2011 at 1:05 AM, Joey Adams wrote:
> Should we mimic IEEE floats and preserve -0 versus +0 while treating
> them as equal? Or should we treat JSON floats like numeric and
> convert -0 to 0 on input? Or should we do something else? I think
> converting -0 to 0 would be the simplest.
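The distinction in question is easy to reproduce: IEEE 754 keeps -0.0 and 0.0 separate even though they compare equal, and a JSON parser that maps numbers onto floats preserves the sign. A sketch of what a canonicalizing input routine would have to do (using Python's json module purely for illustration):

```python
import json
import math

# -0.0 and 0.0 compare equal, but the sign bit survives parsing...
neg = json.loads('-0.0')
assert neg == 0.0
print(math.copysign(1.0, neg))        # the sign is still there: -1.0

# ...so canonicalizing -0 to 0 takes an explicit step; adding +0.0
# is the classic normalization (-0.0 + 0.0 yields +0.0).
canonical = neg + 0.0
print(math.copysign(1.0, canonical))  # 1.0
```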
On Sun, Jul 24, 2011 at 2:19 PM, Florian Pflug wrote:
> The downside being that we'd then either need to canonicalize in
> the equality operator, or live with either no equality operator or
> a rather strange one.
It just occurred to me that, even if we sort object members, texteq
might not be a correct equality test.
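Concretely: two texts can denote the same JSON value, with members already in the same order, and still differ byte-for-byte, because number spellings and \u escapes are not unique. A small demonstration (Python's json module stands in for the value-level comparison):

```python
import json

# Same JSON value, same member order, different bytes: a byte-wise
# comparison (the analogue of texteq) would call these unequal.
a = '{"k": 10, "s": "A"}'
b = '{"k": 1E1, "s": "\\u0041"}'

assert json.loads(a) == json.loads(b)   # equal as JSON values
assert a != b                           # unequal as text
```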
On Sun, Jul 24, 2011 at 2:19 PM, Florian Pflug wrote:
> On Jul24, 2011, at 05:14 , Robert Haas wrote:
>> On Fri, Jul 22, 2011 at 10:36 PM, Joey Adams wrote:
>>> ... Fortunately, JSON's definition of a
>>> "number" is its decimal syntax, so the …
Also, should I forbid the escape \u0000 (in all database encodings)?
Pros:
* If \u0000 is forbidden, and the server encoding is UTF-8, then
every JSON-wrapped string will be convertible to TEXT.
* It will be consistent with the way PostgreSQL already handles text,
and with the decision to use …
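The crux is easy to show: \u0000 is perfectly legal JSON, but the decoded string then contains an embedded NUL, which PostgreSQL's text type cannot store. A quick illustration with Python's json module:

```python
import json

# \u0000 is valid JSON syntax, but decoding it produces a string with a
# real NUL character in it, which cannot be stored in a PostgreSQL text
# value; hence the proposal to reject the escape up front.
s = json.loads('"a\\u0000b"')
assert '\x00' in s     # the decoded string embeds a NUL
assert len(s) == 3     # 'a', NUL, 'b'
```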
On Fri, Jul 22, 2011 at 7:12 PM, Robert Haas wrote:
> Hmm. That's tricky. I lean mildly toward throwing an error as being
> more consistent with the general PG philosophy.
I agree. Besides, throwing an error on duplicate keys seems like the
most logical thing to do. The most compelling reason …
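The default behavior most parsers ship with is "last duplicate wins", which silently loses data; the error-throwing behavior being agreed on here can be sketched with Python's json module and its `object_pairs_hook` (the hook function name is my own):

```python
import json

# Default: json.loads silently keeps the last duplicate.
# With object_pairs_hook we can throw an error instead, which is the
# behavior being argued for in the thread.
def reject_duplicates(pairs):
    seen = {}
    for key, value in pairs:
        if key in seen:
            raise ValueError('duplicate key: %r' % key)
        seen[key] = value
    return seen

print(json.loads('{"a": 1, "a": 2}'))   # {'a': 2} -- last one wins
try:
    json.loads('{"a": 1, "a": 2}', object_pairs_hook=reject_duplicates)
except ValueError as e:
    print(e)                            # duplicate key: 'a'
```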
I think I've decided to only allow escapes of non-ASCII characters
when the database encoding is UTF8. For example, $$"\u2013"$$::json
will fail if the database encoding is WIN1252, even though WIN1252 can
encode U+2013 (EN DASH). This may be somewhat draconian, given that:
* SQL_ASCII can otherwise …
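The WIN1252 example can be checked directly: U+2013 maps to byte 0x96 in that encoding, so the escaped character really could round-trip through the database encoding, which is what makes the restriction draconian. A quick verification (using Python's cp1252 codec as a stand-in for the server-side conversion):

```python
# U+2013 (EN DASH) is representable in WIN1252 (cp1252 byte 0x96), so
# rejecting $$"\u2013"$$::json under that encoding refuses a character
# that the database encoding could in fact store and round-trip.
b = '\u2013'.encode('cp1252')
print(b)                                  # b'\x96'
assert b.decode('cp1252') == '\u2013'     # round-trips cleanly
```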
On Wed, Jul 20, 2011 at 6:49 AM, Florian Pflug wrote:
> Hm, I agree that we need to handle \uXXXX escapes in JSON input.
> We won't ever produce them during output though, right?
We could, to prevent transcoding errors if the client encoding is
different than the server encoding (and neither is SQL_ASCII).
On Wed, Jul 20, 2011 at 12:32 AM, Robert Haas wrote:
>> Thanks for the input. I'm leaning in this direction too. However, it
>> will be a tad tricky to implement the conversions efficiently, ...
>
> I'm a bit confused, because I thought what I was talking about was not
> doing any conversions in …
Forwarding because the mailing list rejected the original message.
-- Forwarded message --
From: Joey Adams
Date: Tue, Jul 19, 2011 at 11:23 PM
Subject: Re: Initial Review: JSON contrib module was: Re: [HACKERS]
Another swing at JSON
To: Alvaro Herrera
Cc: Florian Pflug, Tom Lane
On Mon, Jul 18, 2011 at 7:36 PM, Florian Pflug wrote:
> On Jul19, 2011, at 00:17 , Joey Adams wrote:
>> I suppose a simple solution would be to convert all escapes and
>> outright ban escapes of characters not in the database encoding.
>
> +1. Making JSON work like TEXT when …
On Mon, Jul 18, 2011 at 3:19 PM, Tom Lane wrote:
> BTW, could the \u problem be finessed by leaving such escapes in
> source form?
Yes, it could. However, it doesn't solve the problem of comparison
(needed for member lookup), which requires canonicalizing the strings
to be compared.
Here's …
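Why leaving escapes in source form doesn't help member lookup: the stored key and the probe key only match after both are canonicalized, i.e. after the escapes are decoded. A small illustration (Python's json module does the decoding here):

```python
import json

# A key stored in source form never matches the user's probe string
# byte-wise; the escape must be decoded (canonicalized) first.
stored_source = '"\\u0041"'   # key as it appeared in the JSON source
probe = 'A'                   # key the user looks up

assert stored_source.strip('"') != probe   # raw source form never matches
assert json.loads(stored_source) == probe  # canonical forms do
```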
On Mon, Jul 4, 2011 at 10:22 PM, Joseph Adams wrote:
> I'll try to submit a revised patch within the next couple days.
Sorry this is later than I said.
I addressed the issues covered in the review. I also fixed a bug
where "\u0022" would become """, which is invalid JSON, causing an
assertion failure.
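The bug is easy to see in miniature: naively splicing the decoded form of \u0022 back into a string literal yields """, which is invalid JSON. A correct encoder re-escapes the quote character when serializing the decoded value, as Python's json module does:

```python
import json

# \u0022 decodes to the quote character; a naive unescape-in-place would
# produce """ (invalid JSON), while a correct encoder re-escapes it.
decoded = json.loads('"\\u0022"')
assert decoded == '"'
assert json.dumps(decoded) == '"\\""'   # the quote goes back out escaped
```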