On Jul25, 2011, at 02:03 , Florian Pflug wrote:
> On Jul25, 2011, at 00:48 , Joey Adams wrote:
>> Should we follow the JavaScript standard for rendering numbers (which
>> my suggestion approximates)? Or should we use the shortest encoding
>> as Florian suggests?
>
> In the light of the above, con
On Jul25, 2011, at 07:35 , Joey Adams wrote:
> On Mon, Jul 25, 2011 at 1:05 AM, Joey Adams
> wrote:
>> Should we mimic IEEE floats and preserve -0 versus +0 while treating
>> them as equal? Or should we treat JSON floats like numeric and
>> convert -0 to 0 on input? Or should we do something el
On Mon, Jul 25, 2011 at 1:05 AM, Joey Adams wrote:
> Should we mimic IEEE floats and preserve -0 versus +0 while treating
> them as equal? Or should we treat JSON floats like numeric and
> convert -0 to 0 on input? Or should we do something else? I think
> converting -0 to 0 would be a bad idea
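The IEEE behavior under discussion (keeping the sign of -0 on input while treating it as equal to +0) can be sketched with Python's json module, used here only as a neutral reference point, not as PostgreSQL's implementation:

```python
import json
import math

neg = json.loads("-0.0")
pos = json.loads("0.0")

assert neg == pos                        # IEEE: -0.0 and +0.0 compare equal
assert math.copysign(1.0, neg) == -1.0   # yet the sign bit is preserved
assert json.dumps(neg) == "-0.0"         # and survives re-serialization
```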
On Sun, Jul 24, 2011 at 2:19 PM, Florian Pflug wrote:
> The downside being that we'd then either need to canonicalize in
> the equality operator, or live with either no equality operator or
> a rather strange one.
It just occurred to me that, even if we sort object members, texteq
might not be a
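The concern is easy to make concrete: two JSON texts can denote the same number while differing byte for byte, so byte-wise equality over sorted members still falls short of semantic equality. A minimal Python illustration:

```python
import json

a, b = "1e2", "100"   # distinct texts, same JSON number

assert a != b                           # texteq-style byte comparison: unequal
assert json.loads(a) == json.loads(b)   # value comparison: equal (both 100)
```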
On Jul25, 2011, at 00:48 , Joey Adams wrote:
> On Sun, Jul 24, 2011 at 2:19 PM, Florian Pflug wrote:
>> On Jul24, 2011, at 05:14 , Robert Haas wrote:
>>> On Fri, Jul 22, 2011 at 10:36 PM, Joey Adams
>>> wrote:
... Fortunately, JSON's definition of a
"number" is its decimal syntax, so t
On Sat, Jul 23, 2011 at 11:14 PM, Robert Haas wrote:
> I doubt you're going to want to reinvent TOAST, ...
I was thinking about making it efficient to access or update
foo.a.b.c.d[1000] in a huge JSON tree. Simply TOASTing the varlena
text means we have to unpack the entire datum to access and u
On Jul24, 2011, at 05:14 , Robert Haas wrote:
> On Fri, Jul 22, 2011 at 10:36 PM, Joey Adams
> wrote:
>> Interesting. This leads to a couple more questions:
>>
>> * Should the JSON data type (eventually) have an equality operator?
>
> +1.
+1.
>> * Should the JSON input function alphabetize
On Fri, Jul 22, 2011 at 10:36 PM, Joey Adams wrote:
> Interesting. This leads to a couple more questions:
>
> * Should the JSON data type (eventually) have an equality operator?
+1.
> * Should the JSON input function alphabetize object members by key?
I think it would probably be better if i
Also, should I forbid the escape \u0000 (in all database encodings)?
Pros:
* If \u0000 is forbidden, and the server encoding is UTF-8, then
every JSON-wrapped string will be convertible to TEXT.
* It will be consistent with the way PostgreSQL already handles text,
and with the decision to use
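Assuming the escape under discussion is \u0000 (NUL), the TEXT-convertibility point can be illustrated in Python:

```python
import json

# JSON itself happily represents an escaped NUL inside a string...
s = json.loads('"a\\u0000b"')
assert s == "a\x00b"
assert len(s) == 3

# ...but PostgreSQL's text type cannot contain a NUL byte, so this
# JSON string has no lossless TEXT representation.
```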
On Fri, Jul 22, 2011 at 7:12 PM, Robert Haas wrote:
> Hmm. That's tricky. I lean mildly toward throwing an error as being
> more consistent with the general PG philosophy.
I agree. Besides, throwing an error on duplicate keys seems like the
most logical thing to do. The most compelling reason
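For contrast, most existing parsers silently keep the last duplicate rather than raising an error; Python's json module behaves this way:

```python
import json

# Duplicate keys are accepted and the last value wins, silently.
obj = json.loads('{"a": 1, "a": 2}')
assert obj == {"a": 2}
```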
On Jul23, 2011, at 00:04 , Joey Adams wrote:
> I think I've decided to only allow escapes of non-ASCII characters
> when the database encoding is UTF8. For example, $$"\u2013"$$::json
> will fail if the database encoding is WIN1252, even though WIN1252 can
> encode U+2013 (EN DASH). This may be s
On Jul23, 2011, at 01:12 , Robert Haas wrote:
> On Fri, Jul 22, 2011 at 6:04 PM, Joey Adams
> wrote:
>> On another matter, should the JSON type guard against duplicate member
>> keys? The JSON RFC says "The names within an object SHOULD be
>> unique," meaning JSON with duplicate members can be c
On Fri, Jul 22, 2011 at 7:16 PM, Jan Urbański wrote:
> On 23/07/11 01:12, Robert Haas wrote:
>> On Fri, Jul 22, 2011 at 6:04 PM, Joey Adams
>> wrote:
>>> On another matter, should the JSON type guard against duplicate member
>>> keys? The JSON RFC says "The names within an object SHOULD be
>>>
On 23/07/11 01:12, Robert Haas wrote:
> On Fri, Jul 22, 2011 at 6:04 PM, Joey Adams
> wrote:
>> On another matter, should the JSON type guard against duplicate member
>> keys? The JSON RFC says "The names within an object SHOULD be
>> unique," meaning JSON with duplicate members can be considere
On Fri, Jul 22, 2011 at 6:04 PM, Joey Adams wrote:
> On another matter, should the JSON type guard against duplicate member
> keys? The JSON RFC says "The names within an object SHOULD be
> unique," meaning JSON with duplicate members can be considered valid.
> JavaScript interpreters (the ones I
I think I've decided to only allow escapes of non-ASCII characters
when the database encoding is UTF8. For example, $$"\u2013"$$::json
will fail if the database encoding is WIN1252, even though WIN1252 can
encode U+2013 (EN DASH). This may be somewhat draconian, given that:
* SQL_ASCII can othe
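The draconian part is easy to verify: U+2013 really is representable in WIN1252 (cp1252), so the rejection is about escape handling, not encodability. A quick Python check:

```python
# EN DASH (U+2013) has a cp1252 encoding (byte 0x96)...
assert "\u2013".encode("cp1252") == b"\x96"

# ...yet under the proposed rule, $$"\u2013"$$::json is still rejected
# whenever the server encoding is anything other than UTF-8.
```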
On Wed, Jul 20, 2011 at 6:49 AM, Florian Pflug wrote:
> Hm, I agree that we need to handle \u escapes in JSON input.
> We won't ever produce them during output though, right?
We could, to prevent transcoding errors if the client encoding is
different than the server encoding (and neither is S
On Jul20, 2011, at 06:40 , Joey Adams wrote:
> On Wed, Jul 20, 2011 at 12:32 AM, Robert Haas wrote:
>>> Thanks for the input. I'm leaning in this direction too. However, it
>>> will be a tad tricky to implement the conversions efficiently, ...
>>
>> I'm a bit confused, because I thought what I
On Wed, Jul 20, 2011 at 12:32 AM, Robert Haas wrote:
>> Thanks for the input. I'm leaning in this direction too. However, it
>> will be a tad tricky to implement the conversions efficiently, ...
>
> I'm a bit confused, because I thought what I was talking about was not
> doing any conversions in
On Tue, Jul 19, 2011 at 9:03 PM, Joey Adams wrote:
> On Mon, Jul 18, 2011 at 7:36 PM, Florian Pflug wrote:
>> On Jul19, 2011, at 00:17 , Joey Adams wrote:
>>> I suppose a simple solution would be to convert all escapes and
>>> outright ban escapes of characters not in the database encoding.
>>
>>
Marc says it is now fixed.
---
>
> >
> > -- Forwarded message --
> > From: Joey Adams
> > Date: Tue, Jul 19, 2011 at 11:23 PM
> > Subject: Re: Initial Review: JSON contrib modul was: Re: [HACKERS]
> > Another swing at JSON
> > To: Alvaro Herrera
> >
From: Joey Adams
> Date: Tue, Jul 19, 2011 at 11:23 PM
> Subject: Re: Initial Review: JSON contrib modul was: Re: [HACKERS]
> Another swing at JSON
> To: Alvaro Herrera
> Cc: Florian Pflug , Tom Lane , Robert
> Haas , Bernd Helmle ,
> Dimitri Fontaine , David Fetter
> , Josh
Forwarding because the mailing list rejected the original message.
-- Forwarded message --
From: Joey Adams
Date: Tue, Jul 19, 2011 at 11:23 PM
Subject: Re: Initial Review: JSON contrib modul was: Re: [HACKERS]
Another swing at JSON
To: Alvaro Herrera
Cc: Florian Pflug , Tom
Excerpts from Joey Adams's message of Tue Jul 19 21:03:15 -0400 2011:
> On Mon, Jul 18, 2011 at 7:36 PM, Florian Pflug wrote:
> > On Jul19, 2011, at 00:17 , Joey Adams wrote:
> >> I suppose a simple solution would be to convert all escapes and
> >> outright ban escapes of characters not in the dat
On Mon, Jul 18, 2011 at 7:36 PM, Florian Pflug wrote:
> On Jul19, 2011, at 00:17 , Joey Adams wrote:
>> I suppose a simple solution would be to convert all escapes and
>> outright ban escapes of characters not in the database encoding.
>
> +1. Making JSON work like TEXT when it comes to encoding i
On Jul19, 2011, at 00:17 , Joey Adams wrote:
> I suppose a simple solution would be to convert all escapes and
> outright ban escapes of characters not in the database encoding.
+1. Making JSON work like TEXT when it comes to encoding issues
makes this all much simpler conceptually. It also avoids
On Mon, Jul 18, 2011 at 3:19 PM, Tom Lane wrote:
> BTW, could the \u problem be finessed by leaving such escapes in
> source form?
Yes, it could. However, it doesn't solve the problem of comparison
(needed for member lookup), which requires canonicalizing the strings
to be compared.
Here's
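The member-lookup problem can be sketched in Python: the same key escaped two different ways compares unequal as raw text, so lookups only work after both sides are canonicalized:

```python
import json

a = '{"caf\\u00e9": 1}'   # key written with a \u escape
b = '{"caf\u00e9": 1}'    # key written with a literal U+00E9

assert a != b                           # raw source texts differ
assert json.loads(a) == json.loads(b)   # canonicalized values match
```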
On Mon, Jul 18, 2011 at 3:19 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Fri, Jul 15, 2011 at 3:56 PM, Joey Adams
>> wrote:
>>> I'm having a really hard time figuring out how the JSON module should
>>> handle non-Unicode character sets.
>
>> But, again, why not just forget about transcoding
Robert Haas writes:
> On Fri, Jul 15, 2011 at 3:56 PM, Joey Adams
> wrote:
>> I'm having a really hard time figuring out how the JSON module should
>> handle non-Unicode character sets.
> But, again, why not just forget about transcoding and define it as
> "JSON, if you happen to be using utf-8
On Fri, Jul 15, 2011 at 3:56 PM, Joey Adams wrote:
> On Mon, Jul 4, 2011 at 10:22 PM, Joseph Adams
> wrote:
>> I'll try to submit a revised patch within the next couple days.
>
> Sorry this is later than I said.
>
> I addressed the issues covered in the review. I also fixed a bug
> where "\u0022
On Mon, Jul 4, 2011 at 10:22 PM, Joseph Adams
wrote:
> I'll try to submit a revised patch within the next couple days.
Sorry this is later than I said.
I addressed the issues covered in the review. I also fixed a bug
where "\u0022" would become """, which is invalid JSON, causing an
assertion f
On 7/4/11 7:22 PM, Joseph Adams wrote:
> I'll try to submit a revised patch within the next couple days.
So? New patch?
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
Thanks for reviewing my patch!
On Mon, Jul 4, 2011 at 7:10 AM, Bernd Helmle wrote:
> +comment = 'data type for storing and manipulating JSON content'
>
> I'm not sure if "manipulating" is a correct description. Maybe I missed it,
> but I didn't see functions to manipulate JSON strings directly,
--On 18 June 2011 12:29:38 +0200 Bernd Helmle wrote:
Similar problems occur with a couple other modules I tried (hstore,
intarray).
Hmm, works for me. Seems you have messed up your installation in some way
(build against current -HEAD but running against a 9.1?).
I'm going to review in th
--On 17 June 2011 18:06:58 -0400 Joseph Adams
wrote:
Done. Note that this module builds, tests, and installs successfully
with USE_PGXS=1. However, building without USE_PGXS=1 produces the
following:
CREATE EXTENSION json;
ERROR: incompatible library "/usr/lib/postgresql/json.s
On Fri, Jun 17, 2011 at 2:29 AM, Bernd Helmle wrote:
> Joseph, are you able to remove the compatibility code for this CF?
Done. Note that this module builds, tests, and installs successfully
with USE_PGXS=1. However, building without USE_PGXS=1 produces the
following:
CREATE EXTENSION json
--On 16 June 2011 17:38:07 -0400 Tom Lane wrote:
After reading Joseph's comment upthread, I don't see any consensus
whether the existing pre-9.1 support is required or even desired. Maybe
I missed it, but do we really expect an extension (or contrib module)
to be backwards compatible to earli
Bernd Helmle writes:
> After reading Joseph's comment upthread, I don't see any consensus
> whether the existing pre-9.1 support is required or even desired. Maybe
> I missed it, but do we really expect an extension (or contrib module)
> to be backwards compatible to earlier major releases, when sh
--On 29 March 2011 21:15:11 -0400 Joseph Adams
wrote:
Thanks. I applied a minor variation of this trick to the JSON module,
so now it builds/installs/tests cleanly on both REL8_4_0 and HEAD
(though it won't work if you copy contrib/json into a pre-9.1
PostgreSQL source directory and type `
Tom Lane writes:
> So, I'm interested in trying to improve this, but it looks like a
> research project from here.
True: I don't have a baked solution that we would just need to
apply. The simplest idea I can think of is forcing make install before
building contribs so that PGXS works “normally”
Tom Lane writes:
> Dimitri Fontaine writes:
>> This and removing module_pathname in the control files to just use
>> $libdir/contrib in the .sql files. That would set a better example to
>> people who want to make their own extensions, as the general case is
>> that those will not get into contr
Dimitri Fontaine writes:
> This and removing module_pathname in the control files to just use
> $libdir/contrib in the .sql files. That would set a better example to
> people who want to make their own extensions, as the general case is
> that those will not get into contrib.
I'm not sure it's a
Andrew Dunstan writes:
> On 03/30/2011 12:29 PM, Dimitri Fontaine wrote:
>> Andrew Dunstan writes:
>>> I think we're pretty much down to only fixing bugs now, for 9.1, and this
>>> isn't a bug, however inconvenient it might be.
>> It's not just inconvenient, it's setting a bad example for people
Excerpts from Dimitri Fontaine's message of Wed Mar 30 15:05:36 -0300 2011:
> Andrew Dunstan writes:
> > I don't have any objection to putting some comments in the contrib Makefiles
> > telling people to use PGXS, but I don't think that at this stage of the
> > cycle we can start work on something
Andrew Dunstan writes:
> I don't have any objection to putting some comments in the contrib Makefiles
> telling people to use PGXS, but I don't think that at this stage of the
> cycle we can start work on something that so far is just an idea from the
> top of my head.
I might be mistaken on how
On 03/30/2011 12:29 PM, Dimitri Fontaine wrote:
Andrew Dunstan writes:
I think we're pretty much down to only fixing bugs now, for 9.1, and this
isn't a bug, however inconvenient it might be.
It's not just inconvenient, it's setting a bad example for people to
work on their own extensions.
Andrew Dunstan writes:
> I think we're pretty much down to only fixing bugs now, for 9.1, and this
> isn't a bug, however inconvenient it might be.
It's not just inconvenient, it's setting a bad example for people to
work on their own extensions. It's more than unfortunate. I will
prepare a doc
On 03/30/2011 11:37 AM, Dimitri Fontaine wrote:
Andrew Dunstan writes:
Maybe we could teach pg_config to report appropriate settings for
uninstalled source via an environment setting. Without changing pg_config it
looks like we have a chicken and egg problem.
(If my suggestion is right, this
Andrew Dunstan writes:
> Maybe we could teach pg_config to report appropriate settings for
> uninstalled source via an environment setting. Without changing pg_config it
> looks like we have a chicken and egg problem.
>
> (If my suggestion is right, this would probably be a good beginner TODO.)
I
On 03/30/2011 09:42 AM, David Fetter wrote:
In http://archives.postgresql.org/pgsql-hackers/2009-07/msg00245.php
Tom writes:
The main reason contrib still has the alternate method is that PGXS
doesn't really work until after you've installed the core build.
Maybe we could have a look and tr
On Wed, Mar 30, 2011 at 10:32:55AM -0300, Alvaro Herrera wrote:
> Excerpts from Alvaro Herrera's message of Wed Mar 30 10:27:39 -0300 2011:
> > Excerpts from Dimitri Fontaine's message of Wed Mar 30 05:27:07 -0300 2011:
>
> > > I'm not sure why we still support the pre-PGXS build recipe in the
> >
Excerpts from Alvaro Herrera's message of Wed Mar 30 10:27:39 -0300 2011:
> Excerpts from Dimitri Fontaine's message of Wed Mar 30 05:27:07 -0300 2011:
> > I'm not sure why we still support the pre-PGXS build recipe in the
> > contrib Makefiles, and didn't want to change that as part as the
> > ex
On Wed, Mar 30, 2011 at 10:27:07AM +0200, Dimitri Fontaine wrote:
> I think we should lower the differences between contrib and external
> extensions, so that contrib is only about who maintains the code and
> distribute the extension.
+10 :)
Cheers,
David.
--
David Fetter http://fetter.org/
P
Excerpts from Dimitri Fontaine's message of Wed Mar 30 05:27:07 -0300 2011:
> Alvaro Herrera writes:
> > Why are you worrying with the non-PGXS build chain anyway? Just assume
> > that the module is going to be built with PGXS and things should just
> > work.
> >
> > We've gone over this a dozen
Alvaro Herrera writes:
> Why are you worrying with the non-PGXS build chain anyway? Just assume
> that the module is going to be built with PGXS and things should just
> work.
>
> We've gone over this a dozen times in the past.
+1
I'm not sure why we still support the pre-PGXS build recipe in t
Excerpts from Joseph Adams's message of Tue Mar 29 22:15:11 -0300 2011:
> On Tue, Mar 29, 2011 at 4:02 PM, Dimitri Fontaine
> wrote:
> > Here's the ugly trick from ip4r, that's used by more extensions:
> >
> > PREFIX_PGVER = $(shell echo $(VERSION) | awk -F. '{ print $$1*100+$$2 }')
>
> Thanks. I
On Tue, Mar 29, 2011 at 4:02 PM, Dimitri Fontaine
wrote:
> Here's the ugly trick from ip4r, that's used by more extensions:
>
> PREFIX_PGVER = $(shell echo $(VERSION) | awk -F. '{ print $$1*100+$$2 }')
Thanks. I applied a minor variation of this trick to the JSON module,
so now it builds/installs
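The awk expression maps a version string X.Y.Z to the integer X*100 + Y, so a Makefile can compare versions numerically. A Python re-check of the arithmetic (the function name is ours, for illustration only):

```python
def prefix_pgver(version: str) -> int:
    """Mirror the awk program: first component * 100 + second component."""
    major, minor = version.split(".")[:2]
    return int(major) * 100 + int(minor)

assert prefix_pgver("9.1.4") == 901   # 9.1.x -> 901
assert prefix_pgver("8.4.0") == 804   # 8.4.x -> 804
```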
On Tue, Mar 29, 2011 at 02:56:52PM -0400, Joseph Adams wrote:
> On Tue, Mar 29, 2011 at 2:42 PM, Dimitri Fontaine
> wrote:
> >> Also, should uninstall_json.sql be named something else, like
> >> json--uninstall--0.1.sql ?
> >
> > You don't need no uninstall script no more, try DROP EXTENSION json;
Joseph Adams writes:
> would be needlessly error-prone. I'm thinking the pg_config utility
> should either make PG_VERSION_NUM (e.g. 90100) available, or it should
> define something indicating the presence of the new extension system.
Here's the ugly trick from ip4r, that's used by more extensi
On Tue, Mar 29, 2011 at 2:42 PM, Dimitri Fontaine
wrote:
> Joseph Adams writes:
>> It would be nice if I could make a Makefile conditional that skips the
>> relocatable test and loads init-pre9.1.sql if the new extension
>> interface isn't available. Is there a Makefile variable or something
>>
On Tue, Mar 29, 2011 at 2:42 PM, Dimitri Fontaine
wrote:
>> Also, should uninstall_json.sql be named something else, like
>> json--uninstall--0.1.sql ?
>
> You don't need no uninstall script no more, try DROP EXTENSION json; and
> DROP EXTENSION json CASCADE;
It's there for pre-9.1, where DROP EX
Joseph Adams writes:
> It would be nice if I could make a Makefile conditional that skips the
> relocatable test and loads init-pre9.1.sql if the new extension
> interface isn't available. Is there a Makefile variable or something
> I can use to do this?
You can use VERSION and MAJORVERSION vari
Joseph Adams writes:
> Done. The new extension interface isn't exactly compatible with the
> old, so I dropped support for PostgreSQL 8.4 from the module. I
> suppose I could maintain a back-ported json module separately.
In fact it is, but there's some history hiding the fact. I'm overdue to
On 3/28/11 10:21 AM, Joseph Adams wrote:
> Currently, there are no functions for converting to/from PostgreSQL
> values or getting/setting sub-values (e.g. JSONPath). However, I did
> adapt the json_stringify function written by Itagaki Takahiro in his
> patch ( http://archives.postgresql.org/pgsq
On Mon, Mar 28, 2011 at 2:03 PM, Joseph Adams
wrote:
> On Mon, Mar 28, 2011 at 1:48 PM, Robert Haas wrote:
>> On Mon, Mar 28, 2011 at 1:21 PM, Joseph Adams
>> wrote:
>>> Attached is a patch that adds a 'json' contrib module. Although we
>>> may want a built-in JSON data type in the near future,
On Mon, Mar 28, 2011 at 1:48 PM, Robert Haas wrote:
> On Mon, Mar 28, 2011 at 1:21 PM, Joseph Adams
> wrote:
>> Attached is a patch that adds a 'json' contrib module. Although we
>> may want a built-in JSON data type in the near future, making it a
>> module (for the time being) has a couple adv
On Mon, Mar 28, 2011 at 1:21 PM, Joseph Adams
wrote:
> Attached is a patch that adds a 'json' contrib module. Although we
> may want a built-in JSON data type in the near future, making it a
> module (for the time being) has a couple advantages:
Is this something you'd hope to get committed at s