Yes, I meant equivalence in the round-trip conversion sense. And of course the "feature complete" solution, which could handle deeply nested structures, would be really nice to have.
Best Regards
Dan S

2016-02-23 21:11 GMT+01:00 David G. Johnston <david.g.johns...@gmail.com>:

> On Tue, Feb 23, 2016 at 12:54 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
>> Dan S <strd...@gmail.com> writes:
>> > I have this table, data and query:
>>
>> > create table test
>> > (
>> >     id int,
>> >     txt text,
>> >     txt_arr text[],
>> >     f float
>> > );
>>
>> > insert into test
>> > values
>> >     (1,'jkl','{abc,def,fgh}',3.14159),
>> >     (2,'hij','{abc,def,fgh}',3.14159),
>> >     (2,null,null,null),
>> >     (3,'def',null,0);
>>
>> > select j, json_populate_record(null::test, j)
>> > from
>> > (
>> >     select to_json(t) as j from test t
>> > ) r;
>>
>> > ERROR:  malformed array literal: "["abc","def","fgh"]"
>> > DETAIL:  "[" must introduce explicitly-specified array dimensions.
>>
>> > Is it a bug or how am I supposed to use the populate function?
>>
>> AFAICS, json_populate_record has no intelligence about nested container
>> situations.  It'll basically just push the JSON text representation of any
>> field of the top-level object at the input converter for the corresponding
>> composite-type column.  That doesn't work if you're trying to convert a
>> JSON array to a Postgres array, and it wouldn't work for sub-object to
>> composite column either, because of syntax discrepancies.
>>
>> Ideally this would work for arbitrarily-deeply-nested array+record
>> structures, but it looks like a less than trivial amount of work to make
>> that happen.
>>
>> > If I try an equivalent example with hstore it works well.
>>
>> hstore hasn't got any concept of substructure in its field values, so
>> it's hard to see how you'd create an "equivalent" situation.
>
> Equivalent in the "ability to round-trip" sense.  Since hstore doesn't
> have nested containers, internal serialization of a record to hstore is
> forced to "stringify" the array, which can then be fed back in as-is.
> But the [row_]to_json logic converts the PostgreSQL arrays to JSON arrays,
> and then we fail to handle them on the return portion of the trip.
>
> Arrays are likely to be a much more common scenario, but I agree that
> dealing with arbitrary depths and objects would make the feature complete.
>
> And yes, back-patching (and ideally behavior changing) should only occur
> for situations that today raise errors - as the example does.
>
> David J.
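Until json_populate_record learns about nested containers, one workaround for the example in this thread is to expand the JSON array by hand instead of letting the raw JSON text hit the array column's input converter. A sketch against the `test` table from Dan's message (it relies on json_array_elements_text and json_typeof, both available since 9.4; the json_typeof guard for the rows where txt_arr is null is my addition, since json_array_elements_text raises an error on a JSON scalar):

```sql
select (j->>'id')::int   as id,
       j->>'txt'         as txt,
       -- unnest the JSON array and re-aggregate it as a Postgres text[]
       case when json_typeof(j->'txt_arr') = 'array'
            then (select array_agg(e)
                  from json_array_elements_text(j->'txt_arr') as e)
       end               as txt_arr,
       (j->>'f')::float  as f
from (select to_json(t) as j from test t) r;
```

This only covers one level of nesting for this particular table shape; the general fix discussed above (teaching json_populate_record about arbitrarily nested arrays and objects) would make such per-column unpacking unnecessary.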