On Mon, Sep 15, 2014 at 4:09 PM, Josh Berkus <j...@agliodbs.com> wrote:
> On 09/15/2014 10:23 AM, Claudio Freire wrote:
>> Now, large small keys could be 200 or 2000, or even 20k. I'd guess
>> several should be tested to find the shape of the curve.
>
> Well, we know that it's not noticeable with 200, and that it is
> noticeable with 100K. It's only worth testing further if we think that
> having more than 200 top-level keys in one JSONB value is going to be a
> use case for more than 0.1% of our users. I personally do not.
Yes, but bear in mind that the worst case lands exactly on the use case jsonb was designed to speed up: element access within relatively big json documents.

Having those documents sit uncompressed is to be expected, because people who choose jsonb will often favor speed over compactness when the two trade off (otherwise they'd use plain json). So while you're right that this is probably beyond the common use case, putting the tipping point only "somewhere between 200 and 100K" keys seems overly imprecise to me.
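
A rough sketch of how the shape of that curve could be probed (untested; the table and key names below are made up, and pg_column_size() is used because it reports the stored, post-TOAST size):

-- Untested sketch; table and key names are arbitrary.
CREATE TABLE jsonb_size_test (nkeys int, doc_jsonb jsonb, doc_json json);

-- One row per document size, each document holding N small scalar keys.
INSERT INTO jsonb_size_test
SELECT n,
       (SELECT json_object_agg('key_' || i, i)
          FROM generate_series(1, n) AS i)::jsonb,
       (SELECT json_object_agg('key_' || i, i)
          FROM generate_series(1, n) AS i)
  FROM unnest(ARRAY[200, 2000, 20000, 100000]) AS n;

-- pg_column_size() shows the on-disk (possibly compressed) size, so
-- comparing the two columns shows where jsonb stops compressing as
-- well as plain json does.
SELECT nkeys,
       pg_column_size(doc_jsonb) AS jsonb_bytes,
       pg_column_size(doc_json)  AS json_bytes
  FROM jsonb_size_test
 ORDER BY nkeys;

Adding a few more entries to that array (2000, 5000, 10000, ...) should show where the jsonb/json size ratio starts to diverge.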