On Tue, May 22, 2012 at 10:43 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
> On 22 May 2012 18:35, Josh Berkus <j...@agliodbs.com> wrote:
>>
>>> If I have a customer with 1 database per user, how do they run a query
>>> against 100 user tables? It would require 100 connections to the
>>> database. Doing that would require roughly x100 the planning time and
>>> x100 the connection delay. Larger SQL statements pass their results
>>> between executor steps using libpq rather than direct calls.
>>
>> Why is this hypothetical customer using separate databases? This really
>> seems like a case of "doctor, it hurts when I do this".
>
> Databases are great for separating things, but sometimes you want to
> un-separate them in a practical way.
In my experience, these un-separations are (thankfully) relieved of the
requirement of consistency between databases, and so the contract is much
more favorable.

The planning-time problem is quite hard. However, I think the
connection-delay problem is easier to answer: multiplexed protocols are
going to become the norm in the near future (they have been a pretty
uncontested part of the SPDY protocol, for example, after flow control was
added), and they have a number of useful properties. It may be time to
consider how we're going to divorce the notion that one socket implies
exactly one backend.

--
fdr
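P.S. To make the "one socket need not imply one backend" idea concrete,
here is a rough sketch of the kind of framing a multiplexed protocol could
use. The frame layout and the names (mux_frame_header, read_full,
dispatch_to_session) are invented for illustration only; this is not a
proposal for the actual FE/BE protocol.

    /*
     * Hypothetical sketch: frames on a single socket carry a channel id,
     * so several logical sessions can share one connection.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <arpa/inet.h>      /* ntohl */

    typedef struct
    {
        uint32_t channel;       /* which logical session owns this frame */
        uint32_t length;        /* payload bytes that follow the header */
    } mux_frame_header;

    /* Read exactly "len" bytes, or return -1 on error/EOF. */
    static int
    read_full(int fd, void *buf, size_t len)
    {
        size_t done = 0;

        while (done < len)
        {
            ssize_t n = read(fd, (char *) buf + done, len - done);

            if (n <= 0)
                return -1;
            done += n;
        }
        return 0;
    }

    /* Placeholder: hand the payload to whatever represents the session. */
    static void
    dispatch_to_session(uint32_t channel, const char *payload, uint32_t len)
    {
        printf("channel %u: %u payload bytes\n", channel, len);
    }

    /* Demultiplexing loop: one socket, many logical sessions. */
    static void
    mux_loop(int sock)
    {
        for (;;)
        {
            mux_frame_header hdr;
            char *payload;

            if (read_full(sock, &hdr, sizeof(hdr)) < 0)
                break;
            hdr.channel = ntohl(hdr.channel);
            hdr.length = ntohl(hdr.length);  /* real code would bound this */

            payload = malloc(hdr.length + 1);
            if (payload == NULL ||
                (hdr.length > 0 && read_full(sock, payload, hdr.length) < 0))
            {
                free(payload);
                break;
            }
            dispatch_to_session(hdr.channel, payload, hdr.length);
            free(payload);
        }
    }

A real design would also need per-channel flow control (the part SPDY
found it needed), but the point is just that demultiplexing by channel id
is cheap compared to paying for a fresh connection, and a fresh backend,
per logical session.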