On Thu, Jun 9, 2011 at 11:13 AM, Alvaro Herrera
<alvhe...@commandprompt.com> wrote:
> Excerpts from Robert Haas's message of jue jun 09 10:55:45 -0400 2011:
>> On Thu, Jun 9, 2011 at 10:34 AM, Alvaro Herrera
>> <alvhe...@commandprompt.com> wrote:
>
>> > Slower than sleeping? Consider that this doesn't need to be done for
>> > each record insertion, only when you need to flush (maybe more than
>> > that, but I think that's the lower limit).
>>
>> Maybe. I'm worried that if someone jacks up max_connections to 1000
>> or 5000 or somesuch it could get pretty slow.
>
> Well, other things are going to get pretty slow as well, not just this
> one, which is why we suggest using a connection pooler with a reasonable
> limit.
>
> On the other hand, maybe those are things we ought to address sometime,
> so perhaps we don't want to be designing the old limitation into a new
> feature.
>
> A possibly crazy idea: instead of having a MaxBackends-sized array, how
> about some smaller array of insert-done-pointer-updating backends (a
> couple dozen or so), and if it's full, the next one has to sleep a bit
> until one of them becomes available. We could protect this with a
> PGSemaphore having as many counts as items are in the array.
Maybe. It would have to be structured in such a way that you didn't
perform a system call in the common case, I think.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
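
For illustration only, a rough, hypothetical sketch of the scheme being
discussed: a small pool of insert slots (a couple dozen rather than
MaxBackends) guarded by a counting semaphore initialized to the pool size,
with a non-blocking fast path so the common case stays out of the kernel.
This is not PostgreSQL code; it uses a process-private POSIX semaphore
instead of PGSemaphore, and N_INSERT_SLOTS, slot_acquire(), and
slot_release() are invented names.

#include <errno.h>
#include <semaphore.h>
#include <stdio.h>

#define N_INSERT_SLOTS 24		/* "a couple dozen or so" */

static sem_t slot_sem;			/* counts free entries in the small array */

static void
slot_pool_init(void)
{
	/*
	 * One semaphore count per array entry, as suggested above.  A real
	 * implementation would put this in shared memory.
	 */
	sem_init(&slot_sem, 0, N_INSERT_SLOTS);
}

static void
slot_acquire(void)
{
	/* Fast path: grab a free slot without blocking. */
	if (sem_trywait(&slot_sem) == 0)
		return;

	/* Slow path: all slots are busy, so sleep until one is released. */
	while (sem_wait(&slot_sem) != 0)
	{
		if (errno != EINTR)		/* retry only if interrupted by a signal */
			break;
	}
}

static void
slot_release(void)
{
	/* Give the slot back and wake one sleeper, if any. */
	sem_post(&slot_sem);
}

int
main(void)
{
	slot_pool_init();

	slot_acquire();
	/* ... advance the insert-done pointer here ... */
	slot_release();

	printf("slot acquired and released\n");
	return 0;
}

The point of the sem_trywait() fast path is that, on futex-based
implementations such as glibc on Linux, it succeeds entirely in user space
when a count is available; only when every slot is busy does the caller pay
for the blocking sem_wait(), which matches the "sleep a bit until one of
them becomes available" idea above.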