On Thu, Jun 9, 2011 at 10:34 AM, Alvaro Herrera
<alvhe...@commandprompt.com> wrote:
> Excerpts from Robert Haas's message of jue jun 09 10:28:39 -0400 2011:
>> On Thu, Jun 9, 2011 at 10:22 AM, Alvaro Herrera
>> <alvhe...@commandprompt.com> wrote:
>> >> 1. Subdivide XLOG insertion into three operations: (1) allocate space
>> >> in the log buffer, (2) copy the log records into the allocated space,
>> >> and (3) release the space to the buffer manager for eventual write to
>> >> disk.  AIUI, WALInsertLock currently covers all three phases of this
>> >> operation, but phase 2 can proceed in parallel.  It's pretty easy to
>> >> imagine maintaining one pointer that references the next available byte
>> >> of log space (let's call this the "next insert" pointer), and a second
>> >> pointer that references the byte following the last byte known to be
>> >> written (let's call this the "insert done" pointer).
>> >
>> > I think this can be done more simply if instead of a single "insert
>> > done" pointer you have an array of them, one per backend; there's also a
>> > global pointer that can be advanced per the minimum of the bunch, which
>> > you can calculate with some quick locking of the array.  You don't need
>> > to sleep at all, except to update the array and calculate the global
>> > ptr, so this is probably also faster.
>>
>> I think looping over an array with one entry per backend is going to
>> be intolerably slow... but it's possible I'm wrong.
>
> Slower than sleeping?  Consider that this doesn't need to be done for
> each record insertion, only when you need to flush (maybe more than
> that, but I think that's the lower limit).
Maybe.  I'm worried that if someone jacks up max_connections to 1000 or
5000 or somesuch it could get pretty slow.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company