On Tue, Jan 14, 2014 at 2:17 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Tue, Jan 14, 2014 at 12:15 PM, Claudio Freire <klaussfre...@gmail.com>
> wrote:
>> On Tue, Jan 14, 2014 at 2:12 PM, Robert Haas <robertmh...@gmail.com> wrote:
>>> In terms of avoiding double-buffering, here's my thought after reading
>>> what's been written so far. Suppose we read a page into our buffer
>>> pool. As long as the page is clean, it would be ideal for the mapping
>>> to be shared between the buffer cache and our pool, sort of like
>>> copy-on-write. That way, if we decide to evict the page, it will
>>> still be in the OS cache if we end up needing it again (remember, the
>>> OS cache is typically much larger than our buffer pool). But if the
>>> page is dirtied, then instead of copying it, just have the buffer pool
>>> forget about it, because at that point we know we're going to write
>>> the page back out anyway before evicting it.
>>>
>>> This would be pretty similar to copy-on-write, except without the
>>> copying. It would just be forget-from-the-buffer-pool-on-write.
>>
>> But... either copy-on-write or forget-on-write needs a page fault, and
>> thus a page mapping.
>>
>> Is a page fault more expensive than copying 8k?
>
> I don't know either. I wasn't thinking so much that it would save CPU
> time as that it would save memory. Consider a system with 32GB of
> RAM. If you set shared_buffers=8GB, then in the worst case you've got
> 25% of your RAM wasted storing pages that already exist, dirtied, in
> shared_buffers. It's easy to imagine scenarios in which that results
> in lots of extra I/O, so that the CPU required to do the accounting
> comes to seem cheap by comparison.
Not necessarily: you pay the CPU cost of a page fault (at least on the
first write to the buffer) every time a page is promoted into shared
buffers. It's like a tiered cache, and when promotion is expensive, one
must be careful. The traffic between L0 (shared buffers) and L1 (the OS
page cache) will be considerable even if everything fits in RAM. I guess
it's the constant battle between inclusive and exclusive caches.
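
For what it's worth, a ballpark answer to "is a page fault more expensive
than copying 8k" can be had with something like the sketch below (purely
illustrative, not anything in the tree): it maps a file MAP_PRIVATE, times
a memcpy() of every 8 kB block, and then times the first write to each
block, which is what triggers the copy-on-write fault. It assumes Linux
and a pre-created "testfile" of at least NPAGES * 8 kB; the file name and
page count are arbitrary choices.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define BLCKSZ 8192
#define NPAGES 10000

static double
elapsed_us(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e6 + (b.tv_nsec - a.tv_nsec) / 1e3;
}

int
main(void)
{
    /* "testfile" must already exist and be at least NPAGES * BLCKSZ long. */
    int     fd = open("testfile", O_RDWR);
    char   *buf = malloc(BLCKSZ);
    char   *map;
    struct timespec t0, t1;

    if (fd < 0 || buf == NULL)
    {
        perror("setup");
        return 1;
    }
    map = mmap(NULL, (size_t) NPAGES * BLCKSZ, PROT_READ | PROT_WRITE,
               MAP_PRIVATE, fd, 0);
    if (map == MAP_FAILED)
    {
        perror("mmap");
        return 1;
    }

    /* Cost of copying each 8 kB page out of the mapping (double buffering). */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < NPAGES; i++)
        memcpy(buf, map + (size_t) i * BLCKSZ, BLCKSZ);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("memcpy 8kB: %.3f us/page\n", elapsed_us(t0, t1) / NPAGES);

    /*
     * Cost of the first write to each private page, i.e. the copy-on-write
     * fault. The loop above already faulted every page in for reading, so
     * this measures just the write fault plus the kernel's private copy.
     */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < NPAGES; i++)
        map[(size_t) i * BLCKSZ] = 1;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("CoW fault:  %.3f us/page\n", elapsed_us(t0, t1) / NPAGES);

    return 0;
}

Something like "dd if=/dev/zero of=testfile bs=8192 count=10000" creates
the input, and cc -O2 builds it. The second loop's per-page number is the
cost we'd be paying on every promotion under either the copy-on-write or
the forget-on-write scheme.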