Pavan Deolasee <pavan.deola...@gmail.com> writes:
> What if we remember the buffers as seen by count_nondeletable_pages() and
> then just discard those specific buffers instead of scanning the entire
> shared_buffers again?
That's an idea.

> Surely we revisit all to-be-truncated blocks before
> actual truncation. So we already know which buffers to discard. And we're
> holding exclusive lock at that point, so nothing can change underneath. Of
> course, we can't really remember a large number of buffers, so we can do
> this in small chunks. Hm?

We're deleting the last N consecutive blocks, so it seems like we just
need to think in terms of clearing that range.  I think this can just be
a local logic change inside DropRelFileNodeBuffers().

You could optimize it fairly easily with some heuristic that compares
N to sizeof shared buffers; if it's too large a fraction, the existing
implementation will be cheaper than a bunch of hashtable probes.

			regards, tom lane
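For concreteness, here is a rough sketch of what such a range-clearing
loop inside bufmgr.c might look like: probe the buffer-mapping hashtable
once per to-be-truncated block, and bail out to the existing full scan
when the range is too large a fraction of shared_buffers.  The bufmgr.c
helpers used here (INIT_BUFFERTAG, BufTableHashCode, BufTableLookup,
BufMappingPartitionLock, LockBufHdr, InvalidateBuffer) exist, but the
function itself, its return-false fallback convention, and the
NBuffers/32 cutoff are illustrative assumptions, not a tested patch.

#include "postgres.h"

#include "storage/buf_internals.h"
#include "storage/bufmgr.h"

/*
 * Sketch: drop buffers for blocks [firstDelBlock, nblocks) of one fork by
 * probing the buffer mapping table per block.  Returns false if the range
 * is too large a fraction of shared_buffers, in which case the caller
 * should use the existing full scan instead.  (Function name, return
 * convention, and the NBuffers/32 cutoff are illustrative assumptions.)
 */
static bool
DropRelFileNodeBuffersByRange(RelFileNode rnode, ForkNumber forkNum,
							  BlockNumber nblocks, BlockNumber firstDelBlock)
{
	BlockNumber curBlock;

	/* Heuristic: for a big range, the full scan is cheaper than probing. */
	if (nblocks - firstDelBlock >= (BlockNumber) (NBuffers / 32))
		return false;

	for (curBlock = firstDelBlock; curBlock < nblocks; curBlock++)
	{
		BufferTag	bufTag;
		uint32		bufHash;
		LWLock	   *partitionLock;
		int			buf_id;
		BufferDesc *bufHdr;
		uint32		buf_state;

		/* Build the tag and look the block up in the mapping hashtable. */
		INIT_BUFFERTAG(bufTag, rnode, forkNum, curBlock);
		bufHash = BufTableHashCode(&bufTag);
		partitionLock = BufMappingPartitionLock(bufHash);

		LWLockAcquire(partitionLock, LW_SHARED);
		buf_id = BufTableLookup(&bufTag, bufHash);
		LWLockRelease(partitionLock);

		if (buf_id < 0)
			continue;			/* block is not in shared_buffers */

		bufHdr = GetBufferDescriptor(buf_id);

		/*
		 * Recheck under the buffer header spinlock: the buffer could have
		 * been evicted and reused for some other page after we released
		 * the partition lock.
		 */
		buf_state = LockBufHdr(bufHdr);
		if (RelFileNodeEquals(bufHdr->tag.rnode, rnode) &&
			bufHdr->tag.forkNum == forkNum &&
			bufHdr->tag.blockNum >= firstDelBlock)
			InvalidateBuffer(bufHdr);	/* releases the spinlock */
		else
			UnlockBufHdr(bufHdr, buf_state);
	}

	return true;
}

A caller could try this path first and fall back to the existing
scan-everything implementation when it returns false, which keeps the
large-N case on the cheaper code path as described above.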